Sample records for average generalization error

  1. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  2. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
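
    The series expansion and Meijer's G-function analysis above are not reproduced here, but the averaging step itself is easy to illustrate. The sketch below is a hedged stand-in: it Monte Carlo-averages the symbol error rate of 16-QAM over random channel gains (a lognormal draw is assumed in place of the paper's turbulence-plus-misalignment PDF) and compares it with the average of the exact AWGN Q-function expression at each instantaneous SNR; the SNR value and fading parameters are illustrative only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- assumptions (not from the paper): 16-QAM, lognormal irradiance as a
# stand-in for the turbulence + pointing-error channel it analyses ---
M = 16
levels = np.array([-3, -1, 1, 3])
const = np.array([i + 1j * q for i in levels for q in levels])
const /= np.sqrt(np.mean(np.abs(const) ** 2))      # unit average symbol energy

snr_db = 15.0                                      # nominal electrical SNR before fading
snr = 10 ** (snr_db / 10)
n_channels, n_symbols = 2000, 2000

def qfunc(x):
    return norm.sf(x)

def awgn_ser(snr_inst, m=M):
    """Exact SER of square M-QAM in AWGN at instantaneous SNR."""
    p = 2 * (1 - 1 / np.sqrt(m)) * qfunc(np.sqrt(3 * snr_inst / (m - 1)))
    return 1 - (1 - p) ** 2

ser_mc, ser_q = [], []
for _ in range(n_channels):
    h = rng.lognormal(mean=-0.1, sigma=0.45)       # assumed fading sample
    snr_inst = snr * h ** 2
    # Monte Carlo over symbols for this channel state
    sym = rng.integers(0, M, n_symbols)
    x = const[sym]
    noise = (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols)) / np.sqrt(2)
    y = h * x + noise / np.sqrt(snr)
    det = np.argmin(np.abs(y[:, None] / h - const[None, :]) ** 2, axis=1)
    ser_mc.append(np.mean(det != sym))
    ser_q.append(awgn_ser(snr_inst))

print(f"average SER, Monte Carlo       : {np.mean(ser_mc):.3e}")
print(f"average SER, Q-function average: {np.mean(ser_q):.3e}")
```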

  3. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
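
    The report's weighting of estimates uses the cross correlation of residuals and the average standard errors of prediction. The snippet below is a simplified sketch of that idea, not the report's exact procedure: it forms the minimum-variance weighted average of two correlated estimates (say, a basin-characteristic estimate and an active-channel-width estimate of a T-year flood, assumed to be combined in log space); all numerical values are hypothetical.

```python
import numpy as np

def combine_two_estimates(q1, q2, se1, se2, r):
    """
    Minimum-variance weighted average of two correlated T-year flood estimates.
    q1, q2  : the two estimates (e.g., log10 of discharge)
    se1, se2: their standard errors of prediction (same units as q1, q2)
    r       : cross correlation of the residuals of the two methods
    """
    w1 = (se2**2 - r * se1 * se2) / (se1**2 + se2**2 - 2 * r * se1 * se2)
    w2 = 1.0 - w1
    q = w1 * q1 + w2 * q2
    var = (w1 * se1)**2 + (w2 * se2)**2 + 2 * w1 * w2 * r * se1 * se2
    return q, np.sqrt(var)

# hypothetical numbers: a 100-year flood estimated from basin characteristics
# and from active-channel width, combined in log10 space
q, se = combine_two_estimates(q1=3.20, q2=3.05, se1=0.18, se2=0.24, r=0.4)
print(f"weighted log10(Q100) = {q:.2f}, standard error = {se:.2f}")
```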

  4. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  5. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  6. Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

    PubMed

    Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen

    2018-02-01

    The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period so that prediction and adaptation can commence soon after treatment begins, and requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a combination of a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery) was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of seven trajectories used for the training of the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for leave-one-out cross-validation purposes. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained from Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm for all the traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to the irregular periodicities between cycles. Since the errors from Groups 1 and 2 are within the same range, it demonstrates that this model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need for optimizing individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
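
    As a rough illustration of the sliding-window prediction scheme described above (not the authors' network, training protocol, or data), the sketch below feeds a 35-sample window of a synthetic breathing-like trace sampled at 7.5 Hz into a small scikit-learn MLP and predicts the position roughly 650 ms ahead; the trace, layer size, and optimizer settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed stand-in trajectory: a quasi-periodic breathing-like signal (in mm)
# sampled at 7.5 Hz (the paper used CyberKnife patient traces, not public).
fs = 7.5                                  # Hz, EPID-like frame rate
t = np.arange(0, 120, 1 / fs)             # 2 minutes of data
pos = 8 * np.sin(2 * np.pi * 0.25 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)

window = 35                               # input samples (~4.6 s), as in the abstract
horizon = int(round(0.65 * fs))           # 650 ms prediction horizon ~ 5 samples

# Build (window of past positions -> position `horizon` samples later) pairs
X = np.array([pos[i:i + window] for i in range(len(pos) - window - horizon)])
y = pos[window + horizon - 1:len(pos) - 1]

split = int(0.6 * len(X))                 # earlier part trains, later part tests
net = MLPRegressor(hidden_layer_sizes=(20,), solver="adam", max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])

pred = net.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute prediction error at 650 ms horizon: {mae:.2f} mm")
```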

  7. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.

  8. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668

  9. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.

  10. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform on-going error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  11. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  12. Stable estimate of primary OC/EC ratios in the EC tracer method

    NASA Astrophysics Data System (ADS)

    Chu, Shao-Hang

    In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques in the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high values of OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios to minimize the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of OC/EC ratios ranks a close second, and the geometric mean of ratios ranks third. This is because their estimations are insensitive to questionable extreme values. A real world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
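
    The behavior of the different OC/EC ratio estimators discussed above is easy to reproduce with synthetic data. The sketch below is an assumed illustration, not the study's dataset: primary OC is generated from EC with a fixed true ratio, random measurement error is added to both, and the average of ratios, median, geometric mean, and ratio of averages are compared; the true ratio, concentration range, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

true_ratio = 2.0            # assumed primary OC/EC ratio
n, sd = 500, 0.1            # samples and measurement noise (same units as EC)

ec_true = rng.uniform(0.2, 2.0, n)            # ambient EC concentrations
oc_true = true_ratio * ec_true                # primary OC, no non-combustion OC
ec = ec_true + rng.normal(0, sd, n)           # add random measurement error
oc = oc_true + rng.normal(0, sd, n)
ec = np.clip(ec, 0.01, None)                  # keep ratios defined near zero EC
oc = np.clip(oc, 0.01, None)

ratios = oc / ec
print(f"average of ratios       : {ratios.mean():.2f}")      # inflated by low-EC samples
print(f"median of ratios        : {np.median(ratios):.2f}")
print(f"geometric mean of ratios: {np.exp(np.mean(np.log(ratios))):.2f}")
print(f"ratio of averages       : {oc.mean() / ec.mean():.2f}")  # robust estimator
```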

  13. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
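
    A minimal numerical illustration of the local averaging error e(x) is given below, under an assumed Gaussian-shaped one-dimensional flux distribution rather than the report's formulas: the strip gage reading is the flux averaged over the gage width, and e(x) is its difference from the local value at the gage center.

```python
import numpy as np
from scipy.integrate import quad

# Assumed one-dimensional heat-flux distribution (not from the report):
# a Gaussian-like peak, roughly like an impinging-jet profile.
q = lambda x: 100.0 * np.exp(-(x / 2.0) ** 2)      # kW/m^2, x in cm

def gage_reading(xc, w):
    """Average flux sensed by a strip gage of width w centered at xc."""
    val, _ = quad(q, xc - w / 2, xc + w / 2)
    return val / w

for w in (0.5, 1.0, 2.0):                          # gage widths in cm
    xc = 0.0                                       # gage centered on the peak
    e = gage_reading(xc, w) - q(xc)                # local averaging error e(x)
    print(f"w = {w:3.1f} cm: reading = {gage_reading(xc, w):6.2f}, "
          f"local = {q(xc):6.2f}, e = {e:+6.2f} kW/m^2")
```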

  14. Effect of gage size on the measurement of local heat flux. [formulas for determining gage averaging errors

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1973-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. In addition, a correction procedure is presented which allows a better estimate for the true value of the local heat flux. As an example of the technique, the formulas are applied to the cases of heat transfer to air slot jets impinging on flat and concave surfaces. It is shown that for many practical problems, the use of very small heat flux gages is often unnecessary.

  15. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912

  16. Mathematical foundations of hybrid data assimilation from a synchronization perspective

    NASA Astrophysics Data System (ADS)

    Penny, Stephen G.

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
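
    The hybrid idea, blending a small-ensemble (dynamic) error covariance with a static climatological one before a standard Kalman-type update, can be sketched in a few lines. The example below is an assumed toy illustration, not the paper's formulation or its ocean application: the state, ensemble size, observation network, covariances, and hybrid weight are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_ens, n_obs = 40, 5, 10                 # state size, (small) ensemble, sparse obs

x_true = np.sin(np.linspace(0, 2 * np.pi, n))
x_b = x_true + rng.normal(0, 0.3, n)        # background state

# Small ensemble -> rank-deficient dynamic covariance estimate
ens = x_b[None, :] + rng.normal(0, 0.3, (n_ens, n))
B_ens = np.cov(ens, rowvar=False)

# Static "climatological" covariance: assumed exponential correlation
idx = np.arange(n)
B_clim = 0.3**2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)

alpha = 0.5                                 # hybrid weight
B = alpha * B_ens + (1 - alpha) * B_clim

# Sparse observation network
H = np.zeros((n_obs, n))
H[np.arange(n_obs), np.linspace(0, n - 1, n_obs, dtype=int)] = 1.0
R = 0.05**2 * np.eye(n_obs)
y = H @ x_true + rng.normal(0, 0.05, n_obs)

# Standard Kalman update with the hybrid covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)

print(f"background RMSE: {np.sqrt(np.mean((x_b - x_true)**2)):.3f}")
print(f"analysis RMSE  : {np.sqrt(np.mean((x_a - x_true)**2)):.3f}")
```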

  17. Mathematical foundations of hybrid data assimilation from a synchronization perspective.

    PubMed

    Penny, Stephen G

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.

  18. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads us to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data that we used previously, both as originally supplied and with added label noise, in which a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than the original data.
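
    A heavily simplified sketch of the distribution-averaging idea is given below; it is not the published AveBoost2 algorithm. It keeps a running average of the training distributions across boosting rounds and trains each decision stump (scikit-learn, depth 1) on that average rather than on the most recent distribution only; the dataset, number of rounds, and clipping constants are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.1, random_state=4)
y = 2 * y - 1                                    # labels in {-1, +1}
n = len(y)

T = 25
d = np.full(n, 1.0 / n)                          # current training distribution
d_avg = d.copy()                                 # running average of distributions
models, alphas = [], []

for t in range(T):
    stump = DecisionTreeClassifier(max_depth=1, random_state=t)
    stump.fit(X, y, sample_weight=d_avg)         # train on the *averaged* distribution
    h = stump.predict(X)
    eps = np.sum(d_avg[h != y])
    eps = min(max(eps, 1e-10), 0.5 - 1e-10)      # keep the weight update well defined
    a = 0.5 * np.log((1 - eps) / eps)
    # AdaBoost-style reweighting, then fold into the running average
    d = d_avg * np.exp(-a * y * h)
    d /= d.sum()
    d_avg = (d_avg * (t + 1) + d) / (t + 2)
    d_avg /= d_avg.sum()
    models.append(stump)
    alphas.append(a)

F = sum(a * m.predict(X) for a, m in zip(alphas, models))
print(f"training error of the averaged-distribution ensemble: {np.mean(np.sign(F) != y):.3f}")
```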

  19. Peak-flow characteristics of Wyoming streams

    USGS Publications Warehouse

    Miller, Kirk A.

    2003-01-01

    Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.

  20. Using Propensity Score Matching Methods to Improve Generalization from Randomized Experiments

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2011-01-01

    The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…

  1. Central Procurement Workload Projection Model

    DTIC Science & Technology

    1981-02-01

    generated by the P&P Directorates, such as procurement actions (PA’s), are pursued. Specifically, Box-Jenkins Autoregressive Integrated Moving Average... Breakout of PA’s to over and under $10,000... the model will predict the actual values and hence the error will be zero. Therefore, after forecasting 3 quarters into the future no error

  2. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159

  3. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
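
    The quadrature combination of the two error sources can be sketched as follows. The numbers are hypothetical, and the 1/sqrt(n) scaling with the number of verticals and transits is an assumption consistent with the random-error behavior described above, not a result taken from the study.

```python
import numpy as np

def total_uncertainty(e_space_1, e_time_1, n_verticals, n_transits):
    """
    Combine the two error sources in quadrature.
    e_space_1: cross-stream (spatial-structure) error for a single vertical
    e_time_1 : time-averaging error for a single transit at a single vertical
    Assumes both errors are random and shrink as 1/sqrt(number of samples).
    """
    e_space = e_space_1 / np.sqrt(n_verticals)
    e_time = e_time_1 / np.sqrt(n_verticals * n_transits)
    return np.sqrt(e_space**2 + e_time**2)

# hypothetical single-sample errors of 20% (time) and 15% (space), in percent
for nv, nt in [(5, 2), (10, 2), (5, 4)]:
    print(f"{nv} verticals x {nt} transits: "
          f"total uncertainty ~ {total_uncertainty(15.0, 20.0, nv, nt):.1f}%")
```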

  4. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  5. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of a random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
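
    The subsampling-error estimate itself, the difference between a statistic computed from the full-resolution field and from every fifth pixel, is straightforward to reproduce. The sketch below uses an assumed spatially correlated random field as a stand-in for a 1-km Level-2 granule over one grid cell; it is illustrative only and not the MODIS processing chain.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed stand-in for a 1-km resolution cloud optical thickness field covering
# one 1-degree grid cell (~100 x 100 pixels); light smoothing adds spatial correlation.
field = rng.gamma(shape=2.0, scale=5.0, size=(100, 100))
field = (field + np.roll(field, 1, axis=0) + np.roll(field, 1, axis=1)) / 3.0

sub = field[::5, ::5]                     # Level-3 style subsampling at every 5th pixel

for name, stat in [("mean", np.mean), ("std", np.std)]:
    full_val, sub_val = stat(field), stat(sub)
    print(f"{name:>4}: full = {full_val:6.3f}, subsampled = {sub_val:6.3f}, "
          f"error = {sub_val - full_val:+.3f}")
```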

  6. Climate model biases in seasonality of continental water storage revealed by satellite gravimetry

    USGS Publications Warehouse

    Swenson, Sean; Milly, P.C.D.

    2006-01-01

    Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.

  7. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with total generalized variation penalty in the quantitative pharmacokinetic analysis for clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model, and the blood flow FB and blood volume VB from the 2-compartment exchange model, were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding calculated values from the fully sampled data. To quantify each parameter's accuracy calculated using the undersampled data, error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original full sampling data. Most derived error in volume mean values in the region of interest were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error value of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. All investigated pharmacokinetic parameters had no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating an acquisition acceleration by a factor of 4, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.

  8. Accuracy of the Generalized Self-Consistent Method in Modelling the Elastic Behaviour of Periodic Composites

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1993-01-01

    Local stress and strain fields in the unit cell of an infinite, two-dimensional, periodic fibrous lattice have been determined by an integral equation approach. The effect of the fibres is assimilated to an infinite two-dimensional array of fictitious body forces in the matrix constituent phase of the unit cell. By subtracting a volume averaged strain polarization term from the integral equation we effectively embed a finite number of unit cells in a homogenized medium in which the overall stress and strain correspond to the volume averaged stress and strain of the constrained unit cell. This paper demonstrates that the zeroth term in the governing integral equation expansion, which embeds one unit cell in the homogenized medium, corresponds to the generalized self-consistent approximation. By comparing the zeroth term approximation with higher order approximations to the integral equation summation, both the accuracy of the generalized self-consistent composite model and the rate of convergence of the integral summation can be assessed. Two example composites are studied. For a tungsten/copper elastic fibrous composite the generalized self-consistent model is shown to provide accurate, effective, elastic moduli and local field representations. The local elastic transverse stress field within the representative volume element of the generalized self-consistent method is shown to be in error by much larger amounts for a composite with periodically distributed voids, but homogenization leads to a cancelling of errors, and the effective transverse Young's modulus of the voided composite is shown to be in error by only 23% at a void volume fraction of 75%.

  9. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
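
    A compact sketch of the composite idea, a forecast across a gap from the preceding record blended with a backcast obtained by fitting the reversed following record, is given below. It assumes statsmodels' ARIMA with an arbitrary (1,0,0) order, a synthetic log-flow series, and simple linear blending weights; it is not the study's TFN-plus-ARIMA procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)

# Assumed log-flow series with a 10-day gap in the middle
n, gap = 200, 10
y = np.cumsum(rng.normal(0, 0.05, n)) + 2.0
before, after = y[: n // 2], y[n // 2 + gap :]

# Forecast forward across the gap from the earlier record
fwd = ARIMA(before, order=(1, 0, 0)).fit().forecast(steps=gap)

# Backcast: fit on the reversed later record, forecast, then reverse back
bwd = ARIMA(after[::-1], order=(1, 0, 0)).fit().forecast(steps=gap)[::-1]

# Blend with weights that move linearly from the forecast to the backcast
w = np.linspace(1.0, 0.0, gap)
composite = w * fwd + (1 - w) * bwd

truth = y[n // 2 : n // 2 + gap]
print(f"composite RMSE over the gap: {np.sqrt(np.mean((composite - truth) ** 2)):.3f}")
```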

  10. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
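
    The basic clustering approximation described above, treating interactions within each cluster exactly and ignoring those between clusters, can be demonstrated on a small synthetic system. In the assumed sketch below, twelve two-state (protonated/unprotonated) sites with random pairwise couplings are few enough that the exact Boltzmann average is also computable for comparison; the energies, temperature, and cluster partitioning are arbitrary.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

N = 12                                             # ionizable sites (2^12 states, still exact)
h = rng.normal(0, 1.0, N)                          # intrinsic site energies
J = np.triu(rng.normal(0, 0.3, (N, N)), 1)         # pairwise interactions (i < j)
beta = 1.0

def average_occupancy(sites):
    """Exact Boltzmann-average occupancy of `sites`, using only their mutual interactions."""
    sites = list(sites)
    states = np.array(list(product([0, 1], repeat=len(sites))))
    idx = np.array(sites)
    E = states @ h[idx] + np.einsum("ki,ij,kj->k", states, J[np.ix_(idx, idx)], states)
    w = np.exp(-beta * (E - E.min()))
    w /= w.sum()
    return w @ states                              # per-site average occupancy

exact = average_occupancy(range(N))

# Cluster approximation: split sites into 3 clusters of 4 and ignore cross-cluster terms
approx = np.empty(N)
for cluster in np.array_split(np.arange(N), 3):
    approx[cluster] = average_occupancy(cluster)

print(f"max |error| in average occupancy: {np.max(np.abs(approx - exact)):.3f}")
print(f"rms error in average occupancy  : {np.sqrt(np.mean((approx - exact)**2)):.3f}")
```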

  11. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and then approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.

  12. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  13. An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.

    PubMed

    Huitema, Bradley E; McKean, Joseph W

    2007-08-01

    A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).

  14. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127

  15. Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data

    NASA Technical Reports Server (NTRS)

    Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen

    2006-01-01

    This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau-average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau-average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the site and SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake for the elevation effect, but the errors in SRB LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.

  16. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.

  17. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions. Such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms that redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's role in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Niño episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of that improvement was lost.
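    For orientation, the sketch below shows how a wind-direction error propagates into the Ekman volume transport computed directly from the wind. The drag coefficient, air and seawater densities, Coriolis parameter, and the single 30-degree rotation are generic assumptions for illustration only; the 5% figure quoted above refers to yearly means of random errors, which largely cancel, not to a single perturbed observation.

    ```python
    import numpy as np

    RHO_AIR = 1.22        # kg m^-3, assumed
    RHO_SEA = 1025.0      # kg m^-3, assumed
    CD = 1.2e-3           # drag coefficient, assumed constant
    F = 5.0e-5            # Coriolis parameter near 20 deg latitude, s^-1

    def wind_stress(speed, direction_deg):
        """Wind stress vector (N m^-2) from speed (m/s) and direction (deg,
        mathematical convention: the direction the wind blows toward)."""
        theta = np.deg2rad(direction_deg)
        u, v = speed * np.cos(theta), speed * np.sin(theta)
        tau = RHO_AIR * CD * speed
        return tau * u, tau * v

    def ekman_volume_transport(tau_x, tau_y):
        """Ekman volume transport per unit width (m^2 s^-1), Northern Hemisphere."""
        return tau_y / (RHO_SEA * F), -tau_x / (RHO_SEA * F)

    # A 5 m/s wind, with and without a 30 degree direction error
    mx0, my0 = ekman_volume_transport(*wind_stress(5.0, 90.0))
    mx1, my1 = ekman_volume_transport(*wind_stress(5.0, 120.0))
    change = np.hypot(mx1 - mx0, my1 - my0) / np.hypot(mx0, my0)
    print(f"relative change in Ekman transport: {change:.1%}")
    ```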

  18. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm . Finally, we apply these methods to a collection of documents and report on the experimental results.
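    A minimal sketch, under assumed document features and error measurements, of the nearest-neighbour rule described above: pick the restoration technique whose average OCR error is lowest among the k training documents closest to the new document in feature space. The feature representation, the value of k, and the random toy data are placeholders, not the paper's setup or its empirical-error-minimization algorithm.

    ```python
    import numpy as np

    def choose_restoration(doc_features, train_features, train_ocr_errors, k=3):
        """Pick the restoration technique whose average OCR error is lowest
        among the k nearest training documents.

        train_ocr_errors: array of shape (n_train, n_techniques) holding the
        measured OCR error of each technique on each training document."""
        d = np.linalg.norm(train_features - doc_features, axis=1)
        nearest = np.argsort(d)[:k]
        mean_err = train_ocr_errors[nearest].mean(axis=0)
        return int(np.argmin(mean_err)), mean_err

    # Toy data: 2 image-quality features, 3 candidate restoration techniques
    rng = np.random.default_rng(0)
    X = rng.random((20, 2))
    E = rng.random((20, 3))
    best, errs = choose_restoration(np.array([0.4, 0.6]), X, E)
    print(f"selected technique {best}, estimated errors {np.round(errs, 2)}")
    ```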

  19. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect of blending two or more juxtaposed vector winds that generally possess different properties (vector quantity and time). The outcome is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  20. Derivation and precision of mean field electrodynamics with mesoscale fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2018-06-01

    Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.

  1. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
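    The sketch below illustrates the general idea of a weighted least squares point estimate whose error bar is evaluated with the full data covariance matrix; it is not the authors' WLS-ICE software, and the linear MSD model, the assumed exponential covariance, and all numbers are illustrative.

    ```python
    import numpy as np

    def wls_fit_with_correlated_errors(X, y, C):
        """Weighted least squares point estimate (weights = inverse variances),
        with the parameter covariance evaluated using the full data covariance
        matrix C, in the spirit of the WLS-ICE idea described above."""
        W = np.diag(1.0 / np.diag(C))          # ignore correlations for the fit
        A = np.linalg.solve(X.T @ W @ X, X.T @ W)
        beta = A @ y
        cov_beta = A @ C @ A.T                 # but keep them for the error bars
        return beta, cov_beta

    # Toy example: ensemble-averaged MSD(t) = 2*D*t with correlated noise
    t = np.arange(1, 21, dtype=float)
    X = t[:, None]
    C = 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]) / 5.0)   # assumed covariance
    rng = np.random.default_rng(1)
    y = 2 * 0.3 * t + rng.multivariate_normal(np.zeros(t.size), C)
    beta, cov = wls_fit_with_correlated_errors(X, y, C)
    print(f"2D estimate = {beta[0]:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
    ```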

  2. Is ozone model bias driven by errors in cloud predictions? A quantitative assessment using satellite cloud retrievals in WRF-Chem

    NASA Astrophysics Data System (ADS)

    Ryu, Y. H.; Hodzic, A.; Barré, J.; Descombes, G.; Minnis, P.

    2017-12-01

    Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much of the bias in O3 predictions is caused by inaccurate cloud predictions. This study quantifies the errors in surface O3 predictions associated with clouds in summertime over CONUS using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. Cloud fields used for photochemistry are corrected based on satellite cloud retrievals in sensitivity simulations. It is found that the WRF-Chem model is able to detect about 60% of clouds in the right locations and generally underpredicts cloud optical depths. The errors in hourly O3 due to the errors in cloud predictions can be up to 60 ppb. On average in summertime over CONUS, errors in 8-h average O3 of 1-6 ppb are found to be attributable to those in cloud predictions under cloudy sky conditions. The contribution of changes in photolysis rates due to clouds is found to be larger (about 80% on average) than that of light-dependent BVOC emissions. The effects of cloud corrections on O3 are about 2 times larger in VOC-limited than NOx-limited regimes, suggesting that the benefits of accurate cloud predictions would be greater in VOC-limited than NOx-limited regimes.

  3. Corrigenda of 'explicit wave-averaged primitive equations using a generalized Lagrangian Mean'

    NASA Astrophysics Data System (ADS)

    Ardhuin, F.; Rascle, N.; Belibassakis, K. A.

    2017-05-01

    Ardhuin et al. (2008) gave a second-order approximation in the wave slope of the exact Generalized Lagrangian Mean (GLM) equations derived by Andrews and McIntyre (1978), and also performed a coordinate transformation, going from GLM to a 'GLMz' set of equations. That latter step removed the wandering of the GLM mean sea level away from the Eulerian-mean sea level, making the GLMz flow non-divergent. That step contained some inaccurate statements about the coordinate transformation, while the rest of the paper contained an error in the surface dynamic boundary condition for viscous stresses. I am thankful to Mathias Delpey and Hidenori Aiki for pointing out these errors, which are corrected below.

  4. Alignment error envelopes for single particle analysis.

    PubMed

    Jensen, G J

    2001-01-01

    To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 Å) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.

  5. General surgery residency program websites: usefulness and usability for resident applicants.

    PubMed

    Reilly, Eugene F; Leibrandt, Thomas J; Zonno, Alan J; Simpson, Mary Christina; Morris, Jon B

    2004-01-01

    To assess the content of general surgery residency program websites, the websites' potential as tools in resident recruitment, and their "usability." The homepages of general surgery residency programs were evaluated for accessibility, ease-of-use, adherence to established principles of website design, and content. Investigators completed a questionnaire on aspects of their online search, including number of mouse-clicks used, number of errors encountered, and number of returns to the residency homepage. The World Wide Web listings on the Fellowship and Residency Electronic Interactive Database (FREIDA) of the American Medical Association (AMA). A total of 251 ACGME-accredited general surgery residency programs. One hundred sixty-seven programs (67%) provided a viable link to the program's website. Evaluators found an average of 5.9 of 16 content items; 2 (1.2%) websites provided as many as 12 content items. Five of the 16 content items (program description, conference schedules, listing of faculty, caseload, and salary) were found on more than half of the sites. An average of 24 mouse-clicks was required to complete the questionnaire for each site. Forty-six sites (28%) generated at least 1 error during our search. The residency homepage was revisited an average of 5 times during each search. On average, programs adhered to 6 of the 10 design principles; only 6 (3.6%) sites adhered to all 10 design principles. Two of the 10 design principles (use of familiar fonts, absence of frames) were adhered to in more than half of the sites. Our overall success rate when searching residency websites was 38%. General surgery residency programs do not use the World Wide Web optimally, particularly for users who are potential residency candidates. The usability of these websites could be increased by providing relevant content, making that content easier to find, and adhering to established web design principles.

  6. Toward developing a standardized Arabic continuous text reading chart.

    PubMed

    Alabdulkader, Balsam; Leat, Susan Jennifer

    Near visual acuity is an essential measurement during an oculo-visual assessment. Short duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability to use in the development of a standardized Arabic continuous text reading chart. Initially, 109 pairs of Arabic sentences were created for use in constructing a chart with a layout similar to that of the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM, and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM. Their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  7. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", fitting a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  8. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
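    A sketch of how such an interleaved estimate is typically extracted: fit the reference and interleaved sequence fidelities to A p^m + B and convert the ratio of decay constants into a gate error via r = (d-1)(1 - p_int/p_ref)/d. The synthetic decay constants below are assumptions, SciPy is assumed to be available, and the confidence bounds discussed in the abstract are omitted from this point estimate.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(m, A, p, B):
        """Randomized-benchmarking sequence fidelity model F(m) = A p^m + B."""
        return A * p**m + B

    def interleaved_gate_error(m, f_ref, f_int, d=2):
        """Estimate the error of the interleaved gate from the reference and
        interleaved decay constants (point estimate only)."""
        p_ref = curve_fit(decay, m, f_ref, p0=[0.5, 0.95, 0.5])[0][1]
        p_int = curve_fit(decay, m, f_int, p0=[0.5, 0.95, 0.5])[0][1]
        return (d - 1) * (1 - p_int / p_ref) / d

    # Synthetic single-qubit data (d = 2) with assumed decay constants
    m = np.arange(1, 200, 10)
    f_ref = decay(m, 0.5, 0.995, 0.5)
    f_int = decay(m, 0.5, 0.990, 0.5)
    print(f"estimated gate error ~ {interleaved_gate_error(m, f_ref, f_int):.4f}")
    ```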

  9. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
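    A toy illustration of the forward-backward combination step mentioned above, assuming the two filter estimates are independent at the combination point: inverse-covariance weighting roughly halves the variance when the two estimates are of equal quality. The scalar attitude-angle numbers are placeholders, not EUVE data.

    ```python
    import numpy as np

    def smooth_combine(x_fwd, P_fwd, x_bwd, P_bwd):
        """Combine forward- and backward-filter estimates of the same state by
        inverse-covariance weighting (the two-filter smoother form)."""
        P_s = np.linalg.inv(np.linalg.inv(P_fwd) + np.linalg.inv(P_bwd))
        x_s = P_s @ (np.linalg.inv(P_fwd) @ x_fwd + np.linalg.inv(P_bwd) @ x_bwd)
        return x_s, P_s

    # Scalar example: equal-quality estimates give roughly half the variance
    x_f, P_f = np.array([1.02]), np.array([[0.04]])
    x_b, P_b = np.array([0.98]), np.array([[0.04]])
    x_s, P_s = smooth_combine(x_f, P_f, x_b, P_b)
    print(f"smoothed estimate {x_s[0]:.3f}, variance {P_s[0, 0]:.3f}")
    ```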

  10. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate obtained from the individual methods.
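    A minimal sketch of the flow-ratio transfer described above: the statistic at the index streamgage is scaled by the average of the ratios of the partial-record measurements to the concurrent index-gage flows. All discharge values below are made-up placeholders, not values from the report.

    ```python
    import numpy as np

    def flow_ratio_estimate(q_partial, q_index_concurrent, index_statistic):
        """Transfer a streamflow statistic from an index streamgage to a
        low-flow partial-record station using the average of the ratios of
        measured flows to concurrent index-gage flows."""
        ratios = np.asarray(q_partial, float) / np.asarray(q_index_concurrent, float)
        return ratios.mean() * index_statistic

    # Three concurrent measurement pairs (cubic feet per second) and the
    # index gage's 7-day, 10-year low flow -- illustrative values only
    q_pr = [2.1, 1.6, 3.0]
    q_ix = [6.8, 5.0, 9.4]
    print(f"estimated 7Q10 at partial-record site: "
          f"{flow_ratio_estimate(q_pr, q_ix, 4.2):.2f} cfs")
    ```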

  11. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, S; Hong, C; Kim, M

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines using in-house software. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  12. Estimating the densities of benzene-derived explosives using atomic volumes.

    PubMed

    Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka

    2018-02-09

    The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula CaHbNcOd is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atomic volume in the crystal density prediction of energetic compounds containing a benzene ring.

  13. Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016

    NASA Astrophysics Data System (ADS)

    Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.

    2018-05-01

    Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. Average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. Average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days. Average central pressure error of the predicted ACs was 5.5-10.7 hPa at such lead times. Therefore, the operational ensemble prediction systems generally predict the position of ACs within 469.1 km 2.5-4.5 days before they mature. The forecast skill for the extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.

  14. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.

  15. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
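    A compressed sketch of the modelling pattern described above: log-transform the permeability target, fit a feed-forward network on an 80/20 split, back-transform, and report a relative-error measure. scikit-learn and the synthetic stand-in features are assumptions for illustration; the study's own tools, inputs, and data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 400
    # Synthetic stand-ins for air porosity, grain density and Thomeer parameters
    X = rng.random((n, 5))
    log_k = 2.5 * X[:, 0] - 1.2 * X[:, 1] + 0.8 * X[:, 2] + 0.1 * rng.standard_normal(n)

    # Fit on the log-transformed permeability, as in the study's preprocessing step
    X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)

    k_pred = 10 ** model.predict(X_te)          # back-transform to permeability
    k_true = 10 ** y_te
    aare = np.mean(np.abs(k_pred - k_true) / k_true)
    print(f"average absolute relative error: {aare:.1%}")
    ```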

  16. Category specific dysnomia after thalamic infarction: a case-control study.

    PubMed

    Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli

    2005-01-01

    Category specific naming impairment was described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause general semantic naming deficit, and only rarely a category specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to that of five controls. In naming to visual stimuli the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (average 70.31 ± 15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28 ± 9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category specific profile of response and naming errors to visual and tactile, but not auditory stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but further as a central integrator of different stages of perceptual and semantic processing.

  17. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
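    As a concrete illustration of the two fitting approaches being compared, the sketch below fits the same synthetic power-law data by linear regression on log-transformed values and by nonlinear least squares on the raw values; the data are generated with multiplicative lognormal error, the case in which the log-transform approach is appropriate. The parameter values and noise level are arbitrary assumptions, and this is not the authors' analysis code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(x, a, b):
        return a * x**b

    rng = np.random.default_rng(42)
    x = np.linspace(1, 50, 200)

    # Multiplicative, lognormal error -- the case favoring log-log LR
    y = 2.0 * x**0.75 * np.exp(0.2 * rng.standard_normal(x.size))

    # Linear regression on log-transformed data
    b_lr, log_a_lr = np.polyfit(np.log(x), np.log(y), 1)

    # Nonlinear regression on the raw data
    (a_nlr, b_nlr), _ = curve_fit(power_law, x, y, p0=[1.0, 1.0])

    print(f"LR : a={np.exp(log_a_lr):.2f}, b={b_lr:.2f}")
    print(f"NLR: a={a_nlr:.2f}, b={b_nlr:.2f}")
    ```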

  18. Emergency Department Visit Forecasting and Dynamic Nursing Staff Allocation Using Machine Learning Techniques With Readily Available Open-Source Software.

    PubMed

    Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel

    2015-08-01

    Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.

  19. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil

    Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data (obtained offline), is presented. Methods: The average error surface obtained from seven patients was used to determine the input data size and number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to that of previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.

  20. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  1. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies were present in each month prior to the 1980s, and a warming began thereafter, accelerating in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  2. The norms and variances of the Gabor, Morlet and general harmonic wavelet functions

    NASA Astrophysics Data System (ADS)

    Simonovski, I.; Boltežar, M.

    2003-07-01

    This paper deals with certain properties of the continuous wavelet transform and wavelet functions. The norms and the spreads in time and frequency of the common Gabor and Morlet wavelet functions are presented. It is shown that the norm of the Morlet wavelet function does not satisfy the normalization condition and that the normalized Morlet wavelet function is identical to the Gabor wavelet function with the parameter σ=1. The general harmonic wavelet function is developed using frequency modulation of the Hanning and Hamming window functions. Several properties of the general harmonic wavelet function are also presented and compared to the Gabor wavelet function. The time and frequency spreads of the general harmonic wavelet function are only slightly higher than the time and frequency spreads of the Gabor wavelet function. However, the general harmonic wavelet function is simpler to use than the Gabor wavelet function. In addition, the general harmonic wavelet function can be constructed in such a way that the zero average condition is truly satisfied. The average value of the Gabor wavelet function can approach a value of zero but it cannot reach it. When calculating the continuous wavelet transform, errors occur at the start- and the end-time indexes. This is called the edge effect and is caused by the fact that the wavelet transform is calculated from a signal of finite length. In this paper, we propose a method that uses signal mirroring to reduce the errors caused by the edge effect. The success of the proposed method is demonstrated by using a simulated signal.
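    The mirroring idea proposed above amounts to reflect-padding the signal before the transform and discarding the padded coefficients afterwards. The sketch below shows only that padding/cropping step (any CWT routine can be applied to the extended signal); the pad length should cover the wavelet's effective support at the largest scale, and the values here are arbitrary assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def mirrored(signal, pad):
        """Extend the signal at both ends with its mirror image so that a
        subsequent wavelet transform sees no artificial jump at the edges."""
        return np.pad(np.asarray(signal, float), pad, mode="reflect")

    def crop(coeffs, pad):
        """Drop the coefficients that correspond to the mirrored padding."""
        return coeffs[..., pad:-pad]

    x = np.sin(2 * np.pi * 0.05 * np.arange(256))
    pad = 64                       # should cover the wavelet's effective support
    x_ext = mirrored(x, pad)
    # cwt_coeffs = some_cwt_routine(x_ext, scales)   # any CWT implementation
    # cwt_coeffs = crop(cwt_coeffs, pad)
    print(x.size, x_ext.size)      # 256 -> 384
    ```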

  3. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  4. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.

  5. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that the additional knowledge about the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization and Stacked Generalization. PMID:22046232
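    A simplified sketch of the modification described above: the combiner is trained on the base classifiers' outputs concatenated with the input pattern itself. scikit-learn models and synthetic data are assumptions, and for brevity the combiner is trained on in-sample base outputs, whereas a proper stacked setup would use out-of-fold predictions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((600, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Two base classifiers (stand-ins for the neural-network modules)
    bases = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=i)
             for i in range(2)]
    for clf in bases:
        clf.fit(X_tr, y_tr)

    def meta_features(X):
        """Base-classifier outputs concatenated with the input pattern itself,
        which is the modification to plain stacked generalization."""
        outs = [clf.predict_proba(X) for clf in bases]
        return np.hstack(outs + [X])

    combiner = LogisticRegression(max_iter=1000).fit(meta_features(X_tr), y_tr)
    print(f"test accuracy: {combiner.score(meta_features(X_te), y_te):.3f}")
    ```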

  6. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.

  7. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  8. Analysis of the Magnitude and Frequency of Peak Discharges for the Navajo Nation in Arizona, Utah, Colorado, and New Mexico

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2006-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, providing an additional 13 years of peak-discharge data since a 1997 investigation, which used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated the peak discharge having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS method. Flood-frequency estimates can be transferred to ungaged sites by direct application of the regional regression equation or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
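    A one-line illustration of the drainage-area-ratio transfer mentioned at the end of the abstract: scale the gaged T-year peak by the area ratio raised to the regional exponent. The discharge, drainage areas, and exponent below are illustrative placeholders, not values from the report.

    ```python
    def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, exponent):
        """Transfer a T-year peak discharge along a stream using the
        drainage-area ratio raised to the regional regression exponent."""
        return q_gaged * (area_ungaged / area_gaged) ** exponent

    # Illustrative numbers only: 100-year peak of 3500 cfs at the gage,
    # ungaged site drains 60% of the gaged area, regional exponent 0.8 assumed
    print(f"{transfer_peak_discharge(3500.0, 250.0, 150.0, 0.8):.0f} cfs")
    ```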

  9. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  10. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders.

    PubMed

    Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise

    2013-05-01

    To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.

  11. Clinical vision characteristics of the congenital achromatopsias. I. Visual acuity, refractive error, and binocular status.

    PubMed

    Haegerstrom-Portnoy, G; Schneck, M E; Verdon, W A; Hewlett, S E

    1996-07-01

    Visual acuity, refractive error, and binocular status were determined in 43 autosomal recessive (AR) and 15 X-linked (XL) congenital achromats. The achromats were classified by color matching and spectral sensitivity data. Large interindividual variation in refractive error and visual acuity was present within each achromat group (complete AR, incomplete AR, and XL). However, the number of individuals with significant interocular acuity differences is very small. Most XLs are myopic; ARs show a wide range of refractive error from high myopia to high hyperopia. Acuity of the AR and XL groups was very similar. With-the-rule astigmatism of large amount is very common in achromats, particularly ARs. There is a close association between strabismus and interocular acuity differences in the ARs, with the fixating eye having better than average acuity. The large overlap of acuity and refractive error of XL and AR achromats suggests that these measures are less useful for differential diagnosis than generally indicated by the clinical literature.

  12. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    PubMed Central

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest among the three models in the fitting stage. Conclusions The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
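
    The hybrid idea can be sketched in a few lines. The sketch below assumes synthetic monthly incidence data and pairs a seasonal ARIMA fit from statsmodels with a generic Nadaraya-Watson kernel regression standing in for the GRNN; it is not the authors' exact implementation, and the smoothing factor is only a placeholder.

```python
# Sketch of an ARIMA-GRNN hybrid: fit a seasonal ARIMA, then let a GRNN
# (Gaussian kernel regression) map ARIMA fitted values to observed values.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(96)
y = 2 + 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, 96)  # synthetic monthly incidence

arima = SARIMAX(y, order=(0, 1, 2), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
fitted = arima.fittedvalues


def grnn_predict(x_train, y_train, x_new, sigma):
    """Generalized regression neural network = Nadaraya-Watson kernel smoother."""
    x_train, x_new = np.atleast_1d(x_train), np.atleast_1d(x_new)
    d2 = (x_new[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)


# Hybrid step: the GRNN learns the mapping from ARIMA output to observations
sigma = 0.07   # hypothetical smoothing factor
hybrid = grnn_predict(fitted[13:], y[13:], fitted[13:], sigma)
print(np.mean(np.abs(hybrid - y[13:])))  # MAE of the hybrid fit
```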

  13. A Lagrangian stochastic model for aerial spray transport above an oak forest

    USGS Publications Warehouse

    Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.

    1995-01-01

    A transport model for aerial spray droplets has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, the aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicated that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.
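
    A minimal sketch of a 2-D Lagrangian stochastic droplet step is given below; the turbulence statistics, settling velocity, mean wind, and time-scale correction are illustrative placeholders rather than the parameterization used in the study.

```python
# Sketch: one 2-D Lagrangian stochastic step for a heavy droplet, with the
# Lagrangian integral time scale shortened along the trajectory. All values illustrative.
import numpy as np

rng = np.random.default_rng(8)


def step(x, z, u_p, w_p, dt, sigma_u=0.5, sigma_w=0.3, tl_fluid=5.0, v_settle=0.1):
    # Reduce the integral time scale seen by the falling droplet (crossing-trajectory effect)
    beta = 1.5
    tl = tl_fluid / np.sqrt(1.0 + (beta * v_settle / sigma_w) ** 2)
    a = np.exp(-dt / tl)                          # Markov-chain memory term
    u_p = a * u_p + np.sqrt(1 - a ** 2) * sigma_u * rng.normal()
    w_p = a * w_p + np.sqrt(1 - a ** 2) * sigma_w * rng.normal()
    mean_wind = 2.0                               # m/s, assumed constant with height here
    return x + (mean_wind + u_p) * dt, z + (w_p - v_settle) * dt, u_p, w_p


x, z, u_p, w_p = 0.0, 30.0, 0.0, 0.0              # release 30 m above the surface
while z > 0.0:
    x, z, u_p, w_p = step(x, z, u_p, w_p, dt=0.05)
print("deposition distance downwind:", round(x, 1), "m")
```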

  14. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating-curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
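
    The propagation mechanics can be illustrated with a simple Monte Carlo sketch over a generic power-law rating curve; the curve parameters and error magnitudes below are hypothetical, and the sketch does not reproduce the Bayesian framework itself.

```python
# Sketch: Monte Carlo propagation of systematic and non-systematic stage
# errors through a power-law rating curve Q = a*(h - h0)^b. Values illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, h0, b = 30.0, 0.2, 1.7                     # hypothetical rating-curve parameters
h_obs = np.array([0.8, 1.1, 1.6, 2.3])        # observed stages (m)

n = 10_000
sys_err = rng.normal(0.0, 0.01, size=(n, 1))              # systematic (gauge calibration), constant per replicate
nonsys_err = rng.normal(0.0, 0.005, size=(n, h_obs.size)) # non-systematic (resolution, waves), per observation

h_true = h_obs + sys_err + nonsys_err
q = a * np.clip(h_true - h0, 0, None) ** b

q_mean = q.mean(axis=0)
q_ci = np.percentile(q, [2.5, 97.5], axis=0)
print(q_mean, q_ci)
# Averaging over many stages damps the non-systematic component but not the
# systematic one, which is why the two must be discriminated for long-term means.
```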

  15. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  16. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE PAGES

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...

    2017-08-01

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  17. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders

    PubMed Central

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2012-01-01

    Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137

  18. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
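
    The two fitting strategies being compared can be sketched as follows; the synthetic data below use multiplicative lognormal error, the case in which linear regression on log-transformed data is expected to perform better.

```python
# Sketch: fitting y = a*x^b by (i) linear regression on logs and
# (ii) nonlinear least squares, to compare behavior under a given error structure.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(1, 50, 200)
a_true, b_true = 2.0, 0.75
y = a_true * x ** b_true * rng.lognormal(0, 0.2, x.size)   # multiplicative lognormal error

# (i) LR on log-transformed data
b_lr, log_a_lr = np.polyfit(np.log(x), np.log(y), 1)
a_lr = np.exp(log_a_lr)

# (ii) NLR on the original scale
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x ** b, x, y, p0=(1.0, 1.0))

print(a_lr, b_lr)    # LR assumes multiplicative, homoscedastic-on-log error
print(a_nlr, b_nlr)  # NLR assumes additive, homoscedastic error on the raw scale
```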

  19. Conditions that influence the accuracy of anthropometric parameter estimation for human body segments using shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.

    2005-01-01

    Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).

  20. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
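
    The long-memory ("FI") part of the ARFIMA-GARCH approach can be sketched with a plain fractional-differencing filter; the RR series and the memory parameter d below are synthetic, and the subsequent ARMA-GARCH fit on the filtered series is not shown.

```python
# Sketch: removing long memory by fractional differencing; the remaining
# short-memory and volatility structure could then be modeled with ARMA-GARCH.
import numpy as np


def fracdiff(x, d, n_weights=200):
    """Apply the fractional differencing filter (1 - B)^d via its binomial expansion."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k   # recursion for the coefficients of (1 - B)^d
    return np.convolve(x, w, mode="full")[: len(x)]


rng = np.random.default_rng(3)
rr = 800 + 0.05 * np.cumsum(rng.normal(0, 2, 5000)) + rng.normal(0, 10, 5000)  # synthetic RR intervals (ms)

d = 0.35                                    # hypothetical long-memory parameter
short_memory = fracdiff(rr - rr.mean(), d)
print(short_memory[:5])
```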

  1. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
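
    A minimal sketch of the ensemble-based idea is given below, using averaged posteriors for a plug-in estimate and label disagreement as a second indicator; the posteriors are random placeholders and the estimators are simplified relative to those in the article.

```python
# Sketch: plug-in Bayes-error estimate from ensemble-averaged posteriors, plus a
# disagreement-based indicator. Data are random placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(4)
n_members, n_samples, n_classes = 7, 500, 4

# Hypothetical per-classifier posterior estimates for each sample
logits = rng.normal(0, 1, size=(n_members, n_samples, n_classes))
posteriors = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

avg_post = posteriors.mean(axis=0)                       # combine members by averaging
bayes_error_est = np.mean(1.0 - avg_post.max(axis=1))    # plug-in estimate of E[1 - max_c P(c|x)]

# Disagreement-based variant: fraction of samples where the members' label votes differ
votes = posteriors.argmax(axis=2)
disagreement = np.mean(votes.ptp(axis=0) > 0)
print(bayes_error_est, disagreement)
```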

  3. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.

  4. Dosimetry audit simulation of treatment planning system in multicenters radiotherapy

    NASA Astrophysics Data System (ADS)

    Kasmuri, S.; Pawiro, S. A.

    2017-07-01

    Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially occur at this stage; errors introduced here may result in systematic errors. The aim of this study was to verify TPS dosimetry in order to determine the range of deviation between calculated and measured doses. This study used the CIRS 002LFC phantom, which represents the human thorax, and simulated all stages of external beam radiotherapy. The phantom was scanned with a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Doses were measured using a 0.6 cc ionization chamber. The results of this study showed that, generally, the deviation for all test cases in the four centers was within the agreement criteria, with average deviations of about -0.17±1.59 %, -1.64±1.92 %, 0.34±1.34 % and 0.13±1.81 %. The conclusion of this study was that all TPSs involved showed good performance. The superposition algorithm showed rather poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92 %, -0.17±1.59 % and -0.27±1.51 %, respectively.

  5. Numerical artifacts in the Generalized Porous Medium Equation: Why harmonic averaging itself is not to blame

    NASA Astrophysics Data System (ADS)

    Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot

    2018-05-01

    The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME with differentiable k(p) with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations, and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
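
    The difference between harmonic and arithmetic face averaging can be seen in a one-dimensional finite-volume sketch such as the one below; the coefficient, grid, and time step are illustrative, and the scheme is a plain explicit update rather than the MHM discussed above.

```python
# Sketch: one explicit finite-volume step for u_t = (k(u) u_x)_x, comparing
# harmonic and arithmetic averaging of k at cell faces. Grid and CFL illustrative.
import numpy as np


def fv_step(u, k_of_u, dx, dt, face_avg="harmonic"):
    k = k_of_u(u)
    kl, kr = k[:-1], k[1:]
    if face_avg == "harmonic":
        k_face = 2 * kl * kr / (kl + kr + 1e-300)   # vanishes if either neighbor k is ~0
    else:
        k_face = 0.5 * (kl + kr)
    flux = k_face * np.diff(u) / dx                 # fluxes at interior faces
    unew = u.copy()
    unew[1:-1] += dt / dx * np.diff(flux)           # boundary cells held fixed for simplicity
    return unew


# PME-like coefficient k(u) = u**2 and a sharp initial profile
x = np.linspace(0, 1, 101)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)
for _ in range(200):
    u = fv_step(u, lambda v: v ** 2, dx=x[1] - x[0], dt=1e-5, face_avg="harmonic")
print(u.max(), u.sum())
```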

  6. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    USGS Publications Warehouse

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  7. Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.; Oberkampf, William L.; Wolf, Richard T.; Orkwis, Paul D.; Turner, Mark G.; Babinsky, Holger

    2011-01-01

    A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error and in general it was difficult to discern clear trends in the data. For the Reynolds Averaged Navier-Stokes methods the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.

  8. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
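
    The conversion from information-criterion values to model-averaging weights referred to above follows the standard exp(-ΔIC/2) formula; the sketch below uses hypothetical IC values to show how a wide spread drives essentially all weight onto one model.

```python
# Sketch: converting information-criterion values into model-averaging weights.
# IC values are hypothetical and serve only to illustrate the weighting behavior.
import numpy as np


def ic_weights(ic_values):
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()          # differences from the best (smallest) criterion value
    w = np.exp(-0.5 * delta)
    return w / w.sum()


# A wide spread of IC values puts near-100% weight on the best model, while
# a narrow spread (as when correlated total errors are accounted for) shares weight.
print(ic_weights([1032.0, 1058.0, 1071.0]))
print(ic_weights([1032.0, 1034.5, 1036.0]))
```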

  9. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface are (high amplitude, short wavelength), the smaller is the distance from the interface at which the measurements can be performed.

  10. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, the temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not the same as its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
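
    The core block-matching operation underlying TEC-style recovery can be sketched with a sum-of-absolute-differences search; the frame data, block size, and search range below are illustrative, and the sketch does not implement the EA-BMP ordering itself.

```python
# Sketch: sum-of-absolute-differences (SAD) block matching, the basic operation
# temporal error concealment uses to locate a replacement block in the previous frame.
import numpy as np


def best_match(prev_frame, pattern, top_left, search=8):
    """Find the displacement minimizing SAD between `pattern` (taken around the
    corrupted block in the current frame) and the previous frame."""
    y0, x0 = top_left
    h, w = pattern.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue
            sad = np.abs(prev_frame[y:y + h, x:x + w].astype(int) - pattern.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad


rng = np.random.default_rng(5)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
pattern = prev[20:28, 30:38]                 # pretend this surrounds a corrupted 8x8 block
print(best_match(prev, pattern, (20, 30)))   # -> ((0, 0), 0) when the scene is static
```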

  11. Residue frequencies and pairing preferences at protein-protein interfaces.

    PubMed

    Glaser, F; Steinberg, D M; Vakser, I A; Ben-Tal, N

    2001-05-01

    We used a nonredundant set of 621 protein-protein interfaces of known high-resolution structure to derive residue composition and residue-residue contact preferences. The residue composition at the interfaces, in entire proteins and in whole genomes correlates well, indicating the statistical strength of the data set. Differences between amino acid distributions were observed for interfaces with buried surface area of less than 1,000 A(2) versus interfaces with area of more than 5,000 A(2). Hydrophobic residues were abundant in large interfaces while polar residues were more abundant in small interfaces. The largest residue-residue preferences at the interface were recorded for interactions between pairs of large hydrophobic residues, such as Trp and Leu, and the smallest preferences for pairs of small residues, such as Gly and Ala. On average, contacts between pairs of hydrophobic and polar residues were unfavorable, and the charged residues tended to pair subject to charge complementarity, in agreement with previous reports. A bootstrap procedure, lacking from previous studies, was used for error estimation. It showed that the statistical errors in the set of pairing preferences are generally small; the average standard error is approximately 0.2, i.e., about 8% of the average value of the pairwise index (2.9). However, for a few pairs (e.g., Ser-Ser and Glu-Asp) the standard error is larger in magnitude than the pairing index, which makes it impossible to tell whether contact formation is favorable or unfavorable. The results are interpreted using physicochemical factors and their implications for the energetics of complex formation and for protein docking are discussed. Proteins 2001;43:89-102. Copyright 2001 Wiley-Liss, Inc.

  12. Wind scatterometry with improved ambiguity selection and rain modeling

    NASA Astrophysics Data System (ADS)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.

  13. Linking models and data on vegetation structure

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.

    2010-06-01

    For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.

  14. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  15. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
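
    A rough sketch of how unequal pixel sizes feed into depth error is shown below; the focal length, baseline, pixel sizes, and the half-pixel quantization assumption are illustrative and do not reproduce the paper's derivation.

```python
# Sketch: depth-from-disparity error when the two cameras quantize the image
# plane with different pixel sizes (dv and r*dv). Geometry values are illustrative.
import numpy as np

f, baseline = 0.008, 0.12          # focal length (m), camera separation (m)
dv = 10e-6                         # pixel size of camera 1 (m)
r = 0.5                            # camera 2 pixels are r*dv


def depth_error_bound(z, dv1, dv2):
    """Depth error when each image coordinate is off by up to half a pixel."""
    disparity = f * baseline / z
    d_disp = 0.5 * (dv1 + dv2)                        # worst-case combined quantization error
    return f * baseline / (disparity - d_disp) - z    # error toward overestimating depth


for z in (1.0, 2.0, 5.0):
    print(z, depth_error_bound(z, dv, r * dv))
# Shrinking one camera's pixels (r < 1) reduces the combined quantization error
# and hence the depth error, at the cost of extra processing time.
```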

  16. Long term analysis of the biomass content in the feed of a waste-to-energy plant with oxygen-enriched combustion air.

    PubMed

    Fellner, Johann; Cencic, Oliver; Zellinger, Günter; Rechberger, Helmut

    2011-10-01

    Thermal utilization of municipal solid waste and commercial wastes has become of increasing importance in European waste management. As waste materials are generally composed of fossil and biogenic materials, a part of the energy generated can be considered as renewable and is thus subsidized in some European countries. Analogously, CO(2) emissions of waste incinerators are only partly accounted for in greenhouse gas inventories. A novel approach for determining these fractions is the so-called balance method. In the present study, the implementation of the balance method on a waste-to-energy plant using oxygen-enriched combustion air was investigated. The findings of the 4-year application indicate on the one hand the general applicability and robustness of the method, and on the other hand the importance of reliable monitoring data. In particular, measured volume flows of the flue gas and the oxygen-enriched combustion air as well as corresponding O(2) and CO(2) contents should regularly be validated. The fraction of renewable (biogenic) energy generated throughout the investigated period amounted to between 27 and 66% for weekly averages, thereby denoting the variation in waste composition over time. The average emission factor of the plant was approximately 45 g CO(2) MJ(-1) energy input or 450 g CO(2) kg(-1) waste incinerated. The maximum error of the final result was about 16% (relative error), which was well above the error (<8%) of the balance method for plants with conventional oxygen supply.

  17. Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska

    USGS Publications Warehouse

    Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.

    1999-01-01

    Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.

  18. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffnesses reduced by 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton (registered trademark) HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  19. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.

  20. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting the minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates agree, with an average error of less than 10 percent, with six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.

  1. Regional Carbon Dioxide and Water Vapor Exchange Over Heterogeneous Terrain

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry J.

    2005-01-01

    In spite of setbacks due to forest fires, eviction after a change of landowners, and an unanticipated need to upgrade and replace much of the instrumentation, substantial progress has been made during the past three years, resulting in major new findings. Although most of the results are in manuscript form, three papers have been published and a fourth was recently submitted. The data have been subjected to extensive quality control. Extra attention has been devoted to the influence of tilt rotation and flux-calculation method, particularly with respect to nocturnal fluxes. Previous/standard methods for calculating nocturnal fluxes with moderate and strong stability are inadequate and lead to large random flux errors for individual records, due partly to inadvertent inclusion of mesoscale motions that strongly contaminate the estimation of fluxes by weak turbulence. Such large errors are serious for process studies requiring carbon dioxide fluxes for individual records, but are substantially reduced when averaging fluxes over longer periods, as in calculation of annual NEE budgets. We have employed a superior method for estimating fluxes in stable conditions with a variable averaging width. Mesoscale fluxes are generally unimportant except for events and are generally not systematic or predictable. Mesoscale or regional models of our region are not able to reproduce important aspects of the diurnally varying wind field.

  2. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1993-01-01

    The purpose of this work is to estimate sampling errors of area-time averaged rain rate due to temporal samplings by satellites. In particular, the sampling errors of the proposed low inclination orbit satellite of the Tropical Rainfall Measuring Mission (TRMM) (35 deg inclination and 350 km altitude), one of the sun synchronous polar orbiting satellites of NOAA series (98.89 deg inclination and 833 km altitude), and two simultaneous sun synchronous polar orbiting satellites--assumed to carry a perfect passive microwave sensor for direct rainfall measurements--will be estimated. This estimate is done by performing a study of the satellite orbits and the autocovariance function of the area-averaged rain rate time series. A model based on an exponential fit of the autocovariance function is used for actual calculations. Varying visiting intervals and total coverage of averaging area on each visit by the satellites are taken into account in the model. The data are generated by a General Circulation Model (GCM). The model has a diurnal cycle and parameterized convective processes. A special run of the GCM was made at NASA/GSFC in which the rainfall and precipitable water fields were retained globally for every hour of the run for the whole year.

  3. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  4. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
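
    The optimal linear fusion result can be illustrated with an inverse-variance weighting sketch; the variances below are invented, and the point is simply that the weighted combination has lower variance than naive averaging.

```python
# Sketch: optimal linear (inverse-variance) fusion of two scan-derived measurements
# of the same quantity versus naive averaging. Variances are illustrative only.
import numpy as np

rng = np.random.default_rng(6)
true_volume = 10.0
var_a, var_b = 0.5, 2.0            # scan A's voxel anisotropy aligns better with the lesion

a = true_volume + rng.normal(0, np.sqrt(var_a), 10_000)
b = true_volume + rng.normal(0, np.sqrt(var_b), 10_000)

w_a = (1 / var_a) / (1 / var_a + 1 / var_b)   # optimal weight for the better-aligned scan
fused = w_a * a + (1 - w_a) * b
naive = 0.5 * (a + b)

print(fused.var(), naive.var())   # fused variance ~0.4 is below the naive variance ~0.625
```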

  5. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
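
    A scalar random-walk Kalman filter gives the flavor of the on-line estimator; the sketch below uses synthetic daily concentrations and invented noise variances and omits the streamflow and seasonal terms of the actual estimator.

```python
# Sketch: a scalar random-walk Kalman filter interpolating sparse concentration
# measurements. Noise variances and the synthetic series are illustrative only.
import numpy as np


def kalman_filter(obs, q_var, r_var, x0, p0):
    """obs: array with np.nan on unsampled days; returns filtered state estimates."""
    x, p = x0, p0
    out = np.empty_like(obs)
    for t, z in enumerate(obs):
        p = p + q_var                    # predict (random-walk state)
        if not np.isnan(z):              # update only on sampled days
            k = p / (p + r_var)
            x = x + k * (z - x)
            p = (1 - k) * p
        out[t] = x
    return out


rng = np.random.default_rng(7)
truth = np.exp(np.cumsum(rng.normal(0, 0.05, 120)))              # synthetic daily concentrations
obs = np.full(120, np.nan)
obs[::7] = truth[::7] * rng.lognormal(0, 0.1, obs[::7].size)     # weekly samples with measurement error

est = kalman_filter(obs, q_var=0.02, r_var=0.05, x0=obs[0], p0=1.0)
print(np.nanmean(np.abs(est - truth)))
```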

  6. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.

  7. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust in that it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS)-based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts S_C reasonably well when the grids resolve at least 80% of the total turbulent kinetic energy.
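
    The five-equation method jointly estimates numerical and modeling errors; as a simplified classical analogue (not the authors' method), single-term Richardson extrapolation on three systematically refined grids yields an observed order of accuracy, a numerical error estimate, and an extrapolated benchmark. The sample values below are hypothetical.

    ```python
    import math

    def richardson(s_coarse, s_medium, s_fine, rr=2.0):
        """Single-term Richardson extrapolation for a refinement ratio rr;
        requires monotonic convergence of the three solutions."""
        p = math.log((s_coarse - s_medium) / (s_medium - s_fine)) / math.log(rr)
        err_fine = (s_medium - s_fine) / (rr ** p - 1.0)   # numerical error on fine grid
        s_c = s_fine - err_fine                            # extrapolated benchmark
        return p, err_fine, s_c

    # e.g. a monotonically converging flow quantity on coarse/medium/fine grids:
    print(richardson(1.120, 1.050, 1.020))
    ```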

  8. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

  9. Cluster-state quantum computing enhanced by high-fidelity generalized measurements.

    PubMed

    Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J

    2009-12-11

    We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832 ± 0.0002; furthermore, the error contribution from our POVM device and feedforward is only of O(10^-3), less than some recent thresholds for fault-tolerant cluster computing.
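
    As an illustrative check of what a POVM is (not a model of the experimental apparatus), the snippet below builds the standard three-outcome "trine" POVM on a qubit, verifies completeness (the elements sum to the identity), and computes outcome probabilities for a sample state.

    ```python
    import numpy as np

    thetas = [0, 2 * np.pi / 3, 4 * np.pi / 3]
    kets = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in thetas]
    povm = [(2 / 3) * np.outer(k, k) for k in kets]    # E_k = (2/3)|k><k|

    assert np.allclose(sum(povm), np.eye(2))           # completeness: sum E_k = I

    psi = np.array([1.0, 0.0])                         # state |0>
    probs = [float(psi @ E @ psi) for E in povm]       # p_k = <psi|E_k|psi>
    print(probs, sum(probs))                           # probabilities sum to 1
    ```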

  10. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
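
    A minimal sketch of the combination idea: produce weight vectors with several weighting schemes, average the weights, then combine the model forecasts once with the averaged weights. The two schemes shown (equal weights and inverse-MSE weights) are illustrative stand-ins for the schemes studied in the paper.

    ```python
    import numpy as np

    def inverse_mse_weights(errors):
        """errors: (n_obs, n_models) in-sample forecast errors."""
        inv = 1.0 / np.mean(errors ** 2, axis=0)
        return inv / inv.sum()

    def equal_weights(errors):
        m = errors.shape[1]
        return np.full(m, 1.0 / m)

    def forecast_weight_averaging(errors, forecasts,
                                  schemes=(equal_weights, inverse_mse_weights)):
        w = np.mean([s(errors) for s in schemes], axis=0)   # average the weight vectors
        return forecasts @ w                                # combine forecasts once

    rng = np.random.default_rng(0)
    errs = rng.normal(size=(50, 3)) * np.array([1.0, 1.5, 2.0])
    fcasts = np.array([2.1, 2.4, 1.9])                      # competing model forecasts
    print(forecast_weight_averaging(errs, fcasts))
    ```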

  11. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales

    2016-10-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.

  12. The effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1973-01-01

    The analysis of ion data from retarding potential analyzers (RPA's) is generally done under the planar approximation, which assumes that the grid transparency is constant with angle of incidence and that all ions reaching the plane of the collectors are collected. These approximations are not valid for situations in which the ion thermal velocity is comparable to the vehicle velocity, causing ions to enter the RPA with high average transverse velocity. To investigate these effects, the current-voltage curves for H+ at 4000 K were calculated, taking into account the finite collector size and the variation of grid transparency with angle. These curves are then analyzed under the planar approximation. The results show that only small errors in temperature and density are introduced for an RPA with typical dimensions; and that even when the density error is substantial for non-typical dimensions, the temperature error remains minimal.

  13. Ground state properties of 3d metals from self-consistent GW approach

    DOE PAGES

    Kutepov, Andrey L.

    2017-10-06

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ, with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy as compared to the local density approximation but it is less accurate than the generalized gradient approach for the materials studied. If compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  14. Ground state properties of 3d metals from self-consistent GW approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, Andrey L.

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ, with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy as compared to the local density approximation but it is less accurate than the generalized gradient approach for the materials studied. If compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  15. Two-stage color palettization for error diffusion

    NASA Astrophysics Data System (ADS)

    Mitra, Niloy J.; Gupta, Maya R.

    2002-06-01

    Image-adaptive color palettization chooses a reduced number of colors to represent an image. Palettization is one way to decrease storage and memory requirements for low-end displays. Palettization is generally approached as a clustering problem, where one attempts to find the k palette colors that minimize the average distortion for all the colors in an image. This would be the optimal approach if the image were to be displayed with each pixel quantized to the closest palette color. However, to improve the image quality, the palettization may be followed by error diffusion. In this work, we propose a two-stage palettization where the first stage finds some m << k clusters, and the second stage chooses palette points that cover the spread of each of the m clusters. After error diffusion, this method leads to better image quality at less computational cost and with faster display speed than full k-means palettization.
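
    A sketch of the two-stage idea with scikit-learn: stage 1 finds m coarse clusters; stage 2 places k/m palette points inside each cluster so that the palette covers each cluster's spread ahead of error diffusion. Using k-means again within each cluster is a stand-in for the authors' specific placement rule.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def two_stage_palette(pixels, m=4, k=16, seed=0):
        """pixels: (n, 3) RGB array; returns a (k, 3) palette."""
        stage1 = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(pixels)
        per_cluster = k // m
        palette = []
        for c in range(m):
            members = pixels[stage1.labels_ == c]
            sub = KMeans(n_clusters=min(per_cluster, len(members)),
                         n_init=10, random_state=seed).fit(members)
            palette.append(sub.cluster_centers_)
        return np.vstack(palette)

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(1000, 3)).astype(float)   # toy pixel cloud
    print(two_stage_palette(img).shape)                        # (16, 3)
    ```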

  16. The Estimation of Gestational Age at Birth in Database Studies.

    PubMed

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.

  17. Robust sleep quality quantification method for a personal handheld device.

    PubMed

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

    The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize the nonstationary noise. Sleep or wake status was decided on each axis, and the totals were finally summed to calculate sleep efficiency (SE), regarded as sleep quality in general. A sleep experiment with 14 participating subjects was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis. The activity during sleep was recorded not only by the proposed method but also by well-known commercial applications simultaneously; moreover, activity was recorded on different mattresses and locations to verify the reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average bias; the average accuracy and average absolute bias error of the comparative applications were 76.33% and 17.52%, respectively.
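
    A minimal sketch of the wavelet denoising step using PyWavelets: soft-threshold the detail coefficients of an accelerometer signal and reconstruct. The wavelet choice, decomposition level, and universal threshold are illustrative assumptions, not the paper's exact settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest detail
        thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[: len(signal)]

    # toy accelerometer magnitude: slow movement pattern plus sensor noise
    t = np.linspace(0, 20, 2048)
    acc = np.sin(t) + np.random.default_rng(0).normal(scale=0.5, size=t.size)
    clean = wavelet_denoise(acc)
    ```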

  18. Quantifying predictability variations in a low-order ocean-atmosphere model - A dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.; Dutton, John A.

    1993-01-01

    The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
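
    The "average error doubling time" used above follows from the largest Lyapunov exponent as T2 = ln(2)/lambda_max. A sketch of the standard two-trajectory estimate, using the Lorenz-63 system as a stand-in for the low-order coupled model:

    ```python
    import numpy as np

    def lorenz_step(v, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        """One explicit-Euler step of the Lorenz-63 equations."""
        x, y, z = v
        return v + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

    def largest_lyapunov(n_steps=50000, d0=1e-8, dt=0.01):
        a = np.array([1.0, 1.0, 20.0])
        b = a + np.array([d0, 0.0, 0.0])          # tiny initial perturbation
        total = 0.0
        for _ in range(n_steps):
            a, b = lorenz_step(a, dt), lorenz_step(b, dt)
            d = np.linalg.norm(b - a)
            total += np.log(d / d0)
            b = a + (b - a) * (d0 / d)            # renormalize the perturbation
        return total / (n_steps * dt)

    lam = largest_lyapunov()
    print("error doubling time:", np.log(2) / lam)   # ~0.8 time units for Lorenz-63
    ```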

  19. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

    regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone... [tabulated chi-square test statistics omitted; based on the asymptotic chi-square approximation] ...In general, ARIMA models address three... performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven

  20. Prevalence and pattern of prescription errors in a Nigerian kidney hospital.

    PubMed

    Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O

    2016-12-01

    To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital. The monthly patient load averages 60 general out-patient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 Consultant Nephrologists, 15 Medical Officers, and 14 House Officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, who conveyed such errors to the prescriber without revealing the on-going study. Prevalence of prescription errors, pattern of prescription errors, pharmacist's intervention. A total of 2,660 prescriptions (75.0%) were found to have one form of error or the other: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%), and wrong dose 51 (1.92%); no error of style was detected. Life-threatening errors were low (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare giver ratio. A medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.

  1. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

    Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types using a direct nadir technique to estimate hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those obtained using the direct nadir technique, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  2. Body mass prediction from skeletal frame size in elite athletes.

    PubMed

    Ruff, C B

    2000-12-01

    Body mass can be estimated from measures of skeletal frame size (stature and bi-iliac (maximum pelvic) breadth) fairly accurately in modern human populations. However, it is not clear whether such a technique will lead to systematic biases in body mass estimation when applied to earlier hominins. Here the stature/bi-iliac method is tested, using data available for modern Olympic and Olympic-caliber athletes, with the rationale that these individuals may be more representative of the general physique and degree of physical conditioning characteristic of earlier populations. The average percent prediction error of body mass among both male and female athletes is less than 3%, with males slightly underestimated and females slightly overestimated. Among males, the ratio of shoulder to hip (biacromial/bi-iliac) breadth is correlated with prediction error, while lower limb/trunk length has only a weak, inconsistent effect. In both sexes, athletes in "weight" events (e.g., shot put, weight-lifting), which emphasize strength, are underestimated, while those in more endurance-related events (e.g., long distance running) are overestimated. It is likely that the environmental pressures facing earlier hominins would have favored more generalized physiques adapted for a combination of strength, speed, agility, and endurance. The events most closely approximating these requirements in Olympic athletes are the decathlon, pentathlon, and wrestling, all of which have average percent prediction errors of body mass of 5% or less. Thus, "morphometric" estimation of body mass from skeletal frame size appears to work reasonably well in both "normal" and highly athletic modern humans, increasing confidence that the technique will also be applicable to earlier hominins. Copyright 2000 Wiley-Liss, Inc.

  3. Spectroscopic and Interferometric Measurements of Nine K Giant Stars

    NASA Astrophysics Data System (ADS)

    Baines, Ellyn K.; Döllinger, Michaela P.; Guenther, Eike W.; Hatzes, Artie P.; Hrudkovu, Marie; van Belle, Gerard T.

    2016-09-01

    We present spectroscopic and interferometric measurements for a sample of nine K giant stars. These targets are of particular interest because they are slated for stellar oscillation observations. Our improved parameters will directly translate into reduced errors in the final masses for these stars when interferometric radii and asteroseismic densities are combined. Here, we determine each star’s limb-darkened angular diameter, physical radius, luminosity, bolometric flux, effective temperature, surface gravity, metallicity, and mass. When we compare our interferometric and spectroscopic results, we find no systematic offsets in the diameters and the values generally agree within the errors. Our interferometric temperatures for seven of the nine stars are hotter than those determined from spectroscopy with an average difference of about 380 K.

  4. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  5. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as an input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicon tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
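
    A minimal PyTorch sketch of the Siamese idea: a shared 3D-convolutional encoder embeds the undeformed reference and the deformed sample volume, and the concatenated embeddings are regressed to the three force components. Layer sizes are illustrative; the paper uses a deeper residual bottleneck encoder.

    ```python
    import torch
    import torch.nn as nn

    class SiameseForceNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(                  # shared weights for both volumes
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))

        def forward(self, ref, sample):
            f = torch.cat([self.encoder(ref), self.encoder(sample)], dim=1)
            return self.head(f)                            # (batch, 3) force vector

    net = SiameseForceNet()
    ref = torch.randn(2, 1, 32, 32, 32)                    # reference OCT volumes
    sam = torch.randn(2, 1, 32, 32, 32)                    # deformed sample volumes
    print(net(ref, sam).shape)                             # torch.Size([2, 3])
    ```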

  6. Evolution of errors in the altimetric bathymetry model used by Google Earth and GEBCO

    NASA Astrophysics Data System (ADS)

    Marks, K. M.; Smith, W. H. F.; Sandwell, D. T.

    2010-09-01

    We analyze errors in the global bathymetry models of Smith and Sandwell that combine satellite altimetry with acoustic soundings and shorelines to estimate depths. Versions of these models have been incorporated into Google Earth and the General Bathymetric Chart of the Oceans (GEBCO). We use Japan Agency for Marine-Earth Science and Technology (JAMSTEC) multibeam surveys not previously incorporated into the models as "ground truth" to compare against model versions 7.2 through 12.1, defining vertical differences as "errors." Overall error statistics improve over time: 50th percentile errors declined from 57 to 55 to 49 m, and 90th percentile errors declined from 257 to 235 to 219 m, in versions 8.2, 11.1 and 12.1. This improvement is partly due to an increasing number of soundings incorporated into successive models, and partly to improvements in the satellite gravity model. Inspection of specific sites reveals that changes in the algorithms used to interpolate across survey gaps with altimetry have affected some errors. Versions 9.1 through 11.1 show a bias in the scaling from gravity in milliGals to topography in meters that affected the 15-160 km wavelength band. Regionally averaged (>160 km wavelength) depths have accumulated error over successive versions 9 through 11. These problems have been mitigated in version 12.1, which shows no systematic variation of errors with depth. Even so, version 12.1 is in some respects not as good as version 8.2, which employed a different algorithm.

  7. A state-based probabilistic model for tumor respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth

    2010-12-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
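
    A stripped-down sketch of the state-based prediction idea: estimate a transition matrix over the discrete breathing states (exhale, end of exhale, inhale) from an observed sequence, then predict the most likely next state. The paper's full HMM additionally infers hidden states from clustered velocity observables; that layer is omitted here.

    ```python
    import numpy as np

    STATES = ["EX", "EOE", "IN"]   # exhale, end of exhale, inhale

    def fit_transition_matrix(seq):
        idx = {s: i for i, s in enumerate(STATES)}
        counts = np.ones((3, 3))                      # add-one smoothing
        for a, b in zip(seq[:-1], seq[1:]):
            counts[idx[a], idx[b]] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def predict_next(seq, trans):
        row = trans[STATES.index(seq[-1])]
        return STATES[int(np.argmax(row))]

    trace = ["IN", "EX", "EX", "EOE", "IN", "EX", "EX", "EOE", "IN"]
    T = fit_transition_matrix(trace)
    print(predict_next(trace, T))                     # most likely state after "IN"
    ```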

  8. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable by the users partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: Error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing ratescan be improved further through long-term use of the ITCI.

  9. Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas

    USGS Publications Warehouse

    Puente, Celso

    1978-01-01

    The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.

  10. Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties

    NASA Astrophysics Data System (ADS)

    Hua, W.

    2017-12-01

    Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated by the Empirical Orthogonal Function (EOF)-based optimal interpolation (OA) method, with the data errors taken into account. The results show that our estimate is more precise and robust than the results from a simple average, which provides a better way for past climate reconstruction. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, the test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.
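
    A compact sketch of the EOF-based reconstruction idea: compute EOFs of the station anomaly field by SVD, reconstruct the field from the leading modes, and average to a regional mean. This is a stand-in for the paper's optimal interpolation, which additionally weights by the data errors; the data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    field = rng.normal(size=(100, 40))             # (time, stations) SAT anomalies

    anom = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    k = 5                                          # leading EOF modes retained
    recon = (u[:, :k] * s[:k]) @ vt[:k]            # truncated EOF reconstruction
    regional_mean = recon.mean(axis=1)             # regional average SAT series
    print(regional_mean.shape)                     # (100,)
    ```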

  11. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
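
    For orientation, the standard fugacity-ratio estimate from fusion entropy and melting temperature (heat-capacity term neglected) is ln F = -dS_fus (T_m - T) / (R T). The paper's contribution is compound-class regressions that replace dS_fus (and the heat-capacity correction) with molecular weight and melting temperature; those fitted coefficients are not reproduced in this sketch.

    ```python
    import math

    R = 8.314  # gas constant, J/(mol K)

    def fugacity_ratio(ds_fus, t_m, t=298.15):
        """F = f_solid / f_liquid for a solid at temperature t (K),
        given fusion entropy ds_fus (J/(mol K)) and melting point t_m (K)."""
        return math.exp(-ds_fus * (t_m - t) / (R * t))

    # e.g. naphthalene: dS_fus ~ 53 J/(mol K), T_m = 353.4 K
    print(fugacity_ratio(53.0, 353.4))   # ~0.31
    ```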

  12. Analysis of the Magnitude and Frequency of Peak Discharge and Maximum Observed Peak Discharge in New Mexico and Surrounding Areas

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2008-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges, culverts, and open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value is 62, and median value is 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value is 67, and median value is 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site by using a drainage-area-ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
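
    The drainage-area-ratio transfer described above is compact enough to show directly: scale the gaged-site T-year peak by the area ratio raised to the regional regression exponent. The exponent value in the example is a hypothetical placeholder; in practice it comes from the regional equation for the recurrence interval of interest.

    ```python
    def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, b):
        """Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** b"""
        return q_gaged * (area_ungaged / area_gaged) ** b

    # e.g. a 100-year peak of 850 ft^3/s at a 42 mi^2 gage, ungaged site at
    # 30 mi^2 on the same stream, hypothetical regional exponent b = 0.55:
    print(transfer_peak_discharge(850.0, 42.0, 30.0, 0.55))
    ```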

  13. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

    Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.

  14. Local ensemble transform Kalman filter for ionospheric data assimilation: Observation influence analysis during a geomagnetic storm event

    NASA Astrophysics Data System (ADS)

    Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex

    2017-09-01

    We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.
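
    For context, the analysis step underlying ensemble Kalman filtering can be sketched in a few lines. The version below is the generic perturbed-observation form, not the LETKF's local transform used in the paper: the ensemble covariance maps observation increments back onto the model state. Dimensions are toy-sized.

    ```python
    import numpy as np

    def enkf_update(X, y, H, r, rng):
        """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
        H: (n_obs, n_state) observation operator; r: obs error variance."""
        n_obs, n_ens = len(y), X.shape[1]
        Xp = X - X.mean(axis=1, keepdims=True)
        Pf = Xp @ Xp.T / (n_ens - 1)                      # ensemble covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + r * np.eye(n_obs))
        Y = y[:, None] + rng.normal(scale=np.sqrt(r), size=(n_obs, n_ens))
        return X + K @ (Y - H @ X)                        # analysis ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 20))                         # 10 state vars, 20 members
    H = np.zeros((3, 10)); H[[0, 1, 2], [2, 5, 8]] = 1.0  # observe 3 components
    Xa = enkf_update(X, rng.normal(size=3), H, r=0.1, rng=rng)
    ```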

  15. Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.

    PubMed

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo

    2014-05-19

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
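
    A minimal sketch of the virtual-sensor idea with a simple recurrent network in Keras; the paper uses MATLAB's Layer-Recurrent architecture, for which the SimpleRNN layer is only a rough analogue. Windows of spindle power consumption are mapped to wheel wear and surface roughness; the data here are synthetic placeholders.

    ```python
    import numpy as np
    import tensorflow as tf

    T, F = 50, 1                                    # window length, features (power)
    X = np.random.rand(200, T, F).astype("float32")
    y = np.random.rand(200, 2).astype("float32")    # targets: [wheel wear, roughness Ra]

    model = tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(32, input_shape=(T, F)),
        tf.keras.layers.Dense(2),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)
    print(model.predict(X[:1], verbose=0))          # predicted [wear, roughness]
    ```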

  16. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew P.

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.
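
    The statistic named above, the average value-at-risk (also called CVaR or the superquantile), and a quadrature-style approximation of a spectral risk measure as a weighted sum of CVaRs can be sketched as follows. The confidence levels and weights are illustrative, not the paper's optimal quadrature.

    ```python
    import numpy as np

    def cvar(losses, alpha):
        """Average of the worst (1 - alpha) fraction of losses."""
        q = np.quantile(losses, alpha)
        return losses[losses >= q].mean()

    def spectral_risk(losses, levels=(0.5, 0.9, 0.99), weights=(0.2, 0.5, 0.3)):
        """Weighted combination of CVaRs (weights nonnegative, summing to 1)."""
        return sum(w * cvar(losses, a) for a, w in zip(levels, weights))

    losses = np.random.default_rng(0).normal(size=100_000)
    print(cvar(losses, 0.95), spectral_risk(losses))
    ```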

  17. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE PAGES

    Kouri, Drew P.

    2018-05-24

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.

  18. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD=1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21 mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
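
    The orthogonal-components assumption used above amounts to a quadrature difference, which is easy to verify:

    ```python
    # due-to-insertion error inferred from overall and before-insertion errors,
    # assuming the two components are orthogonal (add in quadrature)
    overall, before = 2.5, 1.3                       # mm
    due_to_insertion = (overall**2 - before**2) ** 0.5
    print(round(due_to_insertion, 2))                # ~2.14 mm, consistent with the
                                                     # 2.13 mm quoted in the abstract
    ```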

  19. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Calibration system for radon EEC measurements.

    PubMed

    Mostafa, Y A M; Vasyanovich, M; Zhukovsky, M; Zaitceva, N

    2015-06-01

    The measurement of radon equivalent equilibrium concentration (EECRn) is a very simple and quick technique for the estimation of radon progeny levels in dwellings or working places. The most typical methods of EECRn measurement are alpha radiometry or alpha spectrometry. In such techniques, the influence of alpha particle absorption in filters and of filter effectiveness should be taken into account. In the authors' work, it is demonstrated that a more precise and less complicated calibration of EECRn-measuring equipment can be conducted by using a gamma spectrometer as a reference measuring device. It was demonstrated that for this calibration technique the systematic error does not exceed 3%. The random error of (214)Bi activity measurements is in the range 3-6%. In general, both of these errors can be decreased. The measurements of EECRn by gamma spectrometry and improved alpha radiometry are in good agreement, but a systematic shift between average values can be observed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
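
    A minimal sketch of least-squares gradient reconstruction at a node: fit du = g · dx over edge-connected neighbors and solve in the least-squares sense. The quadratic fit used in the scheme above would augment the system with second-order terms; the unweighted linear version is shown for brevity.

    ```python
    import numpy as np

    def ls_gradient(x0, u0, neighbors_x, neighbors_u):
        """x0: (2,) node coords; neighbors_x: (k, 2) neighbor coords;
        returns the (2,) least-squares gradient at the node."""
        dx = neighbors_x - x0
        du = neighbors_u - u0
        g, *_ = np.linalg.lstsq(dx, du, rcond=None)   # solve dx @ g ~ du
        return g

    x0 = np.array([0.0, 0.0])
    nx = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.1], [0.2, -1.0]])
    u = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]   # exact linear field
    print(ls_gradient(x0, u(x0), nx, u(nx)))          # recovers ~[3, -2]
    ```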

  2. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  3. Prevalence of medication errors in primary health care at Bahrain Defence Force Hospital – prescription-based study

    PubMed Central

    Aljasmi, Fatema; Almalood, Fatema

    2018-01-01

    Background One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304

  4. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead were estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
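
    For readers unfamiliar with the method, the Bowen-ratio energy budget closes the surface balance as LE = (Rn − G)/(1 + β), with H = β·LE. A minimal sketch using illustrative numbers, not the paper's data:

    ```python
    # Bowen-ratio energy-budget ET, a minimal sketch (numbers illustrative).
    Rn, G = 450.0, 30.0     # net radiation and ground/water heat flux, W m-2
    beta = 0.25             # Bowen ratio H/LE from paired T and e gradients

    LE = (Rn - G) / (1 + beta)        # latent heat flux, W m-2
    H = beta * LE                     # sensible heat flux, W m-2
    lam = 2.45e6                      # latent heat of vaporization, J kg-1
    et_mm_per_hr = LE / lam * 3600.0  # kg m-2 s-1 equals mm/s, times 3600
    print(f"LE={LE:.0f} W/m2, H={H:.0f} W/m2, ET={et_mm_per_hr:.2f} mm/h")
    ```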

  5. Deposition of aerially applied BT in an oak forest and its prediction with the FSCBG model

    USGS Publications Warehouse

    Anderson, Dean E.; Miller, David R.; Wang, Yansen; Yendol, William G.; Mierzejewski, Karl; McManus, Michael L.

    1992-01-01

    Data are provided from 17 single-swath aerial spray trials that were conducted over a fully leafed, 16-m tall, mixed oak forest. The distribution of cross-swath spray deposits was sampled at the top of the canopy and below the canopy. Micrometeorological conditions were measured above and within the canopy during the spray trials. The USDA Forest Service FSCBG (Forest Service-Cramer-Barry-Grim) model was run to predict the target sampler catch for each trial using forest stand, airplane-application-equipment configuration, and micrometeorological conditions as inputs. Observations showed an average cross-swath deposition of 100 IU cm−2 with large run-to-run variability in deposition patterns, magnitudes, and drift. Eleven percent of the spray material that reached the top of the canopy penetrated through the tree canopy to the forest floor. The FSCBG predictions of the ensemble-averaged deposition were within 17% of the measured deposition at the canopy top and within 8% on the ground beneath the canopy. Run-to-run deposit predictions by FSCBG were considerably less variable than the measured deposits. Individual run predictions were much less accurate than the ensemble-averaged predictions as demonstrated by an average root-mean-square error (RMSE) of 27.9 IU cm−2 at the top of the canopy. Comparisons of the differences between predicted and observed deposits indicated that the model accuracy was sensitive to atmospheric stability conditions. In neutral and stable conditions, a regular pattern of error was indicated by overprediction of the canopy-top deposit at distances from 0 to 20 m downwind from the flight line and underprediction of the deposit both farther downwind than 20 m and upwind of the flight line. In unstable conditions the model generally underpredicted the deposit downwind from the flight line, but showed no regular pattern of error.

  6. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators.

    PubMed

    Belley, Matthew D; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J; Chen, Benny J; Dewhirst, Mark W; Yoshizumi, Terry T

    2014-03-01

    Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Organ doses were simulated in the Geant4 Application for Tomographic Emission (GATE) toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Average doses in soft-tissue organs were found to vary by as much as 23%-32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.

  7. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    PubMed Central

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.

    2014-01-01

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 Application for Tomographic Emission (GATE) toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs. PMID:24593746

  8. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao

    2014-03-15

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 Application for Tomographic Emission (GATE) toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.

  9. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 km × 500 km area.
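
    The sampling-error experiment can be caricatured in a few lines: build an intermittent rain series, let a "satellite" see it only at a fixed revisit interval, and look at the spread of the resulting monthly means. The rain statistics below are crude placeholders, not tuned to GATE, so the percentages are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical month of 5-min area-averaged rain rates: intermittent,
    # exponentially distributed when raining.
    n = 30 * 24 * 12
    raining = rng.random(n) < 0.1
    rain = np.where(raining, rng.exponential(2.0, n), 0.0)
    truth = rain.mean()

    # A low-orbit satellite sees the grid box only once every ~12 hours.
    revisit = 12 * 12
    errors = []
    for trial in range(2000):
        phase = rng.integers(revisit)          # random overpass timing
        errors.append(rain[phase::revisit].mean() - truth)

    errors = np.asarray(errors)
    print(f"truth {truth:.3f} mm/h, rms sampling error {errors.std():.3f} "
          f"({100 * errors.std() / truth:.0f}% of mean)")
    ```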

  10. Liability claims and costs before and after implementation of a medical error disclosure program.

    PubMed

    Kachalia, Allen; Kaufman, Samuel R; Boothman, Richard; Anderson, Susan; Welch, Kathleen; Saint, Sanjay; Rogers, Mary A M

    2010-08-17

    Background: Since 2001, the University of Michigan Health System (UMHS) has fully disclosed and offered compensation to patients for medical errors. Objective: To compare liability claims and costs before and after implementation of the UMHS disclosure-with-offer program. Design: Retrospective before-after analysis from 1995 to 2007. Setting: Public academic medical center and health system. Participants: Inpatients and outpatients involved in claims made to UMHS. Measurements: Number of new claims for compensation, number of claims compensated, time to claim resolution, and claims-related costs. Results: After full implementation of a disclosure-with-offer program, the average monthly rate of new claims decreased from 7.03 to 4.52 per 100,000 patient encounters (rate ratio [RR], 0.64 [95% CI, 0.44 to 0.95]). The average monthly rate of lawsuits decreased from 2.13 to 0.75 per 100,000 patient encounters (RR, 0.35 [CI, 0.22 to 0.58]). Median time from claim reporting to resolution decreased from 1.36 to 0.95 years. Average monthly cost rates decreased for total liability (RR, 0.41 [CI, 0.26 to 0.66]), patient compensation (RR, 0.41 [CI, 0.26 to 0.67]), and non-compensation-related legal costs (RR, 0.39 [CI, 0.22 to 0.67]). Limitations: The study design cannot establish causality. Malpractice claims generally declined in Michigan during the latter part of the study period. The findings might not apply to other health systems, given that UMHS has a closed staff model covered by a captive insurance company and often assumes legal responsibility. Conclusion: The UMHS implemented a program of full disclosure of medical errors with offers of compensation without increasing its total claims and liability costs. Primary Funding Source: Blue Cross Blue Shield of Michigan Foundation.

  11. A field evaluation of a piezo-optical dosimeter for environmental monitoring of nitrogen dioxide.

    PubMed

    Wright, John D; Schillinger, Eric F J; Cazier, Fabrice; Nouali, Habiba; Mercier, Agnes; Beaugard, Charles

    2004-06-01

    Measurements of 8-hour time-weighted average NO(2) concentrations are reported at 7 different locations in the region of Dunkirk over 5 consecutive days using PiezOptic monitoring badges previously calibrated for the range 0-70 ppb, together with data from chemiluminescent analysers at 5 sites (4 fixed and 1 mobile). The latter facilities also provided data on ozone and NO concentrations and meteorological conditions. Daily averages from the two pairs of badges in different types of sampling cover at each site have been compared with data from the chemiluminescent analysers, and found largely to agree within error margins of ±30%. Although NO(2) and ozone concentrations were low, rendering detailed discussion impossible, the general features followed expected patterns.

  12. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5° × 2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5° × 2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.

  13. Evaluation of monthly rainfall estimates derived from the special sensor microwave/imager (SSM/I) over the tropical Pacific

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Avery, Susan K.

    1995-01-01

    Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5° × 2.5° latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.

  14. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order-statistics-based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
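
    The central claim, added error proportional to boundary variance and a 1/N variance reduction for N uncorrelated unbiased classifiers, is easy to check numerically. A minimal sketch (the Gaussian boundary-offset model is an assumption for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma_b = 0.5   # st. dev. of each classifier's boundary around Bayes
    for N in (1, 2, 5, 10, 25):
        # Each trial draws N unbiased, uncorrelated boundary estimates; the
        # output-averaged combiner's effective boundary is their mean.
        boundaries = rng.normal(0.0, sigma_b, size=(100_000, N))
        var_combined = boundaries.mean(axis=1).var()
        print(f"N={N:2d}: boundary variance {var_combined:.4f} "
              f"(sigma^2/N = {sigma_b**2 / N:.4f})")
    ```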

  15. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
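
    The mechanism is Jensen's inequality at the angular point: zero-mean HU noise pushed through a kinked HU-to-RSP curve produces a nonzero mean RSP shift. A toy check with an invented two-slope calibration (not the paper's curve):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical piecewise-linear HU-to-RSP calibration with an angular
    # point at HU = 0 (slope change), loosely mimicking a stoichiometric fit.
    def hu_to_rsp(hu):
        return np.where(hu < 0.0, 1.0 + 1.0e-3 * hu, 1.0 + 1.5e-3 * hu)

    true_hu = 0.0                      # material sitting exactly on the kink
    for sigma in (0.0, 10.0, 30.0):    # zero-mean CT noise levels, HU
        hu = true_hu + rng.normal(0.0, sigma, 1_000_000)
        bias = hu_to_rsp(hu).mean() - hu_to_rsp(true_hu)
        print(f"noise sd {sigma:4.0f} HU -> mean RSP bias {bias:+.2e}")
    ```

    The bias grows with the noise level even though the noise itself averages to zero, which is exactly why it acts as a systematic rather than a random error.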

  16. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
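
    The upper bound referred to is the Breiman-style strength-correlation bound for ensembles; assuming the familiar form PE* <= rho_bar * (1 - s^2) / s^2, a tiny sketch shows how strength and correlation move it:

    ```python
    def breiman_bound(rho_bar: float, s: float) -> float:
        """Breiman-style upper bound on ensemble generalization error,
        PE* <= rho_bar * (1 - s**2) / s**2, for mean base-classifier
        correlation rho_bar and average strength s (0 < s <= 1)."""
        return rho_bar * (1.0 - s * s) / (s * s)

    # Stronger and less correlated base classifiers tighten the bound
    # (values above 1 mean the bound is vacuous for that regime).
    for rho, s in [(0.4, 0.3), (0.4, 0.6), (0.2, 0.6)]:
        print(f"rho={rho}, s={s}: bound {breiman_bound(rho, s):.2f}")
    ```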

  17. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  18. [Comparison of predictive effect between the single auto regressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model on the incidence of scarlet fever].

    PubMed

    Zhu, Yu; Xia, Jie-lai; Wang, Jing

    2009-09-01

    To apply the single autoregressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model to research on the incidence of scarlet fever. An ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values from the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After the GRNN was trained, the fitting performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting performance of the ARIMA-GRNN combination model was better than that of the single ARIMA model, and the combination model has practical value in research on time-series data such as the incidence of scarlet fever.
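
    A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression, so the two-stage scheme can be sketched compactly. Everything below, including the synthetic series, the ARIMA order, the bandwidth, and the MER definition (mean absolute error over the mean), is an assumption for illustration, not the study's setup:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    t = np.arange(84)   # 7 years of monthly incidence, synthetic
    y = 10 + 3 * np.sin(t * 2 * np.pi / 12) + rng.normal(0, 0.5, 84)

    # Stage 1: ARIMA fit to the monthly incidence series.
    arima = ARIMA(y, order=(1, 0, 1)).fit()
    fitted = arima.fittedvalues

    # Stage 2: GRNN = Nadaraya-Watson kernel regression, with the ARIMA
    # fitted values as input and the observed incidence as output.
    def grnn_predict(x_train, y_train, x_new, sigma=0.3):
        w = np.exp(-((x_new[:, None] - x_train[None, :]) ** 2)
                   / (2 * sigma**2))
        return (w @ y_train) / w.sum(axis=1)

    combined = grnn_predict(fitted, y, fitted)
    mer = lambda f: np.mean(np.abs(y - f)) / y.mean()  # assumed MER form
    print(f"MER ARIMA {mer(fitted):.3f} vs ARIMA-GRNN {mer(combined):.3f}")
    ```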

  19. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for ground-to-train communication along a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average capacity, the strength of atmospheric turbulence, the variance of pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander accordingly. If the system adopts automatic tracking of the beam at the receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel will achieve a maximum value. The impact of variations in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.
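
    Because a gamma-gamma variate is the product of two independent unit-mean gamma variates, the average capacity is straightforward to estimate by Monte Carlo. The sketch below ignores fog, pointing error, and the track geometry, and assumes a plain snr*I mapping, so it is only the turbulence-averaging core of such a model:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def avg_capacity(alpha, beta, snr_db, n=1_000_000):
        """Monte Carlo ergodic capacity over gamma-gamma turbulence.
        I is the product of two independent unit-mean gamma variates;
        the snr * I mapping is an assumed detection model."""
        I = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
        snr = 10 ** (snr_db / 10)
        return np.mean(np.log2(1 + snr * I))

    for a, b in [(4.0, 2.0), (2.5, 1.5)]:  # smaller a, b: stronger turbulence
        print(f"alpha={a}, beta={b}: C = {avg_capacity(a, b, 20):.2f} bit/s/Hz")
    ```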

  20. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
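
    The proposed splitting can be shown as a skeleton loop: Newtonian velocity-Verlet steps every dt, with the thermostat touched only every K steps (barostat omitted). The force law and the simple velocity-rescaling thermostat below are placeholders, not the authors' integrator:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n, dt, K, T_target = 64, 0.002, 50, 1.0   # reduced units, unit masses
    x = rng.normal(size=(n, 3))
    v = rng.normal(size=(n, 3))
    force = lambda x: -x                      # harmonic trap as a stand-in

    for step in range(1, 5001):
        v += 0.5 * dt * force(x)              # velocity Verlet half-kick
        x += dt * v                           # drift
        v += 0.5 * dt * force(x)              # second half-kick
        if step % K == 0:                     # infrequent thermostat update:
            T_inst = (v**2).sum() / (3 * n)   # rescale to the target T
            v *= np.sqrt(T_target / T_inst)

    print("final kinetic temperature:", (v**2).sum() / (3 * n))
    ```

    In a parallel code the thermostat/barostat step is the only one needing global communication, which is why applying it every K steps rather than every step pays off.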

  1. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
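
    The dependence of the standard error of a time average on temporal correlation is usually handled with an effective sample size; for an AR(1)-like series, n_eff = n(1 − ρ)/(1 + ρ). A short sketch with a synthetic red-noise series standing in for daily total ozone anomalies:

    ```python
    import numpy as np

    def mean_standard_error(x):
        """Standard error of a time average, inflated for lag-1
        autocorrelation via the AR(1) effective-sample-size formula
        n_eff = n * (1 - rho) / (1 + rho)."""
        x = np.asarray(x, float)
        n = x.size
        rho = np.corrcoef(x[:-1], x[1:])[0, 1]
        n_eff = n * (1 - rho) / (1 + rho)
        return x.std(ddof=1) / np.sqrt(n_eff)

    rng = np.random.default_rng(6)
    x = np.zeros(365)
    for t in range(1, 365):                 # AR(1) red noise, rho = 0.8
        x[t] = 0.8 * x[t - 1] + rng.normal()
    print(f"naive SE {x.std(ddof=1) / np.sqrt(365):.3f}, "
          f"autocorrelation-corrected SE {mean_standard_error(x):.3f}")
    ```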

  2. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
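
    In the pure additive-Gaussian limit the comparison is transparent: the exact error probability is the Q-function, while the Chernoff bound exp(−d²/2) is looser and so predicts a worse sensitivity. A sketch of just that limiting case (the paper's shot-noise and intersymbol-interference terms are omitted):

    ```python
    import math

    def q_function(x):      # exact Gaussian tail via the complementary erf
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def chernoff_bound(x):  # Q(x) <= exp(-x^2 / 2)
        return math.exp(-x * x / 2.0)

    for d in (2.0, 4.0, 6.0):   # normalized decision distances
        print(f"d={d}: Q={q_function(d):.3e}, "
              f"Chernoff={chernoff_bound(d):.3e}")
    ```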

  3. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reducing the frequency and length of downtime in order to minimize medication errors during such periods.

  4. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid overestimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that the prevalence of hospital medical errors for the elderly is greater than that for the general population, and the associated cost of medical errors in the elderly population is quite substantial. Hospitals that further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors, as a disproportionate percentage of medical errors occur in this age group.

  5. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  6. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  7. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
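
    Why "log after averaging" helps can be seen with a toy Jensen-bias experiment: the log of a noisy pulse energy is biased low, and more so for the weaker on-line return, while averaging the energies before taking a single log suppresses the effect. Noise magnitudes below are illustrative only, not the paper's instrument model:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Per-shot pulse-pair energies carrying the true differential-absorption
    # optical depth; the on-line return is weaker, so the same additive
    # detector noise is relatively larger on it.
    daod, n, noise_sd = 0.5, 50_000, 0.07
    e_off = 1.0 + rng.normal(0.0, noise_sd, n)
    e_on = np.exp(-2 * daod) + rng.normal(0.0, noise_sd, n)

    log_after_avg = 0.5 * np.log(e_off.mean() / e_on.mean())
    avg_before_log = 0.5 * np.mean(np.log(e_off) - np.log(e_on))
    print(f"log after averaging  {log_after_avg:.4f}")
    print(f"averaging before log {avg_before_log:.4f}   (truth {daod})")
    ```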

  8. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.

  9. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  10. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
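
    The classical-versus-Berkson contrast can be reproduced with ordinary linear regression as a stand-in for the paper's Poisson time-series setting: classical error attenuates the per-unit slope by the reliability ratio, while Berkson error does not. A sketch with invented effect sizes:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, beta = 50_000, 0.05
    x_true = rng.normal(0.0, 1.0, n)               # true exposure (log scale)
    y = beta * x_true + rng.normal(0.0, 0.1, n)    # health signal

    slope = lambda x: np.polyfit(x, y, 1)[0]

    # Classical: measured = truth + noise -> slope shrinks by the
    # reliability ratio var(x) / (var(x) + var(u)) = 1/2 here.
    x_classical = x_true + rng.normal(0.0, 1.0, n)
    # Berkson: truth = assigned + noise -> per-unit slope unattenuated.
    x_assigned = rng.normal(0.0, 1.0, n)
    y_berkson = (beta * (x_assigned + rng.normal(0.0, 1.0, n))
                 + rng.normal(0.0, 0.1, n))

    print(f"true slope {slope(x_true):.4f}")
    print(f"classical  {slope(x_classical):.4f}  (~ beta/2)")
    print(f"berkson    {np.polyfit(x_assigned, y_berkson, 1)[0]:.4f}")
    ```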

  11. CCSDT calculations of molecular equilibrium geometries

    NASA Astrophysics Data System (ADS)

    Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve

    1997-08-01

    CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O, and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core-correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average by only 0.11 pm from experiment.

  12. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
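
    The logarithmic-regression step amounts to ordinary least squares in log10 space. A sketch with invented sites and an assumed form of the basin-characteristic equation (drainage area plus percent of basin above 10,000 feet):

    ```python
    import numpy as np

    # Hypothetical sites: drainage area (mi^2), percent of basin above
    # 10,000 ft, and a monthly mean streamflow (ft^3/s); values invented.
    da = np.array([12.0, 25.0, 40.0, 66.0, 103.0, 150.0])
    hi = np.array([2.0, 8.0, 5.0, 15.0, 22.0, 30.0])
    q = np.array([3.1, 8.0, 10.5, 24.0, 55.0, 90.0])

    # Logarithmic regression: log10 Q = b0 + b1*log10(DA) + b2*log10(HI + 1)
    X = np.column_stack([np.ones_like(da), np.log10(da), np.log10(hi + 1)])
    b, *_ = np.linalg.lstsq(X, np.log10(q), rcond=None)
    print("coefficients:", b)
    print("back-transformed predictions:", 10 ** (X @ b))
    ```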

  13. Neural activity during affect labeling predicts expressive writing effects on well-being: GLM and SVM approaches

    PubMed Central

    Memarian, Negar; Torre, Jared B.; Haltom, Kate E.; Stanton, Annette L.

    2017-01-01

    Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity of specific frontal regions and amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machine regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average prediction error [percent root-mean-square error (RMSE%)] of 0.85%. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, RVLPFC and left amygdala predicted benefits due to expressive writing in satisfaction with life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. PMID:28992270
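
    The prediction pipeline (features from labeled brain regions, support vector regression, cross-validated error) can be sketched with scikit-learn; the features, sample size, and model settings below are hypothetical stand-ins, not the study's configuration:

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(9)

    # Hypothetical stand-ins: ROI activations during affect labeling (rows =
    # participants) and change in a well-being outcome 3 months later.
    X = rng.normal(size=(40, 6))          # e.g. RVLPFC, amygdalae, controls
    y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.3, 40)

    pred = cross_val_predict(SVR(kernel="linear", C=1.0), X, y, cv=5)
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    print(f"cross-validated RMSE {rmse:.3f}")
    ```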

  14. Computational and analytical comparison of flux discretizations for the semiconductor device equations beyond Boltzmann statistics

    NASA Astrophysics Data System (ADS)

    Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen

    2017-10-01

    We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
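
    For orientation, the classical Boltzmann-statistics Scharfetter-Gummel flux that the paper generalizes can be written with the Bernoulli function B(x) = x/(e^x − 1), evaluated stably near zero. Sign conventions vary between authors; this sketch fixes one:

    ```python
    import numpy as np

    def bernoulli(x):
        """B(x) = x / (exp(x) - 1), with the x -> 0 limit handled stably."""
        x = np.asarray(x, float)
        out = np.empty_like(x)
        small = np.abs(x) < 1e-8
        out[small] = 1.0 - x[small] / 2.0
        out[~small] = x[~small] / np.expm1(x[~small])
        return out

    def sg_flux(n_k, n_k1, dpsi, d_over_h=1.0):
        """Scharfetter-Gummel flux between collocation points k and k+1 for
        a potential drop dpsi (in thermal voltages). This is the Boltzmann
        reference flux; the paper's schemes modify it for Fermi-Dirac
        statistics (e.g. by averaging the nonlinear diffusion)."""
        return d_over_h * (bernoulli(-dpsi) * n_k - bernoulli(dpsi) * n_k1)

    # Pure diffusion (dpsi = 0) recovers D*(n_k - n_k1)/h; large dpsi is
    # drift-dominated and upwinds automatically.
    print(sg_flux(1.0, 2.0, np.array([0.0, 5.0])))
    ```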

  15. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
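
    The drainage-area adjustment mentioned at the end is a one-line transfer formula, Q_u = Q_g (A_u/A_g)^x; the exponent is region-specific (often taken from the regional regression), and the value below is purely illustrative:

    ```python
    def adjust_peak_by_area(q_gage, a_gage, a_ungaged, exponent=0.8):
        """Transfer a T-year discharge from a streamgage to an ungaged site
        on the same stream via Q_u = Q_g * (A_u / A_g)**x. The exponent 0.8
        is a hypothetical placeholder, not Vermont's published value."""
        return q_gage * (a_ungaged / a_gage) ** exponent

    # Hypothetical: 1-percent AEP peak of 5,000 ft3/s at a 120 mi2 gage,
    # estimated at an ungaged point on the same stream draining 90 mi2.
    print(f"{adjust_peak_by_area(5000.0, 120.0, 90.0):.0f} ft3/s")
    ```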

  16. A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy

    PubMed Central

    Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.

    2011-01-01

    Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
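
    Alignment by mass center and principal axes comes straight from image moments; noise perturbs those moments, producing the random shift and rotation the paper models as Gaussian. A minimal sketch with a synthetic particle:

    ```python
    import numpy as np

    def centroid_and_axis(img):
        """Mass center and principal-axis angle of a 2D image from its
        first and second moments, as used to pre-align particle images."""
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        m = img.sum()
        cx, cy = (x * img).sum() / m, (y * img).sum() / m
        mu20 = ((x - cx) ** 2 * img).sum() / m
        mu02 = ((y - cy) ** 2 * img).sum() / m
        mu11 = ((x - cx) * (y - cy) * img).sum() / m
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
        return (cx, cy), theta

    # Noise shifts the recovered center and angle from image to image;
    # that jitter is the misalignment treated as Gaussian in the paper.
    rng = np.random.default_rng(10)
    img = np.zeros((64, 64))
    img[28:36, 20:44] = 1.0                           # a bright bar
    noisy = img + rng.normal(0.0, 0.2, img.shape).clip(min=0)
    print(centroid_and_axis(img))
    print(centroid_and_axis(noisy))
    ```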

  17. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus the Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.

  18. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    NASA Astrophysics Data System (ADS)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to show the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced by noise in the SSM/I motions, so blending is not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25-30% over modeled motions and 40-45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
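    In its scalar limit, the optimal interpolation update weights the model and SSM/I motions by their error variances. A minimal sketch for one velocity component; the full scheme in the thesis also spreads innovations spatially through covariance functions, and the numbers below are illustrative:

    ```python
    def oi_blend(model_u, ssmi_u, var_model, var_obs):
        """Pointwise optimal-interpolation update for one velocity component.
        The gain is the scalar limit of K = P H^T (H P H^T + R)^-1; larger
        model error variance pulls the analysis toward the observation."""
        gain = var_model / (var_model + var_obs)
        return model_u + gain * (ssmi_u - model_u)

    # Illustrative values (m/s and variances); not taken from the thesis.
    u_analysis = oi_blend(model_u=0.12, ssmi_u=0.05, var_model=0.004, var_obs=0.006)
    print(u_analysis)
    ```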

  19. Measurements of stem diameter: implications for individual- and stand-level errors.

    PubMed

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was small (-0.17 cm on average), although it varied between -0.10 and -0.52 cm across stands. Similarly, random error was relatively small, with standard deviations (and percentage coefficients of variation) averaging 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when monitoring relatively small changes in permanent sample plots (e.g. National Forest Inventories), noting that care is required in irregular-shaped, large-single-stemmed individuals, and (ii) use of a SDG to maximise efficiency when using inventory methods to assess basal area, and hence biomass or wood volume, at the stand scale (i.e. in studies of impacts of management or site quality) where there are budgetary constraints, noting the importance of sufficient sample sizes to ensure that the population sampled represents the true population.
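    The stand-scale finding that sampling error dominates diameter measurement error can be illustrated with a small simulation; the 0.36 cm random error below matches the SDG figure quoted above, while the stand itself is invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_d = rng.lognormal(mean=3.0, sigma=0.35, size=2000)  # stand diameters, cm

    def basal_area_m2(d_cm):
        return np.pi * (d_cm / 200.0) ** 2  # diameter in cm -> area in m^2

    def stand_estimate(n_sample, meas_sd_cm):
        """Stand basal area estimated from a random sample of trees,
        with optional Gaussian diameter measurement error."""
        sample = rng.choice(true_d, size=n_sample, replace=False)
        measured = sample + rng.normal(0.0, meas_sd_cm, size=n_sample)
        return basal_area_m2(measured).mean() * len(true_d)

    est_sampling = [stand_estimate(50, 0.0) for _ in range(500)]   # sampling only
    est_both = [stand_estimate(50, 0.36) for _ in range(500)]      # plus SDG noise
    # The two spreads are nearly identical: sampling error dominates.
    print(np.std(est_sampling), np.std(est_both))
    ```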

  20. Improving laboratory data entry quality using Six Sigma.

    PubMed

    Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks

    2013-01-01

    Makerere University in Uganda provides clinical laboratory support to over 70 clients. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data, and determining data-entry error root causes. Finally, the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources, and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors, from 423 errors a month (i.e. a 4.34 sigma level) in the first month down to an average of 166 errors/month (i.e. a 4.65 sigma level) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at lower cost by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve clinical workflow processes and yield cost savings across the health care continuum.
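    The sigma levels quoted above follow from the defects-per-million-opportunities convention with the customary 1.5-sigma shift. A sketch; the monthly opportunity count is an assumed figure, not reported in the abstract, chosen so that the quoted levels are roughly reproduced:

    ```python
    from scipy.stats import norm

    def sigma_level(errors, opportunities):
        """Short-term sigma level from a defect count, using the
        conventional 1.5-sigma long-term shift."""
        dpmo = 1e6 * errors / opportunities
        return 1.5 + norm.ppf(1.0 - dpmo / 1e6)

    # 190,000 monthly data-entry opportunities is an assumption, not a
    # figure from the paper.
    print(sigma_level(423, 190_000))  # roughly 4.3 sigma
    print(sigma_level(166, 190_000))  # roughly 4.6 sigma
    ```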

  1. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits were aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data were used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e., the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3%, and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and, ultimately, patients.

  2. Quantifying errors in surface ozone predictions associated with clouds over the CONUS: a WRF-Chem modeling study using satellite cloud retrievals

    NASA Astrophysics Data System (ADS)

    Ryu, Young-Hee; Hodzic, Alma; Barre, Jerome; Descombes, Gael; Minnis, Patrick

    2018-05-01

    Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much error in O3 predictions can be directly attributed to error in cloud predictions. This study applies the Weather Research and Forecasting with Chemistry (WRF-Chem) model at 12 km horizontal resolution with the Morrison microphysics and Grell 3-D cumulus parameterization to quantify uncertainties in summertime surface O3 predictions associated with cloudiness over the contiguous United States (CONUS). All model simulations are driven by reanalysis of atmospheric data and reinitialized every 2 days. In sensitivity simulations, cloud fields used for photochemistry are corrected based on satellite cloud retrievals. The results show that WRF-Chem predicts about 55 % of clouds in the right locations and generally underpredicts cloud optical depths. These errors in cloud predictions can lead to up to 60 ppb of overestimation in hourly surface O3 concentrations on some days. The average difference in summertime surface O3 concentrations derived from the modeled clouds and satellite clouds ranges from 1 to 5 ppb for maximum daily 8 h average O3 (MDA8 O3) over the CONUS. This represents up to ˜ 40 % of the total MDA8 O3 bias under cloudy conditions in the tested model version. Surface O3 concentrations are sensitive to cloud errors mainly through the calculation of photolysis rates (for ˜ 80 %), and to a lesser extent to light-dependent BVOC emissions. The sensitivity of surface O3 concentrations to satellite-based cloud corrections is about 2 times larger in VOC-limited than NOx-limited regimes. Our results suggest that the benefits of accurate predictions of cloudiness would be significant in VOC-limited regions, which are typical of urban areas.
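    MDA8 O3, the metric used above, is the maximum of the day's running 8 h means. A simplified sketch that uses all 17 complete 8 h windows in a day and ignores the data-completeness rules of the regulatory definition:

    ```python
    import numpy as np

    def mda8(hourly_o3):
        """Maximum daily 8 h average ozone from 24 hourly values.
        Simplified: all 17 complete 8 h windows are used, and the
        regulatory data-completeness rules are ignored."""
        o3 = np.asarray(hourly_o3, dtype=float)
        windows = np.convolve(o3, np.ones(8) / 8.0, mode="valid")  # 17 means
        return windows.max()

    # Synthetic diurnal cycle in ppb, for illustration only.
    hourly = 40 + 20 * np.sin(np.linspace(0, np.pi, 24))
    print(f"MDA8 = {mda8(hourly):.1f} ppb")
    ```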

  3. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance on the order of the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
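    A generic numerical version of the idea, not the paper's closed-form PPCA: place two point charges, constrained to preserve the total charge (the monopole), so as to minimize the RMS mid-field potential error. All source charges, positions, and radii below are arbitrary illustration values:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Source distribution: a few point charges (arbitrary units).
    src_q = np.array([0.4, -0.3, -0.1])
    src_r = np.array([[0.0, 0.0, 0.5], [0.3, 0.0, -0.4], [-0.3, 0.2, 0.0]])

    def potential(q, r, pts):
        """Coulomb potential of point charges q at positions r, at pts."""
        d = np.linalg.norm(pts[:, None, :] - r[None, :, :], axis=2)
        return (q / d).sum(axis=1)

    # Mid-field evaluation shell, a few times the distribution extent.
    rng = np.random.default_rng(0)
    u = rng.normal(size=(400, 3))
    pts = 2.8 * u / np.linalg.norm(u, axis=1, keepdims=True)
    v_ref = potential(src_q, src_r, pts)

    def cost(x):
        # Two charges: q1 at r1 and (Q_total - q1) at r2, so the monopole
        # of the approximation matches the source exactly.
        q1, r1, r2 = x[0], x[1:4], x[4:7]
        q = np.array([q1, src_q.sum() - q1])
        r = np.vstack([r1, r2])
        return np.sqrt(np.mean((potential(q, r, pts) - v_ref) ** 2))

    res = minimize(cost, x0=np.array([0.2, 0, 0, 0.3, 0, 0, -0.3]),
                   method="Nelder-Mead")
    print("mid-field RMS error:", res.fun)
    ```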

  4. A Caveat Note on Tuning in the Development of Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Dommenget, Dietmar; Rezny, Michael

    2018-01-01

    State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of the development process. It is not clear whether such tuning actually improves models. The tuning process is (in general) neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear whether such methods would perform better. In this study, ensembles of perturbed physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations available to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that will hamper further model development. By contrast, flux corrections perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered as a pilot study to motivate further CGCM studies that address the issues of model tuning.

  5. Performance of Goddard Earth Observing System GCM Column Radiation Models under Heterogeneous Cloud Conditions

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.

    2003-01-01

    We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitudes of errors relative to ICA are even smaller for the LW CORAM, which applies a similar overlap treatment. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well, with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst-case scenarios, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought at estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.

  6. Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process

    PubMed Central

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo

    2014-01-01

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055

  7. The contribution of natural variability to GCM bias: Can we effectively bias-correct climate projections?

    NASA Astrophysics Data System (ADS)

    McAfee, S. A.; DeLaFrance, A.

    2017-12-01

    Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.

  8. Atmospheric fossil fuel CO2 traced by 14CO2 and air quality index pollutant observations in Beijing and Xiamen, China.

    PubMed

    Niu, Zhenchuan; Zhou, Weijian; Feng, Xue; Feng, Tian; Wu, Shugang; Cheng, Peng; Lu, Xuefeng; Du, Hua; Xiong, Xiaohu; Fu, Yunchong

    2018-06-01

    Radiocarbon (14C) is the most accurate tracer available for quantifying atmospheric CO2 derived from fossil fuel (CO2ff), but it is expensive and time-consuming to measure. Here, we used common hourly Air Quality Index (AQI) pollutants (AQI, PM2.5, PM10, and CO) to indirectly trace diurnal CO2ff variations during certain days at urban sites in Beijing and Xiamen, China, based on linear relationships between AQI pollutants and 14C-traced CO2ff (hereafter CO2ff-14C) for semimonthly samples obtained in 2014. We validated these indirectly traced CO2ff (CO2ff-in) concentrations against CO2ff-14C concentrations derived from simultaneous diurnal 14CO2 observations. Significant (p < 0.05), strong correlations were observed between each of the separate AQI pollutants and CO2ff-14C for the semimonthly samples. Diurnal variations in CO2ff traced by each of the AQI pollutants generally showed trends similar to those of CO2ff-14C, with high agreement at the sampling site in Beijing and relatively poor agreement at the sampling site in Xiamen. AQI pollutant tracers showed high normalized root-mean-square (NRMS) errors for the summer diurnal samples due to low CO2ff-14C concentrations. After the removal of these summer samples, the NRMS errors for AQI pollutant tracers were in the range of 31.6-64.2%. Among these indirect tracers, CO generally showed high agreement and low NRMS errors. Based on these linear relationships, monthly CO2ff averages at the sampling sites in Beijing and Xiamen were traced using CO concentration as a tracer. The monthly CO2ff averages at the Beijing site showed a shallow U-type variation. These results indicate that CO can be used to trace CO2ff variations in Chinese cities with CO2ff concentrations above 5 ppm.
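    The tracer step reduces to a linear calibration of 14C-derived CO2ff against a co-measured pollutant, followed by prediction from that pollutant alone. A sketch with invented calibration pairs, using CO as the tracer as favored above:

    ```python
    import numpy as np

    # Hypothetical semimonthly calibration pairs: CO enhancement (ppb) and
    # 14C-derived CO2ff (ppm). Not the paper's data.
    co = np.array([180.0, 420.0, 650.0, 300.0, 520.0, 240.0])
    co2ff = np.array([4.1, 10.2, 15.8, 7.0, 12.5, 5.3])

    slope, intercept = np.polyfit(co, co2ff, 1)   # linear tracer relation
    co2ff_in = slope * co + intercept             # indirectly traced CO2ff

    # Normalized RMS error of the indirect tracer against the 14C benchmark.
    nrms = np.sqrt(np.mean((co2ff_in - co2ff) ** 2)) / co2ff.mean()
    print(f"NRMS error: {100 * nrms:.1f} %")
    ```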

  9. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
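    The technique mentioned for ungaged sites on gaged, unregulated streams is typically a drainage-area-ratio transfer. A sketch; the exponent is a placeholder, since reports of this kind publish region-specific values:

    ```python
    def area_ratio_adjust(q_gaged, area_gaged_mi2, area_ungaged_mi2, exponent=0.8):
        """Transfer a T-year peak flow from a gage to a nearby ungaged site
        on the same stream: Q_u = Q_g * (A_u / A_g) ** b. The exponent b is
        a hypothetical placeholder, not a value from this report."""
        return q_gaged * (area_ungaged_mi2 / area_gaged_mi2) ** exponent

    # Illustrative numbers only.
    q_site = area_ratio_adjust(q_gaged=12_400.0, area_gaged_mi2=310.0,
                               area_ungaged_mi2=265.0)
    print(f"{q_site:.0f} ft^3/s")
    ```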

  10. An assessment of air pollutant exposure methods in Mexico City, Mexico.

    PubMed

    Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S

    2015-05-01

    Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
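    Of the four methods, IDW is the simplest to state compactly, and leave-one-out cross-validation is the comparison device used above. A minimal sketch of both; ordinary kriging, which additionally yields the prediction standard errors the authors value, is left to a geostatistics library such as the R gstat package mentioned above:

    ```python
    import numpy as np

    def idw(xy_obs, z_obs, xy_new, power=2.0):
        """Inverse-distance-weighted estimates at new locations."""
        d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
        d = np.maximum(d, 1e-9)          # guard against zero distance
        w = 1.0 / d ** power
        return (w * z_obs).sum(axis=1) / w.sum(axis=1)

    def loo_rmse(xy_obs, z_obs):
        """Leave-one-out cross-validation RMSE, as used to compare methods."""
        errs = []
        for i in range(len(z_obs)):
            keep = np.arange(len(z_obs)) != i
            pred = idw(xy_obs[keep], z_obs[keep], xy_obs[i:i + 1])
            errs.append(pred[0] - z_obs[i])
        return np.sqrt(np.mean(np.square(errs)))
    ```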

  11. Average capacity optimization in free-space optical communication system over atmospheric turbulence channels with pointing errors.

    PubMed

    Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui

    2010-10-01

    A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.

  12. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood-frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active-channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active-channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active-channel width, whereas in the East-Central Region the equations that use active-channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement error; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
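    The weighting described in the last two sentences is ordinary inverse-variance averaging of two independent estimates. A minimal sketch:

    ```python
    def weighted_estimate(q1, var1, q2, var2):
        """Combine two independent flood estimates with weights inversely
        proportional to their variances. The combined variance,
        var1*var2/(var1 + var2), is smaller than either input variance."""
        w1 = var2 / (var1 + var2)
        q = w1 * q1 + (1.0 - w1) * q2
        var = (var1 * var2) / (var1 + var2)
        return q, var

    # Illustrative values: two 100-yr estimates with variances of log Q.
    q, var = weighted_estimate(5200.0, 0.09, 4600.0, 0.16)
    print(q, var)
    ```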

  13. Two-dimensional radial laser scanning for circular marker detection and external mobile robot tracking.

    PubMed

    Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi

    2012-11-28

    This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner, but this area can be increased by combining the information from additional external laser scanners.
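    The core of the method is a least-squares circle fit to the scan points on the visible side of the cylinder. A sketch using the algebraic (Kasa) formulation; the paper's outlier-avoidance step is omitted:

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic least-squares circle fit (Kasa method): solve
        x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and radius."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x ** 2 + y ** 2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b, np.sqrt(c + a ** 2 + b ** 2)

    # Synthetic noisy arc, as a scanner would see one side of a cylinder
    # of radius 0.1 m centered at (1.5, 2.0) m.
    rng = np.random.default_rng(3)
    ang = np.linspace(0.8, 2.3, 40)
    x = 1.5 + 0.1 * np.cos(ang) + rng.normal(0, 0.004, 40)
    y = 2.0 + 0.1 * np.sin(ang) + rng.normal(0, 0.004, 40)
    print(fit_circle(x, y))   # center near (1.5, 2.0), radius near 0.1
    ```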

  14. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
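    As a minimal 1D analogue of the schemes discussed, the standard second-order stencil for the Helmholtz operator can be assembled and solved directly; the paper's weighted-average and Pade-type stencils refine this operator to fourth order. Grid size, wavenumber, and boundary data below are arbitrary:

    ```python
    import numpy as np

    # Second-order centered scheme for u'' + k^2 u = f on (0, 1), with
    # Dirichlet data u(0) = 1, u(1) = 0 (radiation conditions omitted).
    n, k = 200, 40.0
    h = 1.0 / (n + 1)

    main = np.full(n, -2.0 / h ** 2 + k ** 2)
    off = np.full(n - 1, 1.0 / h ** 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    f = np.zeros(n)
    f[0] -= 1.0 / h ** 2      # move the known boundary term to the RHS

    u = np.zeros(n + 2)
    u[1:-1] = np.linalg.solve(A, f)
    u[0] = 1.0                # prescribed boundary value
    ```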

  15. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages, based on occasional observation by satellite systems, and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
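    Resampling by shifts is straightforward to state in code: every possible phase of the satellite visit schedule yields one subsampled average, and the spread of those averages about the continuous-time average is the sampling error. A sketch on synthetic intermittent rain; the gamma model below is illustrative, not the radar data:

    ```python
    import numpy as np

    def sampling_error_by_shifts(rain, step):
        """RMS sampling error for a visit interval of `step` records.
        `rain` is a continuous-time record (here one value per 15 min);
        each shift s gives one subsampled average rain[s::step]."""
        full_mean = rain.mean()
        sub_means = np.array([rain[s::step].mean() for s in range(step)])
        return np.sqrt(np.mean((sub_means - full_mean) ** 2))

    rng = np.random.default_rng(7)
    rain = rng.gamma(0.08, 2.0, size=4 * 24 * 30)     # 30 days at 15 min
    print(sampling_error_by_shifts(rain, step=4 * 3))  # 3-hourly visits
    ```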

  16. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
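    Error rates of this kind come from straightforward Monte Carlo simulation. A sketch for a two-sample t-test with the effect expressed in standard deviations; the paper's exact simulation design may differ:

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)

    def error_rates(n, effect, trials=20_000, alpha=0.05):
        """Monte Carlo Type I and Type II error of a two-sample t-test
        for samples of size n and a shift of `effect` SDs."""
        t1 = t2 = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, n)
            if ttest_ind(a, rng.normal(0.0, 1.0, n)).pvalue < alpha:
                t1 += 1                               # false positive
            if ttest_ind(a, rng.normal(effect, 1.0, n)).pvalue >= alpha:
                t2 += 1                               # missed effect
        return t1 / trials, t2 / trials

    print(error_rates(9, effect=1.5))   # both errors shrink markedly vs n = 5
    ```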

  17. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
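    The first simulation finding, attenuation of the radon effect by sampling-based measurement error, is easy to reproduce in miniature. A sketch with invented county-level parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_counties, n_homes = 300, 10
    true_mean = rng.lognormal(0.0, 0.5, n_counties)    # county radon level
    # Outcome with a true slope of 2.0 (all units hypothetical).
    rate = 5.0 + 2.0 * true_mean + rng.normal(0, 1.0, n_counties)

    # County exposure estimated from a small sample of homes: classical
    # measurement error whose variance shrinks only as 1/n_homes.
    est_mean = true_mean + rng.normal(0.0, 1.2, n_counties) / np.sqrt(n_homes)

    print(np.polyfit(true_mean, rate, 1)[0])   # near the true slope, 2.0
    print(np.polyfit(est_mean, rate, 1)[0])    # attenuated toward zero
    ```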

  18. Pricing and hedging derivative securities with neural networks: Bayesian regularization, early stopping, and bagging.

    PubMed

    Gençay, R; Qi, M

    2001-01-01

    We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities, using daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the mean squared prediction error (MSPE) of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used, at least in cases where no appropriate hints are available.

  19. Preparing a neuropediatric upper limb exergame rehabilitation system for home-use: a feasibility study.

    PubMed

    Gerber, Corinna N; Kunz, Bettina; van Hedel, Hubertus J A

    2016-03-23

    Home-based, computer-enhanced therapy of hand and arm function can complement conventional interventions and increase the amount and intensity of training, without interfering too much with family routines. The objective of the present study was to investigate the feasibility and usability of the new portable version of the YouGrabber® system (YouRehab AG, Zurich, Switzerland) in the home setting. Fifteen families of children (7 girls, mean age: 11.3y) with neuromotor disorders and affected upper limbs participated. They received instructions and took the system home to train for 2 weeks. After returning it, they answered questions about usability, motivation, and their general opinion of the system (Visual Analogue Scale; 0 indicating worst score, 100 indicating best score; ≤30 not satisfied, 31-69 average, ≥70 satisfied). Furthermore, total pure playtime and number of training sessions were quantified. To assess the usability of the system, the number and type of support requests were logged. The usability of the system was considered average to satisfying (mean 60.1-93.1). The lowest score was given for the occurrence of technical errors. Parents had to motivate their children to start (mean 66.5) and continue (mean 68.5) with the training. However, in general, parents rated the therapeutic benefit as high (mean 73.1) and the whole system as very good (mean 87.4). Children played on average 7 times during the 2 weeks; total pure playtime was 185 ± 45 min. Especially at the beginning of the trial, the systems were very error-prone. Fortunately, we, or the company, solved most problems before the patients took the systems home. Nevertheless, 10 of 15 families contacted us at least once because of technical problems. Although the YouGrabber® is a promising and highly accepted training tool for home use, it is currently still error-prone, and the requested support exceeds what can be provided by clinical therapists. A technically more robust system, combined with additional attractive games, would likely result in higher patient motivation and better compliance. This would reduce the need for parents to motivate their children extrinsically and allow for clinical trials to investigate the effectiveness of the system. ClinicalTrials.gov NCT02368223.

  20. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
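    For a small univariate model the exact computation is direct: the prediction error variance-covariance (PEV) matrix of the random effects is the residual variance times the u-block of the inverted mixed-model-equation coefficient matrix. A sketch; the paper's contribution is precisely avoiding this inversion at scale:

    ```python
    import numpy as np

    def pev_from_mme(X, Z, Ainv, lam, sigma_e2):
        """PEV of the random effects from Henderson's mixed-model equations
        for y = Xb + Zu + e, with var(u) = A*sigma_u^2 and
        lam = sigma_e^2 / sigma_u^2. PEV = sigma_e^2 times the u-block of
        the inverse coefficient matrix."""
        p = X.shape[1]
        C = np.block([[X.T @ X, X.T @ Z],
                      [Z.T @ X, Z.T @ Z + lam * Ainv]])
        Cinv = np.linalg.inv(C)
        return sigma_e2 * Cinv[p:, p:]

    # Toy model: 2 contemporary groups (fixed) and 4 animals (random, A = I).
    X = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
    Z = np.eye(4)
    print(pev_from_mme(X, Z, np.eye(4), lam=2.0, sigma_e2=1.0))
    ```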

  1. Comparison of community and hospital pharmacists' attitudes and behaviors on medication error disclosure to the patient: A pilot study.

    PubMed

    Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C

    To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  2. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  4. Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects.

    PubMed

    Van Zandt, Noah R; Spencer, Mark F; Steinbock, Michael J; Anderson, Brian M; Hyde, Milo W; Fiorino, Steven T

    2018-05-20

    Polychromatic laser light can reduce speckle noise in many wavefront-sensing and imaging applications. To help quantify the achievable reduction in speckle noise, this study investigates the accuracy of three polychromatic wave-optics models under the specific conditions of an unresolved object. Because existing theory assumes a well-resolved object, laboratory experiments are used to evaluate model accuracy. The three models use Monte-Carlo averaging, depth slicing, and spectral slicing, respectively, to simulate the laser-object interaction. The experiments involve spoiling the temporal coherence of laser light via a fiber-based, electro-optic modulator. After the light scatters off of the rough object, speckle statistics are measured. The Monte-Carlo method is found to be highly inaccurate, while depth-slicing error peaks at 7.8% but is generally much lower in comparison. The spectral-slicing method is the most accurate, always producing results within the error bounds of the experiment.

  5. Wavelet regression model in forecasting crude oil price

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil price forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series is used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), Autoregressive Integrated Moving Average (ARIMA), and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
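    A sketch of the WMLR idea, assuming the PyWavelets package: split the series into additive wavelet sub-series, then regress the next value on the current sub-series values. Decomposing the full series at once, as done here for brevity, leaks future information near boundaries; a careful forecasting implementation decomposes only past data:

    ```python
    import numpy as np
    import pywt

    def wavelet_components(series, wavelet="db4", level=3):
        """Split a series into smooth + detail sub-series that sum back to
        the original, by reconstructing each DWT level separately."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        parts = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(kept, wavelet)[: len(series)])
        return np.array(parts)              # shape (level + 1, n)

    # Synthetic random-walk "price"; OLS stands in for the MLR step.
    price = np.cumsum(np.random.default_rng(5).normal(0, 1, 500)) + 60.0
    parts = wavelet_components(price)
    X = np.column_stack([p[:-1] for p in parts])   # today's sub-series values
    y = price[1:]                                  # tomorrow's price
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    ```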

  6. Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.

    PubMed

    Ting, H; Yunus, J; Mohd Nordin, M Z

    2005-01-01

    The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by speech-language pathologists. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole assessment process as well as to customize the test according to the specific speech error sounds of the client. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds, except for differentiating the /g/-/k/ pair. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.

  7. Global Surface Temperature Change and Uncertainties Since 1861

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The objective of this talk is to analyze the warming trend, and its uncertainties, in the global and hemispheric surface temperatures. Using a statistical optimal averaging scheme, the land surface air temperature and sea surface temperature observational data are used to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived from the minimization of the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 ± 0.16 °C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.

  8. Errors and improvements in the use of archived meteorological data for chemical transport modeling: an analysis using GEOS-Chem v11-01 driven by GEOS-5 meteorology

    NASA Astrophysics Data System (ADS)

    Yu, Karen; Keller, Christoph A.; Jacob, Daniel J.; Molod, Andrea M.; Eastham, Sebastian D.; Long, Michael S.

    2018-01-01

    Global simulations of atmospheric chemistry are commonly conducted with off-line chemical transport models (CTMs) driven by archived meteorological data from general circulation models (GCMs). The off-line approach has the advantages of simplicity and expediency, but it incurs errors due to temporal averaging in the meteorological archive and the inability to reproduce the GCM transport algorithms exactly. The CTM simulation is also often conducted at coarser grid resolution than the parent GCM. Here we investigate this cascade of CTM errors by using 222Rn-210Pb-7Be chemical tracer simulations off-line in the GEOS-Chem CTM at rectilinear 0.25° × 0.3125° (≈ 25 km) and 2° × 2.5° (≈ 200 km) resolutions and online in the parent GEOS-5 GCM at cubed-sphere c360 (≈ 25 km) and c48 (≈ 200 km) horizontal resolutions. The c360 GEOS-5 GCM meteorological archive, updated every 3 h and remapped to 0.25° × 0.3125°, is the standard operational product generated by the NASA Global Modeling and Assimilation Office (GMAO) and used as input by GEOS-Chem. We find that the GEOS-Chem 222Rn simulation at native 0.25° × 0.3125° resolution is affected by vertical transport errors of up to 20 % relative to the GEOS-5 c360 online simulation, in part due to loss of transient organized vertical motions in the GCM (resolved convection) that are temporally averaged out in the 3 h meteorological archive. There is also significant error caused by operational remapping of the meteorological archive from a cubed-sphere to a rectilinear grid. Decreasing the GEOS-Chem resolution from 0.25° × 0.3125° to 2° × 2.5° induces further weakening of vertical transport as transient vertical motions are averaged out spatially and temporally. The resulting 222Rn concentrations simulated by the coarse-resolution GEOS-Chem are overestimated by up to 40 % in surface air relative to the online c360 simulations and underestimated by up to 40 % in the upper troposphere, while the tropospheric lifetimes of 210Pb and 7Be against aerosol deposition are affected by 5-10 %. The lost vertical transport in the coarse-resolution GEOS-Chem simulation can be partly restored by recomputing the convective mass fluxes at the appropriate resolution to replace the archived convective mass fluxes and by correcting for bias in the spatial averaging of boundary layer mixing depths.

  9. Parameterisation of rainfall-runoff models for forecasting low and average flows, I: Conceptual modelling

    NASA Astrophysics Data System (ADS)

    Castiglioni, S.; Toth, E.

    2009-04-01

    In the calibration procedure of continuously-simulating models, the hydrologist has to choose which part of the observed hydrograph is most important to fit, either implicitly, through the visual agreement in manual calibration, or explicitly, through the choice of the objective function(s). By changing the objective function it is in fact possible to emphasise different kinds of errors, giving them more weight in the calibration phase. The objective functions used for calibrating hydrological models are generally of the quadratic type (mean squared error, correlation coefficient, coefficient of determination, etc.) and are therefore oversensitive to high and extreme error values, which typically correspond to high and extreme streamflow values. This is appropriate when, as in the majority of streamflow forecasting applications, the focus is on the ability to reproduce potentially dangerous flood events; on the contrary, if the aim of the modelling is the reproduction of low and average flows, as is the case in water resource management problems, this may result in a deterioration of the forecasting performance. This contribution presents the results of a series of automatic calibration experiments of a continuously-simulating rainfall-runoff model applied over several real-world case studies, where the objective function is chosen so as to highlight the fit of average and low flows. In this work a simple conceptual model will be used, of the lumped type, with a relatively low number of parameters to be calibrated. The experiments will be carried out for a set of case-study watersheds in Central Italy, covering an extremely wide range of geo-morphologic conditions and for which at least five years of contemporaneous daily series of streamflow, precipitation and evapotranspiration estimates are available. Different objective functions will be tested in calibration and the results will be compared, over validation data, against those obtained with traditional squared functions. A companion work presents the results, over the same case-study watersheds and observation periods, of a system-theoretic model, again calibrated for reproducing average and low streamflows.
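
    As a minimal sketch of how the objective function shifts emphasis, the quadratic error below can be computed either on raw flows (flood-weighted) or on log-transformed flows, which down-weights peaks and stresses low and average flows; the epsilon guard for near-zero flows is an illustrative choice, not the paper's formulation.

        import numpy as np

        def mse(obs, sim):
            """Standard quadratic objective: dominated by flood-peak errors."""
            return np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)

        def mse_log(obs, sim, eps=1e-6):
            """Quadratic objective on log flows: emphasises low and average flows."""
            return np.mean((np.log(np.asarray(obs) + eps)
                            - np.log(np.asarray(sim) + eps)) ** 2)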

  10. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
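
    A minimal numeric sketch of the geometric part of the correction: volume flow is mean velocity times cross-sectional area, so assuming a circle from a single measured diameter underestimates the flow of an elliptical vessel. The velocity and semi-axis values below are illustrative assumptions, not the paper's data pipeline.

        import numpy as np

        def volume_flow(mean_velocity, a, b=None):
            """Mean velocity [m/s] times cross-sectional area; a, b are semi-axes [m]."""
            b = a if b is None else b          # b omitted = circular assumption
            return mean_velocity * np.pi * a * b

        # Semi-axes consistent with a 10.2 mm major axis, 8.6% larger than the minor
        a, b = 5.1e-3, 4.7e-3
        q_circle = volume_flow(0.5, b)         # circle assumed from the minor diameter
        q_ellipse = volume_flow(0.5, a, b)     # elliptical cross-section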

  11. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

    Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
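
    For reference, a minimal sketch of the first technique evaluated above, the rational method: peak discharge Q = C i A, where Q is in ft3/s when the rainfall intensity i is in in/hr and the drainage area A is in acres (the unit conversion factor is about 1.008). The example inputs are assumptions, not values from the study.

        def rational_peak_discharge(c_runoff, intensity_in_per_hr, area_acres):
            """Rational method: Q [ft^3/s] = 1.008 * C * i [in/hr] * A [acres]."""
            return 1.008 * c_runoff * intensity_in_per_hr * area_acres

        # Example using the study's average calibrated urban coefficient of 0.39
        q_peak = rational_peak_discharge(0.39, intensity_in_per_hr=2.0,
                                         area_acres=150.0)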

  12. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end inhalation and the end exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper presents a sensitivity analysis of the relative Jacobian error with respect to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors of 0.53 mm on average may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system for all subjects are positively correlated (close to +1), i.e. regions with high sensitivity have more error in the Jacobian determinant on average.
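
    A minimal sketch of the quantity under analysis: the Jacobian determinant of the registration transform x + u(x), evaluated voxel-wise with finite differences. The displacement-field layout and unit grid spacing are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def jacobian_determinant(u, spacing=(1.0, 1.0, 1.0)):
            """u: displacement field, shape (3, nx, ny, nz); returns det J per voxel."""
            grads = [np.gradient(u[i], *spacing) for i in range(3)]   # du_i/dx_j
            J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)
            J = J + np.eye(3)[..., None, None, None]    # transform = identity + u
            return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))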

  13. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Strong turbulence-induced fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived, and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
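
    As a rough cross-check of such closed-form results, the average bit error probability can be estimated by Monte Carlo: draw negative exponential irradiances for each transmit-receive path, combine them with equal gain, and average a conditional Gaussian-noise error probability. The Q(sqrt(SNR) * I) conditional model is a common textbook assumption and not necessarily the Letter's exact system model.

        import numpy as np
        from scipy.stats import norm

        def avg_bep(snr_db, m_tx=2, n_rx=2, trials=200_000, seed=1):
            rng = np.random.default_rng(seed)
            snr = 10 ** (snr_db / 10)
            # Unit-mean negative exponential irradiance per path, equal gain combining
            I = rng.exponential(1.0, size=(trials, m_tx * n_rx)).mean(axis=1)
            return norm.sf(np.sqrt(snr) * I).mean()      # Q(x) = norm.sf(x)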

  14. SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defoor, D; Kabat, C; Papanikolaou, N

    Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors, and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, while step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47, and step & shoot plans an average of 97.04 ± 0.04. Of all beams, 5.3% contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.

  15. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
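
    A minimal sketch of the underlying estimator: the maximum rms level from a running time average of length T, where a short T inflates random error and a long T smears the launch transient; the analysis above quantifies that trade-off. The implementation details here are illustrative assumptions.

        import numpy as np

        def max_rms_level(x, fs, T):
            """Maximum rms of signal x (sampled at fs Hz) over a running T-second window."""
            n = max(1, int(T * fs))
            mean_square = np.convolve(x ** 2, np.ones(n) / n, mode="valid")
            return np.sqrt(mean_square.max())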

  16. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  17. The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors

    NASA Technical Reports Server (NTRS)

    Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan

    1993-01-01

    Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.

  18. Portable global positioning system receivers: static validity and environmental conditions.

    PubMed

    Duncan, Scott; Stewart, Tom I; Oliver, Melody; Mavoa, Suzanne; MacRae, Deborah; Badland, Hannah M; Duncan, Mitch J

    2013-02-01

    GPS receivers are becoming increasingly common as an objective measure of spatiotemporal movement in free-living populations; however, research into the effects of the surrounding physical environment on the accuracy of off-the-shelf GPS receivers is limited. The goal of the current study was to (1) determine the static validity of seven portable GPS receiver models under diverse environmental conditions and (2) compare the battery life and signal acquisition times among the models. Seven GPS models (three units of each) were placed on six geodetic sites subject to a variety of environmental conditions (e.g., open sky, high-rise buildings) on three separate occasions. The observed signal acquisition time and battery life of each unit were compared to advertised specifications. Data were collected and analyzed in June 2012. Substantial variation in positional error was observed among the seven GPS models, ranging from 12.1 ± 19.6 m to 58.8 ± 393.2 m when averaged across the three test periods and six geodetic sites. Further, mean error varied considerably among sites: the lowest error occurred at the site under open sky (7.3 ± 27.7 m), with the highest error at the site situated between high-rise buildings (59.2 ± 99.2 m). While observed signal acquisition times were generally longer than advertised, the differences between observed and advertised battery life were less pronounced. Results indicate that portable GPS receivers are able to accurately monitor static spatial location in unobstructed but not obstructed conditions. It also was observed that signal acquisition times were generally underestimated in advertised specifications. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  19. The Whole Warps the Sum of Its Parts.

    PubMed

    Corbett, Jennifer E

    2017-01-01

    The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.

  20. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
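
    A minimal sketch of the first of the three approaches, the delta method: the standard error of a function g of the estimates is sqrt(grad g' V grad g), with the gradient taken numerically at the estimates and V their covariance matrix. The ratio-of-coefficients example and all numbers are illustrative assumptions (the paper's code is for Stata and LIMDEP, not Python).

        import numpy as np

        def delta_method_se(g, theta_hat, V, h=1e-6):
            k = len(theta_hat)
            grad = np.zeros(k)
            for i in range(k):                  # central finite differences
                e = np.zeros(k)
                e[i] = h
                grad[i] = (g(theta_hat + e) - g(theta_hat - e)) / (2 * h)
            return np.sqrt(grad @ V @ grad)

        # Example: standard error of the ratio of two estimated coefficients
        se = delta_method_se(lambda b: b[0] / b[1],
                             np.array([1.2, 0.8]),
                             np.array([[0.04, 0.01], [0.01, 0.09]]))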

  1. Monthly mean simulation experiments with a coarse-mesh global atmospheric model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.

    1978-01-01

    Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.

  2. First clinical experience in carbon ion scanning beam therapy: retrospective analysis of patient positional accuracy.

    PubMed

    Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi

    2012-09-01

    Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.

  3. Effectiveness of the New Hampshire stream-gaging network in providing regional streamflow information

    USGS Publications Warehouse

    Olson, Scott A.

    2003-01-01

    The stream-gaging network in New Hampshire was analyzed for its effectiveness in providing regional information on peak-flood flow, mean-flow, and low-flow frequency. The data available for analysis were from stream-gaging stations in New Hampshire and selected stations in adjacent States. The principles of generalized-least-squares regression analysis were applied to develop regional regression equations that relate streamflow-frequency characteristics to watershed characteristics. Regression equations were developed for (1) the instantaneous peak flow with a 100-year recurrence interval, (2) the mean-annual flow, and (3) the 7-day, 10-year low flow. Active and discontinued stream-gaging stations with 10 or more years of flow data were used to develop the regression equations. Each stream-gaging station in the network was evaluated and ranked on the basis of how much the data from that station contributed to the cost-weighted sampling-error component of the regression equation. The potential effect of data from proposed and new stream-gaging stations on the sampling error also was evaluated. The stream-gaging network was evaluated for conditions in water year 2000 and for estimated conditions under various network strategies if an additional 5 years and 20 years of streamflow data were collected. The effectiveness of the stream-gaging network in providing regional streamflow information could be improved for all three flow characteristics with the collection of additional flow data, both temporally and spatially. With additional years of data collection, the greatest reduction in the average sampling error of the regional regression equations was found for the peak- and low-flow characteristics. In general, additional data collection at stream-gaging stations with unregulated flow, relatively short-term record (less than 20 years), and drainage areas smaller than 45 square miles contributed the largest cost-weighted reduction to the average sampling error of the regional estimating equations. The results of the network analyses can be used to prioritize the continued operation of active stations, the reactivation of discontinued stations, or the activation of new stations to maximize the regional information content provided by the stream-gaging network. Final decisions regarding altering the New Hampshire stream-gaging network would require the consideration of the many uses of the streamflow data serving local, State, and Federal interests.

  4. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  5. Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert; hide

    2017-01-01

    This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.
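
    A minimal sketch of one way to build such an optimally weighted ensemble: choose member weights minimizing the RMSE against a reference field, with a sum-to-one constraint handled by a Lagrange multiplier. The constrained least-squares formulation is my assumption about the construction, not the paper's exact procedure.

        import numpy as np

        def optimal_ensemble_weights(members, reference):
            """members: (k, n) stacked simulations; reference: (n,) observations."""
            k = members.shape[0]
            A = np.zeros((k + 1, k + 1))
            A[:k, :k] = members @ members.T     # normal equations
            A[:k, k] = 1.0                      # Lagrange multiplier column
            A[k, :k] = 1.0                      # sum(w) = 1 constraint row
            b = np.append(members @ reference, 1.0)
            return np.linalg.solve(A, b)[:k]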

  6. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    NASA Astrophysics Data System (ADS)

    Robichaud, A.; Ménard, R.

    2014-02-01

    Multi-year objective analyses (OA) on a high spatiotemporal resolution for the warm season period (1 May to 31 October) for ground-level ozone and for fine particulate matter (diameter less than 2.5 microns (PM2.5)) are presented. The OA used in this study combines model outputs from the Canadian air quality forecast suite with US and Canadian observations from various air quality surface monitoring networks. The analyses are based on an optimal interpolation (OI) with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The estimation of error statistics has been computed using a modified version of the Hollingsworth-Lönnberg (H-L) method. The error statistics are "tuned" using a χ2 (chi-square) diagnostic, a semi-empirical procedure that provides significantly better verification than without tuning. Successful cross-validation experiments were performed with an OA setup using 90% of data observations to build the objective analyses and with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface-derived or ground-based measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m-3 (micrograms per cubic meter) for ozone and PM2.5, respectively, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. This paper focuses on two applications: (1) presenting long-term averages of OA and analysis increments as a form of summer climatology; and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. The results show that high percentiles of ozone and PM2.5 were both following a general decreasing trend in North America, with the eastern part of the United States showing the most widespread decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in the mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). Conversely, the low percentiles are generally rising for ozone, which may be linked to the intercontinental transport of increased emissions from emerging countries. After removing the decadal trend, the inter-annual fluctuations of the high percentiles are largely explained by the temperature fluctuations for ozone and to a lesser extent by precipitation fluctuations for PM2.5. More interesting is the economic short-term change (as expressed by the variation of the US gross domestic product growth rate), which explains 37% of the total variance of inter-annual fluctuations of PM2.5 and 15% in the case of ozone.
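
    A minimal sketch of the two ingredients named above, under textbook assumptions: the optimal interpolation update adds a gain-weighted innovation to the model background, and the chi-square diagnostic checks that innovations are statistically consistent with the assumed covariances (averaging to the number of observations when the error statistics are well tuned).

        import numpy as np

        def oi_analysis(xb, y, H, B, R):
            """OI update: background xb, observations y, operator H, covariances B, R."""
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
            return xb + K @ (y - H @ xb)

        def chi2_diagnostic(xb, y, H, B, R):
            d = y - H @ xb                                   # innovation vector
            S = H @ B @ H.T + R
            return d @ np.linalg.solve(S, d)                 # ~ len(y) if well tuned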

  7. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  8. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  9. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  10. Beat-to-beat heart rate estimation fusing multimodal video and sensor data

    PubMed Central

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-01-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754

  11. Beat-to-beat heart rate estimation fusing multimodal video and sensor data.

    PubMed

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference.
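
    A minimal stand-in for the fusion step, assuming each channel supplies a beat-interval estimate and an error variance: the Bayesian combination of independent Gaussian estimates reduces to an inverse-variance weighted mean. The channel variances below are toy numbers, not the paper's learned reliabilities.

        import numpy as np

        def fuse_intervals(estimates, variances):
            """Inverse-variance weighted mean of per-channel beat intervals [s]."""
            w = 1.0 / np.asarray(variances, dtype=float)
            return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

        # Example: webcam color, head motion, and chair BCG channels (toy values)
        fused = fuse_intervals([0.82, 0.86, 0.80], [0.002, 0.004, 0.001])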

  12. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  13. Neural activity during affect labeling predicts expressive writing effects on well-being: GLM and SVM approaches.

    PubMed

    Memarian, Negar; Torre, Jared B; Haltom, Kate E; Stanton, Annette L; Lieberman, Matthew D

    2017-09-01

    Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity of specific frontal regions and the amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machine regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average prediction error [root mean square error (RMSE), in %] of 0.85%. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, RVLPFC and left amygdala predicted benefits due to expressive writing in satisfaction with life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. © The Author (2017). Published by Oxford University Press.
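
    A minimal sketch of the supervised-learning setup, assuming scikit-learn: support vector regression from neural features to an outcome score, evaluated by cross-validated RMSE. The synthetic feature matrix stands in for the ROI activations; nothing here reproduces the paper's data or tuning.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 8))           # e.g. 8 ROI activations, 50 subjects
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=50)

        pred = cross_val_predict(SVR(kernel="linear", C=1.0), X, y, cv=5)
        rmse = np.sqrt(np.mean((y - pred) ** 2))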

  14. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts

    PubMed Central

    Lee, Hyung Joo; Gent, Janneane F.; Leaderer, Brian P.; Koutrakis, Petros

    2011-01-01

    To protect public health from PM2.5 air pollution, it is critical to identify the source types of PM2.5 mass and chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF) was used to identify PM2.5 source types and quantify the source contributions to PM2.5 in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM2.5 mass, components and source contributions were investigated. PMF analysis identified five source types: regional pollution as traced by sulfur, motor vehicle, road dust, oil combustion and sea salt. The sulfur-related regional pollution and traffic source types were major contributors to PM2.5. Because ground-level PM2.5 monitoring sites are sparse, current epidemiological studies are susceptible to exposure measurement errors. The high correlations in concentrations and source contributions between different locations suggest limited spatial variability, resulting in smaller exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures in health effects studies, using regional average concentrations or contributions from several PM2.5 monitors is more reliable than using data from the nearest central monitor. PMID:21429560
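
    As an illustration of the factorization step, scikit-learn's nonnegative matrix factorization can stand in for PMF: both decompose the sample-by-species concentration matrix into nonnegative source contributions and source profiles, though true PMF additionally weights residuals by measurement uncertainty. All data below are synthetic.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(size=(365, 20)))   # days x chemical species (toy)
        model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)               # daily source contributions
        F = model.components_                    # source profiles (species signatures)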

  15. Determination of the carmine content based on spectrum fluorescence spectral and PSO-SVM

    NASA Astrophysics Data System (ADS)

    Wang, Shu-tao; Peng, Tao; Cheng, Qi; Wang, Gui-chuan; Kong, De-ming; Wang, Yu-tian

    2018-03-01

    Carmine is a food pigment widely used in various food and beverage additives. Excessive consumption of synthetic pigments can seriously harm the body, and foods generally contain a variety of coexisting colors. Simulating the coexistence of several food pigments, we adopted fluorescence spectroscopy together with the PSO-SVM algorithm to establish a method for determining the carmine content in a mixed solution. Analysis of the PSO-SVM predictions gave an average carmine recovery rate of 100.84%, a root mean square error of prediction (RMSEP) of 1.03e-04, and a correlation coefficient of 0.999 between the model output and the true values. Compared with the predictions of a back-propagation (BP) network, the correlation coefficient of PSO-SVM was 2.7% higher, the average recovery rate 0.6% higher, and the root mean square error nearly one order of magnitude lower. According to these results, the combination of fluorescence spectroscopy and PSO-SVM can effectively avoid the interference caused by coexisting pigments, determining the carmine content of a mixed solution more accurately than BP.
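
    A minimal sketch of the PSO-SVM idea, assuming scikit-learn and a hand-rolled swarm: particles search (log10 C, log10 gamma) for an RBF support vector regressor, scored by cross-validated mean squared error. Swarm size, bounds, and inertia/acceleration coefficients are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def pso_svm(X, y, n_particles=10, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array([-2.0, -4.0]), np.array([4.0, 1.0])  # log10 bounds
            pos = rng.uniform(lo, hi, size=(n_particles, 2))
            vel = np.zeros_like(pos)

            def cv_mse(p):
                svr = SVR(kernel="rbf", C=10 ** p[0], gamma=10 ** p[1])
                return -cross_val_score(svr, X, y, cv=3,
                                        scoring="neg_mean_squared_error").mean()

            pbest = pos.copy()
            pbest_val = np.array([cv_mse(p) for p in pos])
            gbest = pbest[pbest_val.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, 1))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([cv_mse(p) for p in pos])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[pbest_val.argmin()]
            return 10 ** gbest                     # (C, gamma) minimizing CV error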

  16. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
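
    A minimal sketch of the correction step, assuming NumPy's Chebyshev module: fit the center-to-limb curve of normalized whole-AR flux against radial distance, then divide any measured flux by the fitted curve (equivalently, multiply by its reciprocal as the correction factor). The synthetic curve and noise below are stand-ins for the 30,845 measurements.

        import numpy as np
        from numpy.polynomial import chebyshev as cheb

        # r: radial distance from disk center (fraction of solar radius);
        # f: whole-AR flux normalized to the central-meridian value (toy data)
        rng = np.random.default_rng(0)
        r = np.linspace(0.0, 0.87, 200)            # out to ~60 deg from disk center
        f = 1.0 - 0.25 * r ** 2 + rng.normal(0.0, 0.02, r.size)

        coeffs = cheb.chebfit(r, f, deg=4)         # fitted center-to-limb curve

        def corrected_flux(flux_measured, r_obs):
            return flux_measured / cheb.chebval(r_obs, coeffs)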

  17. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)

  18. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    PubMed Central

    Wells, Jered R.; Dobbins, James T.

    2012-01-01

    Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm−1) and approximate circular symmetry at frequencies below 4 mm−1. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm−1. Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm−1) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation. PMID:23039654

  19. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Jered R.; Dobbins, James T. III; Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705

    2012-10-15

    Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm^-1) and approximate circular symmetry at frequencies below 4 mm^-1. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm^-1. Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm^-1) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation.

  20. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects.

    PubMed

    Wells, Jered R; Dobbins, James T

    2012-10-01

    The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ∕i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm(-1)) and approximate circular symmetry at frequencies below 4 mm(-1). While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm(-1). Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm(-1)) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation.
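
    A minimal sketch of the 1D building block the three records above rely on: estimating a presampled MTF from a slightly slanted edge by re-binning pixels into an oversampled edge spread function, differentiating to a line spread function, and taking its Fourier magnitude. The angle handling, window, and oversampling factor are simplified assumptions, not the authors' validated procedure.

        import numpy as np

        def edge_mtf(img, edge_angle_deg, pixel_pitch, oversample=10):
            """Presampled MTF estimate from a slanted-edge image (toy version)."""
            rows, cols = np.indices(img.shape)
            theta = np.deg2rad(edge_angle_deg)
            # Signed pixel distance from an edge line through the image center
            d = ((cols - img.shape[1] / 2) * np.cos(theta)
                 - (rows - img.shape[0] / 2) * np.sin(theta))
            bins = np.round(d * oversample).astype(int).ravel()
            bins -= bins.min()
            counts = np.maximum(np.bincount(bins), 1)
            esf = np.bincount(bins, img.ravel()) / counts     # oversampled ESF
            lsf = np.gradient(esf) * np.hanning(esf.size)     # LSF, noise-tapered
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch / oversample)
            return freqs, mtf / mtf[0]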

  1. Multi-factorial analysis of class prediction error: estimating optimal number of biomarkers for various classification rules.

    PubMed

    Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter

    2010-12-01

    Machine learning and statistical model-based classifiers have increasingly been used with more complex and high-dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, underinvestigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray-based data characteristics on the predictive performance of various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis, and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change, and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package, optBiomarker, implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
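
    A minimal sketch of the kind of simulation such a database is built from, assuming a scikit-learn Random Forest and synthetic data with a fold-change shift on the informative markers (everything here is hypothetical, not the optBiomarker package itself): cross-validated error is recorded as a function of the number of biomarkers used.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, p, informative = 60, 100, 20
        X = rng.normal(size=(n, p))
        y = rng.integers(0, 2, size=n)
        X[:, :informative] += 0.8 * y[:, None]   # fold-change on informative markers

        for k in (5, 10, 20, 50, 100):           # number of top-ranked biomarkers
            acc = cross_val_score(
                RandomForestClassifier(n_estimators=200, random_state=0),
                X[:, :k], y, cv=5).mean()
            print(f"{k:3d} biomarkers: estimated generalization error = {1 - acc:.3f}")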

  2. A new approach for turbulent simulations in complex geometries

    NASA Astrophysics Data System (ADS)

    Israel, Daniel M.

    Historically turbulence modeling has been sharply divided into Reynolds-averaged Navier-Stokes (RANS), in which all the turbulent scales of motion are modeled, and large-eddy simulation (LES), in which only a portion of the turbulent spectrum is modeled. In recent years there have been numerous attempts to couple these two approaches, either by patching RANS and LES calculations together (zonal methods) or by blending the two sets of equations. In order to create a proper bridging model, that is, a single set of equations which captures both RANS- and LES-like behavior, it is necessary to place both RANS and LES in a more general framework. The goal of the current work is threefold: to provide such a framework, to demonstrate how the Flow Simulation Methodology (FSM) fits into this framework, and to evaluate the strengths and weaknesses of the current version of the FSM. To do this, first a set of filtered Navier-Stokes (FNS) equations is introduced in terms of an arbitrary generalized filter. Additional exact equations are given for the second-order moments and the generalized subfilter dissipation rate tensor. This is followed by a discussion of the role of implicit and explicit filters in turbulence modeling. The FSM is then described with particular attention to its role as a bridging model. In order to evaluate the method, a specific implementation of the FSM approach is proposed. Simulations are presented using this model for the case of a separating flow over a "hump" with and without flow control. Careful attention is paid to error estimation and, in particular, to how using flow statistics and time series affects the error analysis. Both mean flow and Reynolds stress profiles are presented, as well as the phase-averaged turbulent structures and wall pressure spectra. Using the phase-averaged data it is possible to examine how the FSM partitions the energy between the coherent resolved-scale motions, the random resolved-scale fluctuations, and the subfilter quantities. The method proves to be qualitatively successful at reproducing large turbulent structures. However, like other hybrid methods, it has difficulty in the region where the model behavior transitions from RANS to LES. Even so, the phase-averaged structures reproduce the experiments quite well, and the forcing does significantly reduce the length of the separated region; nevertheless, the predicted recirculation length is significantly too large for all the cases. Overall the current results demonstrate the promise of bridging models in general and the FSM in particular. However, current bridging techniques are still in their infancy. There is still important progress to be made, and it is hoped that this work points out the more important avenues for exploration.

  3. The influence of non-rigid anatomy and patient positioning on endoscopy-CT image registration in the head and neck.

    PubMed

    Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E

    2017-08-01

    To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.
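
    A minimal sketch of a projective measurement with a calibrated pinhole model, under assumed intrinsics and pose (all values hypothetical): the same anatomical point from the planning-CT mesh and from a deformed daily-CT mesh is projected, and a 3D distance error of the kind reported above is computed.

        import numpy as np

        K = np.array([[500.0, 0.0, 320.0],   # assumed intrinsics (fx, fy, cx, cy)
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def project(K, R, t, X):
            """Project world point X (3,) to pixel coordinates with pose (R, t)."""
            x_cam = R @ X + t
            u = K @ x_cam
            return u[:2] / u[2]

        R, t = np.eye(3), np.zeros(3)
        X_planning = np.array([1.0, -0.5, 6.0])           # mesh point, planning CT (cm)
        X_daily = X_planning + np.array([0.2, 0.1, 0.4])  # same point, deformed anatomy

        err_3d = np.linalg.norm(X_daily - X_planning)     # 3D distance error (cm)
        err_px = np.linalg.norm(project(K, R, t, X_daily) - project(K, R, t, X_planning))
        print(f"3D error: {err_3d:.2f} cm, image-plane error: {err_px:.1f} px")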

  4. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  5. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.
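
    A minimal sketch of the coverage-and-bias bookkeeping described above, using a simple normal-mean t interval instead of an ordinal CFA model (the model and numbers are stand-ins): simulate many small samples, then record how often the interval covers the truth and on which side the misses fall.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        mu, n, reps = 0.5, 20, 5000
        cover = above = below = 0
        for _ in range(reps):
            x = rng.normal(mu, 1.0, size=n)
            lo, hi = stats.t.interval(0.95, df=n - 1, loc=x.mean(),
                                      scale=x.std(ddof=1) / np.sqrt(n))
            if lo <= mu <= hi:
                cover += 1
            elif lo > mu:
                above += 1   # interval entirely above the truth (positive bias)
            else:
                below += 1   # interval entirely below the truth (negative bias)
        print(cover / reps, above / reps, below / reps)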

  6. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  7. Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia

    PubMed Central

    Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.

    2009-01-01

    Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107

  8. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

    Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and an estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955

  9. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    NASA Astrophysics Data System (ADS)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (~5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
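
    As a sketch of the error-propagation step, assuming a Manning-type wide-channel relation Q = (1/n) W d^(5/3) S^(1/2) (the abstract only says "slope-dependent discharge equations"; this particular form is an illustrative assumption), first-order propagation of independent relative errors in width, depth, and slope gives the relative discharge error:

        import numpy as np

        def rel_discharge_error(rel_w, rel_d, rel_s):
            """Combine independent relative errors in width, depth, and slope."""
            # dQ/Q = sqrt((dW/W)^2 + (5/3 * dd/d)^2 + (1/2 * dS/S)^2)
            return np.sqrt(rel_w**2 + (5.0 / 3.0 * rel_d)**2 + (0.5 * rel_s)**2)

        print(rel_discharge_error(0.05, 0.03, 0.10))  # 5% width, 3% depth, 10% slope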

  10. Refractive errors and strabismus in Down's syndrome in Korea.

    PubMed

    Han, Dae Heon; Kim, Kyun Hyung; Paik, Hae Jung

    2012-12-01

    The aims of this study were to examine the distribution of refractive errors and clinical characteristics of strabismus in Korean patients with Down's syndrome. A total of 41 Korean patients with Down's syndrome were screened for strabismus and refractive errors in 2009. A total of 41 patients with an average age of 11.9 years (range, 2 to 36 years) were screened. Eighteen patients (43.9%) had strabismus. Ten (23.4%) of 18 patients exhibited esotropia and the others had intermittent exotropia. The most frequently detected type of esotropia was acquired non-accommodative esotropia, and that of exotropia was the basic type. Fifteen patients (36.6%) had hypermetropia and 20 (48.8%) had myopia. The patients with esotropia had refractive errors of +4.89 diopters (D, ±3.73) and the patients with exotropia had refractive errors of -0.31 D (±1.78). Six of ten patients with esotropia had an accommodation weakness. Twenty-one patients (63.4%) had astigmatism. Eleven (28.6%) of 21 patients had anisometropia and six (14.6%) of those had clinically significant anisometropia. In Korean patients with Down's syndrome, esotropia was more common than exotropia and hypermetropia was more common than myopia. In particular, Down's syndrome patients with esotropia generally exhibit clinically significant hyperopic errors (>+3.00 D) and evidence of under-accommodation. Thus, hypermetropia and accommodation weakness could be possible factors in esotropia when it occurs in Down's syndrome patients. Based on the results of this study, eye examinations of Down's syndrome patients should routinely include a measure of accommodation at near distances, and bifocals should be considered for those with evidence of under-accommodation.

  11. Errors associated with IOLMaster biometry as a function of internal ocular dimensions.

    PubMed

    Faria-Ribeiro, Miguel; Lopes-Ferreira, Daniela; López-Gil, Norberto; Jorge, Jorge; González-Méijome, José Manuel

    2014-01-01

    To evaluate the error in the estimation of axial length (AL) with the IOLMaster partial coherence interferometry (PCI) biometer and obtain a correction factor that varies as a function of AL and crystalline lens thickness (LT). Optical simulations were produced for theoretical eyes using Zemax-EE software. Thirty-three combinations including eleven different AL (from 20mm to 30mm in 1mm steps) and three different LT (3.6mm, 4.2mm and 4.8mm) were used. Errors were obtained comparing the AL measured for a constant equivalent refractive index of 1.3549 and for the actual combinations of indices and intra-ocular dimensions of LT and AL in each model eye. In the range from 20mm to 30mm AL and 3.6-4.8mm LT, the instrument measurements yielded an error between -0.043mm and +0.089mm. Regression analyses for the three LT conditions were combined in order to derive a correction factor as a function of the instrument-measured AL for each combination of AL and LT in the theoretical eye. The assumption of a single "average" refractive index in the estimation of AL by the IOLMaster PCI biometer only induces very small errors in a wide range of combinations of ocular dimensions. Even so, the accurate estimation of those errors may help to improve accuracy of intra-ocular lens calculations through exact ray tracing, particularly in longer eyes and eyes with thicker or thinner crystalline lenses. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
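
    A minimal sketch of deriving a correction factor of this kind, assuming made-up error-versus-AL curves at the three LT values and a quadratic regression (the paper's actual regression form is not reproduced here):

        import numpy as np

        al = np.linspace(20.0, 30.0, 11)          # instrument-measured AL (mm)
        rng = np.random.default_rng(0)
        for lt in (3.6, 4.2, 4.8):
            # Made-up AL error curve (mm) for this lens thickness, plus noise
            err = (-0.04 + 0.012 * (al - 20)
                   + 0.002 * lt * (al - 25) + rng.normal(0, 0.002, al.size))
            coeffs = np.polyfit(al, err, 2)       # quadratic correction factor
            corrected = al - np.polyval(coeffs, al)   # apply to measured AL
            print(lt, np.round(coeffs, 5))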

  12. Individual Radiological Protection Monitoring of Utrok Atoll Residents Based on Whole Body Counting of Cesium-137 (137Cs) and Plutonium Bioassay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, T; Kehl, S; Brown, T

    2007-06-08

    This report contains individual radiological protection surveillance data developed during 2006 for adult members of a select group of families living on Utrok Atoll. These Group I volunteers all underwent a whole-body count to determine levels of internally deposited cesium-137 ({sup 137}Cs) and supplied a bioassay sample for analysis of plutonium isotopes. Measurement data were obtained and the results compared with an equivalent set of measurement data for {sup 137}Cs and plutonium isotopes from a second group of adult volunteers (Group II) who were long-term residents of Utrok Atoll. For the purposes of this comparison, Group II volunteers were considered representative of the general population on Utrok Atoll. The general aim of the study was to determine residual systemic burdens of fallout radionuclides in each volunteer group, develop data in response to addressing some specific concerns about the preferential uptake and potential health consequences of residual fallout radionuclides in Group I volunteers, and generally provide some perspective on the significance of radiation doses delivered to volunteers (and the general Utrok Atoll resident population) in terms of radiological protection standards and health risks. Based on dose estimates from measurements of internally deposited {sup 137}Cs and plutonium isotopes, the data and information developed in this report clearly show that neither volunteer group has acquired levels of internally deposited fallout radionuclides specific to nuclear weapons testing in the Marshall Islands that are likely to have any consequence on human health. Moreover, the dose estimates are well below radiological protection standards as prescribed by U.S. regulators and international agencies, and are very small when compared to doses from natural sources of radiation in the Marshall Islands and the threshold where radiation health effects could be either medically diagnosed in an individual or epidemiologically discerned in a group of people. In general, the results from the whole-body counting measurements of {sup 137}Cs are consistent with our knowledge that a key pathway for exposure to residual fallout contamination on Utrok Atoll is low-level chronic uptake of {sup 137}Cs from the consumption of locally grown produce (Robison et al., 1999). The error-weighted, average body burden of {sup 137}Cs measured in Group I and Group II volunteers was 0.31 kBq and 0.62 kBq, respectively. The associated average, annual committed effective dose equivalent (CEDE) delivered to Group I and Group II volunteers from {sup 137}Cs during the year of measurement was 2.1 and 4.0 mrem. For comparative purposes, the annual dose limit for members of the public as recommended by the National Council on Radiation Protection and Measurements (NCRP) and the International Commission on Radiological Protection (ICRP) is 100 mrem. Consequently, specific concerns about elevated levels of {sup 137}Cs uptake and higher risks from radiation exposure to Group I volunteers would be considered unfounded. Moreover, the urinary excretion of plutonium-239 ({sup 239}Pu) from Group I and Group II volunteers is statistically indistinguishable. In this case, the error-weighted, average urinary excretion of {sup 239}Pu from Group I volunteers of 0.10 {mu}Bq per 24-h void with a range between -0.01 and 0.23 {mu}Bq per 24-h void compares with an error-weighted average from Group II volunteers of 0.11 {mu}Bq per 24-h void with a range between -0.20 and 0.47 {mu}Bq per 24-h void.
The range in urinary excretion of {sup 239}Pu from Utrok Atoll residents is very similar to that observed for other population groups in the Marshall Islands (Bogen et al., 2006; Hamilton et al., 2006a; 2006b; 2006c; 2007a; 2007b; 2007c) and is generally considered representative of worldwide background.
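
    The error-weighted averages quoted above are, in the usual sense, inverse-variance weighted means. A minimal sketch with made-up numbers (not the report's data):

        import numpy as np

        x = np.array([0.29, 0.33, 0.31])      # individual burdens (kBq), hypothetical
        sigma = np.array([0.05, 0.04, 0.06])  # 1-sigma measurement uncertainties

        w = 1.0 / sigma**2
        mean = np.sum(w * x) / np.sum(w)      # error-weighted average
        stderr = 1.0 / np.sqrt(np.sum(w))     # uncertainty of the weighted average
        print(mean, stderr)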

  13. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
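
    For context, a minimal statement of the classical weight-enumerator union bound that Poltyrev-type bounds tighten, for complete ML decoding of a rate-R code with BPSK on the AWGN channel (background only, not the paper's new bound on incomplete decoders):

        P_e \;\le\; \sum_{w=1}^{n} A_w \, Q\!\left(\sqrt{2wR\,E_b/N_0}\right),
        \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-u^2/2}\, du

    Here A_w is the number of codewords of weight w; the paper's tighter bounds replace this simple union over pairwise error events.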

  14. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications on exposure error.
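
    A minimal sketch of the three temporal aggregations compared above, computed from one synthetic hourly ozone series (the 8-h window here is a simplified trailing version of the regulatory DM8A definition):

        import numpy as np
        import pandas as pd

        idx = pd.date_range("2010-07-01", periods=31 * 24, freq="h")
        rng = np.random.default_rng(0)
        hourly = pd.Series(40 + 15 * np.sin(2 * np.pi * idx.hour / 24)
                           + rng.normal(0, 5, len(idx)), index=idx)  # ozone, ppb

        dm8a = hourly.rolling(window=8, min_periods=6).mean().resample("D").max()
        d24a = hourly.resample("D").mean()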

  15. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  16. WE-D-18A-01: Evaluation of Three Commercial Metal Artifact Reduction Methods for CT Simulations in Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Kerns, J; Nute, J

    Purpose: To evaluate three commercial metal artifact reduction methods (MAR) in the context of radiation therapy treatment planning. Methods: Three MAR strategies were evaluated: Philips O-MAR, monochromatic imaging using Gemstone Spectral Imaging (GSI) dual energy CT, and monochromatic imaging with metal artifact reduction software (GSI-MARs). The Gammex RMI 467 tissue characterization phantom with several metal rods and two anthropomorphic phantoms (pelvic phantom with hip prosthesis and head phantom with dental fillings) were scanned with and without (baseline) metals. Each MAR method was evaluated based on CT number accuracy, metal size accuracy, and reduction in the severity of streak artifacts. CT number difference maps between the baseline and metal scan images were calculated, and the severity of streak artifacts was quantified using the percentage of pixels with >40 HU error ("bad pixels"). Results: Philips O-MAR generally reduced HU errors in the RMI phantom. However, increased errors and induced artifacts were observed for lung materials. GSI monochromatic 70keV images generally showed similar HU errors as 120kVp imaging, while 140keV images reduced errors. GSI-MARs systematically reduced errors compared to GSI monochromatic imaging. All imaging techniques preserved the diameter of a stainless steel rod to within ±1.6mm (2 pixels). For the hip prosthesis, O-MAR reduced the average % bad pixels from 47% to 32%. For GSI 140keV imaging, the percent of bad pixels was reduced from 37% to 29% compared to 120kVp imaging, while GSI-MARs further reduced it to 12%. For the head phantom, none of the MAR methods were particularly successful. Conclusion: The three MAR methods all improve CT images for treatment planning to some degree, but none of them are globally effective for all conditions. The MAR methods were successful for large metal implants in a homogeneous environment (hip prosthesis) but were not successful for the more complicated case of dental artifacts.
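
    A minimal sketch of the streak-severity metric described above: the percentage of pixels whose CT number differs from the registered metal-free baseline by more than 40 HU. The arrays are random placeholders for real image pairs.

        import numpy as np

        rng = np.random.default_rng(0)
        baseline = rng.normal(0, 10, size=(512, 512))           # metal-free scan (HU)
        metal = baseline + rng.normal(0, 60, size=(512, 512))   # scan with streaks (HU)

        diff = metal - baseline                 # CT number difference map
        pct_bad = 100.0 * np.mean(np.abs(diff) > 40)
        print(f"bad pixels: {pct_bad:.1f}%")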

  17. Jitter compensation circuit

    DOEpatents

    Sullivan, James S.; Ball, Don G.

    1997-01-01

    The instantaneous V.sub.co signal on a charging capacitor is sampled and the charge voltage on capacitor C.sub.o is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V.sub.co signal is split between a gain stage (G=0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V.sub.co signal is applied to the negative input of a differential amplifier gain stage (G=10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V.sub.co signal from the instantaneous value of sampled V.sub.co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V.sub.co values squared divided by the total volt-second product of the magnetic compression circuit.

  18. Jitter compensation circuit

    DOEpatents

    Sullivan, J.S.; Ball, D.G.

    1997-09-09

    The instantaneous V{sub co} signal on a charging capacitor is sampled and the charge voltage on capacitor C{sub o} is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V{sub co} signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V{sub co} signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V{sub co} signal from the instantaneous value of sampled V{sub co} signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V{sub co} values squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.
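
    Both records describe the same error-signal arithmetic; as a plain numeric sketch of that computation (values arbitrary, not from the patent):

        # Error-signal arithmetic from the description above; values arbitrary.
        v_sampled = 1.012   # instantaneous Vco captured just before discharge
        v_avg = 1.000       # long-time-constant average of Vco

        error = 10.0 * (v_sampled - 0.975 * v_avg)   # differential amplifier, G = 10
        print(error)  # then compared to a ramp ~ v_avg**2 / total volt-second product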

  19. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
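
    A minimal sketch of the short-lag noise bias and its correction, simplified to ordinary Brownian motion (exponent = 1) in 2D rather than the paper's subdiffusive case: with uncorrelated localization error sigma, the measured time-averaged MSD is approximately 4*D*t + 4*sigma**2, so subtracting the constant offset before the log-log fit removes the downward bias in the estimated exponent.

        import numpy as np

        rng = np.random.default_rng(0)
        D, sigma, dt, nsteps = 0.05, 0.15, 1.0, 20000

        steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(nsteps, 2))
        track = np.cumsum(steps, axis=0)              # true positions (exponent = 1)
        noisy = track + rng.normal(0.0, sigma, size=track.shape)

        lags = np.arange(1, 30)
        tamsd = np.array([np.mean(np.sum((noisy[l:] - noisy[:-l]) ** 2, axis=1))
                          for l in lags])             # time-averaged MSD per lag

        raw_alpha, _ = np.polyfit(np.log(lags * dt), np.log(tamsd), 1)
        corr_alpha, _ = np.polyfit(np.log(lags * dt),
                                   np.log(tamsd - 4 * sigma ** 2), 1)
        print(f"fitted exponent raw: {raw_alpha:.2f}, corrected: {corr_alpha:.2f}")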

  20. Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22

    DTIC Science & Technology

    The Meteorological Measuring Set AN/TMQ-22 was compared in the field with the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error, it was concluded that the AN/TMQ-22 yielded a more accurate dew point reading. The average relative humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 had a ±2% error. Even with cautious measurement, the sling yielded a +4% error.

  1. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method.

  2. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    PubMed

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important because these errors, besides being costly, threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical errors reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data were collected with the revised Goldstone (2001) questionnaire, and SPSS 11.5 software was used for data analysis. Descriptive statistics (means, standard deviations, and relative frequency distributions) were calculated and the results were presented as tables and charts; the chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was reported by employees with three to four years of work experience, while the lowest was reported by those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names among different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician orders, drug name similarity with other drugs, nurse fatigue, and damaged drug labels or packaging, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for underreporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.

  3. Online adaptation of a c-VEP Brain-Computer Interface (BCI) based on error-related potentials and unsupervised learning.

    PubMed

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  4. Interpreting the Latitudinal Structure of Differences Between Modeled and Observed Temperature Trends (Invited)

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Mears, C. A.; Gleckler, P. J.; Solomon, S.; Wigley, T.; Arblaster, J.; Cai, W.; Gillett, N. P.; Ivanova, D. P.; Karl, T. R.; Lanzante, J.; Meehl, G. A.; Stott, P.; Taylor, K. E.; Thorne, P.; Wehner, M. F.; Zou, C.

    2010-12-01

    We perform the most comprehensive comparison to date of simulated and observed temperature trends. Comparisons are made for different latitude bands, timescales, and temperature variables, using information from a multi-model archive and a variety of observational datasets. Our focus is on temperature changes in the lower troposphere (TLT), the mid- to upper troposphere (TMT), and at the sea surface (SST). For SST, TLT, and TMT, trend comparisons over the satellite era (1979 to 2009) always yield closest agreement in mid-latitudes of the Northern Hemisphere. There are pronounced discrepancies in the tropics and in the Southern Hemisphere: in both regions, the multi-model average warming is consistently larger than observed. At high latitudes in the Northern Hemisphere, the observed tropospheric warming exceeds multi-model average trends. The similarity in the latitudinal structure of this discrepancy pattern across different temperature variables and observational data sets suggests that these trend differences are real, and are not due to residual inhomogeneities in the observations. The interpretation of these results is hampered by the fact that the CMIP-3 multi-model archive analyzed here convolves errors in key external forcings with errors in the model response to forcing. Under a "forcing error" interpretation, model-average temperature trends in the Southern Hemisphere extratropics are biased warm because many models neglect (and/or inaccurately specify) changes in stratospheric ozone and the indirect effects of aerosols. An alternative "response error" explanation for the model trend errors is that there are fundamental problems with model clouds and ocean heat uptake over the Southern Ocean. When SST changes are compared over the longer period 1950 to 2009, there is close agreement between simulated and observed trends poleward of 50°S. This result is difficult to reconcile with the hypothesis that the trend discrepancies over 1979 to 2009 are primarily attributable to response errors. Our results suggest that biases in multi-model average temperature trends over the satellite era can be plausibly linked to forcing errors. Better partitioning of the forcing and response components of model errors will require a systematic program of numerical experimentation, with a focus on exploring the climate response to uncertainties in key historical forcings.

  5. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region's productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
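
    A minimal sketch of the normalize-then-bin test described above, on synthetic data: each HARP's measure is normalized by its central-meridian value, values are binned into 0.05 Rs annuli of disk-center distance, and each annulus mean's deviation from unity estimates the deprojection error.

        import numpy as np

        rng = np.random.default_rng(0)
        r = rng.uniform(0.0, 0.95, size=5000)     # HARP distance from disk center (Rs)
        # Normalized measure: unity plus scatter that grows with distance (synthetic)
        measure = 1.0 + rng.normal(0.0, 0.1, size=r.size) * r

        edges = np.arange(0.0, 1.0, 0.05)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (r >= lo) & (r < hi)
            if sel.any():
                dev = measure[sel].mean() - 1.0   # deviation from unity in this annulus
                print(f"{lo:.2f}-{hi:.2f} Rs: deviation from unity = {dev:+.3f}")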

  6. Typing mineral deposits using their grades and tonnages in an artificial neural network

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    2003-01-01

    A test of the ability of a probabilistic neural network to classify deposits into types on the basis of deposit tonnage and average Cu, Mo, Ag, Au, Zn, and Pb grades is conducted. The purpose is to examine whether this type of system might serve as a basis for integrating geoscience information available in large mineral databases to classify sites by deposit type. Benefits of proper classification of many sites in large regions are relatively rapid identification of terranes permissive for deposit types and recognition of specific sites perhaps worth exploring further. Total tonnages and average grades of 1,137 well-explored deposits identified in published grade and tonnage models representing 13 deposit types were used to train and test the network. Tonnages were transformed by logarithms and grades by square roots to reduce effects of skewness. All values were scaled by subtracting the variable's mean and dividing by its standard deviation. Half of the deposits were selected randomly to be used in training the probabilistic neural network and the other half were used for independent testing. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class (type) and each variable (grade or tonnage). Deposit types were selected to challenge the neural network. For many types, tonnages or average grades are significantly different from other types, but individual deposits may plot in the grade and tonnage space of more than one type. Porphyry Cu, porphyry Cu-Au, and porphyry Cu-Mo types have similar tonnages and relatively small differences in grades. Redbed Cu deposits typically have tonnages that could be confused with porphyry Cu deposits, and also contain Cu and, in some situations, Ag. Cyprus and kuroko massive sulfide types have about the same tonnages and Cu, Zn, Ag, and Au grades. Polymetallic vein, sedimentary exhalative Zn-Pb, and Zn-Pb skarn types contain many of the same metals. Sediment-hosted Au, Comstock Au-Ag, and low-sulfide Au-quartz vein types are principally Au deposits with differing amounts of Ag. Given the intent to test the neural network under the most difficult conditions, an overall 75% agreement between the experts and the neural network is considered excellent. Among the largest classification errors are skarn Zn-Pb and Cyprus massive sulfide deposits classed by the neural network as kuroko massive sulfides (24% and 63% error, respectively). Other large errors are the classification of 92% of porphyry Cu-Mo as porphyry Cu deposits. Most of the larger classification errors involve 25 or fewer training deposits, suggesting that some errors might be the result of small sample size. About 91% of the gold deposit types were classed properly and 98% of porphyry Cu deposits were classed as some type of porphyry Cu deposit. An experienced economic geologist would not make many of the classification errors that were made by the neural network because the geologic settings of deposits would be used to reduce errors. In a separate test, the probabilistic neural network correctly classed 93% of 336 deposits in eight deposit types when trained with presence or absence of 58 minerals and six generalized rock types. The overall success rate of the probabilistic neural network when trained on tonnage and average grades would probably be more than 90% with additional information on the presence of a few rock types.
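
    A minimal Parzen-window ("probabilistic neural network") classifier sketch on synthetic features: each class is scored by the average Gaussian kernel between the query and that class's training points. For simplicity this uses one sigma per class rather than the per-class, per-variable sigmas described above, and the log/square-root/z-score preprocessing is omitted.

        import numpy as np

        def pnn_predict(X_train, y_train, x, sigmas):
            """Return the class with the largest average Gaussian-kernel density."""
            scores = {}
            for c in np.unique(y_train):
                Xc = X_train[y_train == c]
                d2 = np.sum((Xc - x) ** 2, axis=1)
                scores[c] = np.mean(np.exp(-d2 / (2.0 * sigmas[c] ** 2)))
            return max(scores, key=scores.get)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 7)), rng.normal(1.5, 1, (50, 7))])
        y = np.array([0] * 50 + [1] * 50)
        print(pnn_predict(X, y, rng.normal(1.4, 1, 7), sigmas={0: 1.0, 1: 1.0}))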

  7. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion approaches for the Laplace transform are used to obtain semianalytic solutions. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in the Laplace space. The comparison of the methods on some simple functions is aimed at establishing the accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method regarding accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
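
    A generic Fourier-series (Durbin-type) inversion sketch, checked on F(s) = 1/(s + 1), whose exact inverse is exp(-t); the damping and truncation parameters are common rule-of-thumb choices, not the paper's tuned settings.

        import numpy as np

        def durbin_invert(F, t, T=20.0, a_T=5.0, N=2000):
            """Invert the Laplace transform F at times t (valid for 0 < t < 2T)."""
            a = a_T / T
            k = np.arange(1, N + 1)
            s = a + 1j * k * np.pi / T
            Fk = np.array([F(sk) for sk in s])
            t = np.atleast_1d(t)
            series = F(a).real / 2 + (Fk[None, :] *
                      np.exp(1j * np.outer(t, k) * np.pi / T)).real.sum(axis=1)
            return np.exp(a * t) / T * series

        ts = np.array([0.5, 1.0, 2.0])
        approx = durbin_invert(lambda s: 1.0 / (s + 1.0), ts)
        print(np.abs(approx - np.exp(-ts)))   # errors should be small for these settings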

  8. Awareness of Diagnostic Error among Japanese Residents: a Nationwide Study.

    PubMed

    Nishizaki, Yuji; Shinozaki, Tomohiro; Kinoshita, Kensuke; Shimizu, Taro; Tokuda, Yasuharu

    2018-04-01

    Residents' understanding of diagnostic error may differ between countries. We sought to explore the relationship between diagnostic error knowledge and self-study, clinical knowledge, and experience. Our nationwide study involved postgraduate year 1 and 2 (PGY-1 and -2) Japanese residents. The Diagnostic Error Knowledge Assessment Test (D-KAT) and General Medicine In-Training Examination (GM-ITE) were administered at the end of the 2014 academic year. D-KAT scores were compared with the benchmark scores of US residents. Associations between D-KAT score and gender, PGY, emergency department (ED) rotations per month, mean number of inpatients handled at any given time, and mean daily minutes of self-study were also analyzed, both with and without adjusting for GM-ITE scores. Student's t test was used for comparisons, and linear mixed models and structural equation models (SEM) were used to explore associations with D-KAT or GM-ITE scores. The mean D-KAT score among Japanese PGY-2 residents was significantly lower than that of their US PGY-2 counterparts (6.2 vs. 8.3, p < 0.001). GM-ITE scores correlated with ED rotations (≥6 rotations: 2.14; 0.16-4.13; p = 0.03), inpatient caseloads (5-9 patients: 1.79; 0.82-2.76; p < 0.001), and average daily minutes of self-study (≥91 min: 2.05; 0.56-3.53; p = 0.01). SEM revealed that D-KAT scores were directly associated with GM-ITE scores (β = 0.37, 95% CI: 0.34-0.41) and indirectly associated with ED rotations (β = 0.06, 95% CI: 0.02-0.10), inpatient caseload (β = 0.04, 95% CI: 0.003-0.08), and average daily minutes of study (β = 0.13, 95% CI: 0.09-0.17). Knowledge regarding diagnostic error among Japanese residents was poor compared with that among US residents. D-KAT scores correlated strongly with GM-ITE scores, and the latter scores were positively associated with a greater number of ED rotations, larger caseload (though only up to 15 patients), and more time spent studying.

  9. Application of Molecular Dynamics Simulations in Molecular Property Prediction I: Density and Heat of Vaporization

    PubMed Central

    Wang, Junmei; Tingjun, Hou

    2011-01-01

    Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic detail. Even if a FF is specific to studying the structures, dynamics, and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as particle mesh Ewald (PME) for calculating electrostatic energies and Langevin dynamics for scaling temperatures, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities were measured at pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization is discussed. An encouraging finding in vdW parameterization is that both densities and heats of vaporization approach their "ideal" values in a synchronous fashion when vdW parameters are tuned. The subsequent hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization can significantly reduce the prediction errors. We believe that GAFF can greatly improve its performance in predicting liquid properties of organic molecules after a systematic vdW parameterization, which will be reported in a separate paper. PMID:21857814
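
    Heats of vaporization in such studies are typically obtained from the standard working relation (stated here as background; the paper's specific protocols differ in simulation details):

        \Delta H_{\mathrm{vap}} \;\approx\; \langle E_{\mathrm{pot}}^{\mathrm{gas}}\rangle
        \;-\; \frac{\langle E_{\mathrm{pot}}^{\mathrm{liq}}\rangle}{N} \;+\; RT

    Here N is the number of molecules in the liquid box, and the RT term accounts for the pressure-volume difference between the ideal-gas phase and the (negligible-volume) liquid.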

  10. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
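
    A minimal sketch of the wrapper idea with illustrative choices (the per-voxel features and the random-forest corrector are stand-ins, not the paper's exact design): learn where the host segmentation disagrees with manual labels on training images, then apply the learned corrector to host segmentations of new images.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.ensemble import RandomForestClassifier

        def voxel_features(image, host_seg):
            """Per-voxel intensity, host label, and a smoothed spatial-context feature."""
            context = uniform_filter(host_seg.astype(float), size=5)
            return np.column_stack([image.ravel(), host_seg.ravel(), context.ravel()])

        rng = np.random.default_rng(0)
        img = rng.normal(size=(32, 32, 32))
        host = (img > 0.3).astype(int)      # host method's (systematically tight) labels
        manual = (img > 0.0).astype(int)    # reference manual labels

        corrector = RandomForestClassifier(n_estimators=50, random_state=0)
        corrector.fit(voxel_features(img, host), manual.ravel())   # learn the errors

        new_img = rng.normal(size=(32, 32, 32))
        new_host = (new_img > 0.3).astype(int)
        corrected = corrector.predict(voxel_features(new_img, new_host)) \
                             .reshape(new_img.shape)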

  11. Mapping global surface water inundation dynamics using synergistic information from SMAP, AMSR2 and Landsat

    NASA Astrophysics Data System (ADS)

    Du, J.; Kimball, J. S.; Galantowicz, J. F.; Kim, S.; Chan, S.; Reichle, R. H.; Jones, L. A.; Watts, J. D.

    2017-12-01

    A method to monitor global land surface water (fw) inundation dynamics was developed by exploiting the enhanced fw sensitivity of L-band (1.4 GHz) passive microwave observations from the Soil Moisture Active Passive (SMAP) mission. The L-band fw (fwLBand) retrievals were derived using SMAP H-polarization brightness temperature (Tb) observations and predefined L-band reference microwave emissivities for water and land endmembers. Potential soil moisture and vegetation contributions to the microwave signal were represented from overlapping higher frequency Tb observations from AMSR2. The resulting fwLBand global record has high temporal sampling (1-3 days) and 36-km spatial resolution. The fwLBand annual averages corresponded favourably (R=0.84, p<0.001) with a 250-m resolution static global water map (MOD44W) aggregated at the same spatial scale, while capturing significant inundation variations worldwide. The monthly fwLBand averages also showed seasonal inundation changes consistent with river discharge records within six major US river basins. An uncertainty analysis indicated generally reliable fwLBand performance for major land cover areas and under low to moderate vegetation cover, but with lower accuracy for detecting water bodies covered by dense vegetation. Finer resolution (30-m) fwLBand results were obtained for three sub-regions in North America using an empirical downscaling approach and ancillary global Water Occurrence Dataset (WOD) derived from the historical Landsat record. The resulting 30-m fwLBand retrievals showed favourable classification accuracy for water (commission error 31.84%; omission error 28.08%) and land (commission error 0.82%; omission error 0.99%) and seasonal wet and dry periods when compared to independent water maps derived from Landsat-8 imagery. The new fwLBand algorithms and continuing SMAP and AMSR2 operations provide for near real-time, multi-scale monitoring of global surface water inundation dynamics, potentially benefiting hydrological monitoring, flood assessments, and global climate and carbon modeling.
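
    The fwLBand retrieval described above rests on a two-endmember linear mixing of land and water microwave emissivities within each grid cell. A hedged sketch of that inversion, where the endmember values are placeholders rather than the SMAP calibration constants, and the observed emissivity would come from H-polarization Tb after the AMSR2-based soil moisture and vegetation corrections:

    ```python
    def retrieve_fw(emissivity_obs, e_water=0.35, e_land=0.90):
        """Invert a linear mixing model for fractional open water:
        e_obs = fw * e_water + (1 - fw) * e_land.
        Endmember emissivities here are illustrative placeholders."""
        fw = (e_land - emissivity_obs) / (e_land - e_water)
        return min(max(fw, 0.0), 1.0)  # clamp to the physical range [0, 1]

    print(retrieve_fw(0.80))  # ~0.18 water fraction for this toy value
    ```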

  12. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for direct use, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. Ensemble averaging was disabled, while the internal coordinate conversion made by the instrument was maintained, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The large volume of data was handled smoothly by the instrument, no abnormal battery consumption was recorded, and a long and unique series of very high frequency current measurements was collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by the a posteriori averaging is ˜15 cm s-1, with an asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
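
    The √N reduction, and the error floor that correlated external noise imposes on it, can be illustrated with synthetic single-ping data: white instrument noise averages down as 1/√N, while a slowly varying "turbulent" component does not. A toy sketch (invented numbers, not the mooring data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pings = 200_000
    # Synthetic single-ping velocities (cm/s): white instrument noise plus a
    # slowly varying component that is correlated across blocks of 100 pings.
    instrument = rng.normal(0.0, 10.0, n_pings)
    turbulent = np.repeat(rng.normal(0.0, 3.0, n_pings // 100), 100)
    pings = instrument + turbulent

    for n in (1, 10, 50, 100):
        # A-posteriori standard deviation of ensemble means of size n
        means = pings[: n_pings - n_pings % n].reshape(-1, n).mean(axis=1)
        print(f"N={n:4d}: std of ensemble mean = {means.std():5.2f} cm/s")
    # The white part shrinks ~1/sqrt(N); the correlated part sets a floor.
    ```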

  13. Using video recording to identify management errors in pediatric trauma resuscitation.

    PubMed

    Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon

    2006-03-01

    To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.

  14. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm achieves an average relative compression ratio of 70.64%, higher than the existing methods, and in some cases up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
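
    Two of the figures of merit used above are straightforward to compute; the sketch below uses one common definition of each (compression ratio as the relative size reduction, and percent-root-mean-square distortion) on a toy signal, not the proposed MVAR codec:

    ```python
    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """Relative compression ratio in percent (size reduction)."""
        return 100.0 * (1.0 - compressed_bits / original_bits)

    def prd(original, reconstructed):
        """Percent-root-mean-square distortion; 0 for lossless compression."""
        o = np.asarray(original, float)
        r = np.asarray(reconstructed, float)
        return 100.0 * np.sqrt(np.sum((o - r) ** 2) / np.sum(o ** 2))

    x = np.sin(np.linspace(0, 8 * np.pi, 1024))
    print(compression_ratio(1024 * 16, 1024 * 16 * 0.3))  # 70.0 (%)
    print(prd(x, x))                                      # 0.0, lossless
    ```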

  15. Groundwater recharge estimation in semi-arid zone: a study case from the region of Djelfa (Algeria)

    NASA Astrophysics Data System (ADS)

    Ali Rahmani, S. E.; Chibane, Brahim; Boucefiène, Abdelkader

    2017-09-01

    Deficiency of surface water resources in semi-arid areas makes groundwater the preferred resource for meeting the needs of a growing population. In this research we quantify the rate of groundwater recharge using a new hybrid model that takes into account the annual rainfall, the average annual temperature, and the geological characteristics of the area. The hybrid model, a combination of a general hydrogeological model and a hydrological model, was tested and calibrated using a chemical tracer approach, the chloride mass balance (CMB) method. We tested the model on an aquifer complex in the region of Djelfa (Algeria). Performance of the model was verified by five criteria [Nash-Sutcliffe efficiency, mean absolute error (MAE), root mean square error (RMSE), the coefficient of determination, and the arithmetic mean error (AME)]. These new approximations facilitate groundwater management in semi-arid areas; the model refines and improves the model developed by Chibane et al. and gives results with low uncertainty. A new recharge class diagram was established from our model to obtain the groundwater recharge value quickly for any area in a semi-arid region, using temperature and rainfall.
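
    Of the five verification criteria listed, the Nash-Sutcliffe efficiency is the least standard to code up. A short, generic implementation (illustrative numbers, not the Djelfa series):

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
        1 is a perfect fit; <= 0 means no better than the observed mean."""
        o = np.asarray(observed, float)
        s = np.asarray(simulated, float)
        return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

    obs = np.array([12.0, 15.0, 9.0, 20.0])   # e.g. CMB recharge, mm/yr
    sim = np.array([11.0, 16.0, 10.0, 18.0])  # hybrid-model estimates
    print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
    ```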

  16. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.

    PubMed

    Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

    The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of the individual forecasting models, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.

  17. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean-velocity to acoustic-path-velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity, introducing error into the mean velocity computation. Typically, for a 200-meter path length the resultant error is less than one percent, but for a 1,000-meter path length the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions about equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
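
    The quoted sensitivities can be checked with simple error propagation. Assuming the path velocity converts to flow velocity through a 1/cos θ factor, a small angle error propagates as dV/V = tan θ · dθ, so at a typical path angle of 45° a one-degree error gives roughly the 2% bias stated above, while path length enters linearly. A quick numeric check under those assumptions:

    ```python
    import math

    theta = math.radians(45.0)   # assumed acoustic-path angle to the flow
    d_theta = math.radians(1.0)  # one-degree error in the surveyed angle

    # V = V_path / cos(theta), so dV/V = tan(theta) * d_theta
    angle_bias = math.tan(theta) * d_theta
    print(f"1 deg angle error  -> {angle_bias * 100:.1f}% velocity bias")

    # Path length enters linearly: the bias equals the relative length error.
    length_bias = 1.0 / 100.0    # one meter in 100 meters
    print(f"1 m in 100 m error -> {length_bias * 100:.1f}% velocity bias")
    ```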

  18. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an output image mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method. Comparing the average error values produced by the two algorithms, the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
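
    The XOR error used for scoring counts the pixels on which the automatic and manual masks disagree. A minimal sketch of one common form of that metric, normalized by the manual lesion area (toy masks, not the study images):

    ```python
    import numpy as np

    def xor_error(auto_mask, manual_mask):
        """Border-detection XOR error: area of disagreement between the
        automatic and manual masks, normalized by the manual lesion area."""
        a = np.asarray(auto_mask, dtype=bool)
        m = np.asarray(manual_mask, dtype=bool)
        return np.logical_xor(a, m).sum() / m.sum()

    manual = np.zeros((100, 100), dtype=bool); manual[20:80, 20:80] = True
    auto = np.zeros_like(manual); auto[22:82, 20:80] = True  # shifted 2 px
    print(f"XOR error = {xor_error(auto, manual):.3f}")      # 0.067
    ```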

  19. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For structural health monitoring (SHM) of seismic acceleration, wireless sensor networks (WSNs) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSNs, since SHM systems with many WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies at 100 Hz or below. In addition, the response motions on the upper floors of a structure are excited at its natural frequency, resulting in shaking confined to a narrow band. Exploiting these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data: the algorithm applies the discrete Fourier transform to move to the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data is transmitted through computer networks, the data is restored by the inverse Fourier transform in the receiving node. This paper evaluates the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32 of its original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
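
    The scheme described above amounts to keeping only the Fourier coefficients inside the structurally relevant band and reconstructing by the inverse transform at the receiver. A hedged sketch of that pipeline on a synthetic accelerogram, with made-up band edges and sampling rate rather than the paper's settings:

    ```python
    import numpy as np

    fs = 1024.0                       # sampling rate, Hz (assumed)
    t = np.arange(0, 4.0, 1.0 / fs)
    # Synthetic "structural response": a 3 Hz natural-frequency component
    # plus broadband noise standing in for ground motion.
    rng = np.random.default_rng(1)
    accel = np.sin(2 * np.pi * 3.0 * t) + 0.2 * rng.normal(size=t.size)

    spectrum = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(accel.size, 1.0 / fs)

    band = (freqs >= 1.0) & (freqs <= 8.0)  # keep only the band of interest
    compressed = spectrum[band]             # only these coefficients are sent
    print(f"kept {band.sum()} of {freqs.size} coefficients")

    # Receiving node: zero-fill the discarded bins, invert the transform.
    restored_spectrum = np.zeros_like(spectrum)
    restored_spectrum[band] = compressed
    restored = np.fft.irfft(restored_spectrum, n=accel.size)
    print(f"average error = {np.mean(np.abs(accel - restored)):.3f}")
    ```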

  20. Flavour and identification threshold detection overview of Slovak adepts for certified testing.

    PubMed

    Vietoris, VladimIr; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian

    2016-07-01

    During the certification process for sensory assessors of the Slovak certification body, we obtained results for basic taste thresholds and lifestyle habits. 500 adults with a food industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438) and the 56+ y.o. group the highest (2.0). The average error value was 1.510 for women, compared to 1.477 for men. People with allergies had an average error ratio of 1.437, compared to 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (error values increased with age). The highest error rate made by men was in metallic taste (24%), comparable to that made by women (22%). Men made more errors in salty taste (19%) than women (10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.

  1. The importance of temporal inequality in quantifying vegetated filter strip removal efficiencies

    NASA Astrophysics Data System (ADS)

    Gall, H. E.; Schultz, D.; Mejia, A.; Harman, C. J.; Raj, C.; Goslee, S.; Veith, T.; Patterson, P. H.

    2017-12-01

    Vegetated filter strips (VFSs) are best management practices (BMPs) commonly implemented adjacent to row-cropped fields to trap overland transport of sediment and other constituents often present in agricultural runoff. VFSs are generally reported to have high sediment removal efficiencies (i.e., 70 - 95%); however, these values are typically calculated as an average of removal efficiencies observed or simulated for individual events. We argue that due to: (i) positively correlated sediment concentration-discharge relationships; (ii) strong temporal inequality exhibited by sediment transport; and (iii) decreasing VFS performance with increasing flow rates, VFS removal efficiencies over annual time scales may be significantly lower than the per-event values or averages typically reported in the literature and used in decision-making models. By applying a stochastic approach to a two-component VFS model, we investigated the extent of the disparity between two calculation methods: averaging efficiencies from each event over the course of one year, versus reporting the total annual load reduction. We examined the effects of soil texture, concentration-discharge relationship, and VFS slope to reveal the potential errors that may be incurred by ignoring the effects of temporal inequality in quantifying VFS performance. Simulation results suggest that errors can be as low as < 2% and as high as > 20%, with the differences between the two methods of removal efficiency calculations greatest for: (i) soils with high percentage of fine particulates; (ii) VFSs with higher slopes; and (iii) strongly positive concentration-discharge relationships. These results can aid in annual-scale decision making for achieving downstream water quality goals.
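
    The disparity at issue is between averaging per-event efficiencies and computing the efficiency of annual totals: when a few high-flow events carry most of the load and are trapped least efficiently, the two diverge. A toy illustration with invented event loads, not the simulation output:

    ```python
    import numpy as np

    # Invented per-event sediment loads entering a VFS (kg) and the
    # removal fraction for each event; the big event is trapped worst.
    load_in = np.array([5.0, 8.0, 12.0, 400.0])      # temporal inequality
    efficiency = np.array([0.95, 0.92, 0.90, 0.55])  # drops at high flow
    load_out = load_in * (1.0 - efficiency)

    per_event_avg = efficiency.mean()
    annual = 1.0 - load_out.sum() / load_in.sum()
    print(f"average of per-event efficiencies: {per_event_avg:.1%}")  # 83.0%
    print(f"annual load-reduction efficiency:  {annual:.1%}")         # 57.2%
    ```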

  2. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.

    PubMed

    Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew

    2014-07-08

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
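
    The reweighting schemes compared above differ only in how the average Boltzmann factor of the boost potential is estimated from samples. A compact, generic sketch of the exponential average versus the cumulant expansion to second order (this is not PyReweighting itself; units of kBT, synthetic near-Gaussian boost samples):

    ```python
    import numpy as np

    def log_reweight_factor(dV_kBT, method="cumulant2"):
        """Log of <exp(dV/kBT)> over boost-potential samples dV.

        exponential : direct exponential average (noisy for broad dV)
        cumulant2   : 2nd-order cumulant expansion, log<e^x> ~ <x> + var(x)/2
        """
        x = np.asarray(dV_kBT, float)
        if method == "exponential":
            return np.log(np.mean(np.exp(x)))
        if method == "cumulant2":
            return x.mean() + 0.5 * x.var()
        raise ValueError(method)

    rng = np.random.default_rng(2)
    dV = rng.normal(6.0, 3.0, 5000)  # broad, near-Gaussian boost distribution
    print(f"exponential: {log_reweight_factor(dV, 'exponential'):.2f}")
    print(f"cumulant2:   {log_reweight_factor(dV, 'cumulant2'):.2f}")
    # For Gaussian dV the exact answer is mean + var/2 = 6 + 4.5 = 10.5; the
    # exponential average is dominated by the rare high-dV frames and noisy.
    ```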

  3. Sediment reallocations due to erosive rainfall events in the Three Gorges Reservoir Area, Central China

    NASA Astrophysics Data System (ADS)

    Stumpf, Felix; Goebes, Philipp; Schmidt, Karsten; Schindewolf, Marcus; Schönbrodt-Stitt, Sarah; Wadoux, Alexandre; Xiang, Wei; Scholten, Thomas

    2017-04-01

    Soil erosion by water poses a major threat to the Three Gorges Reservoir Area in China. A detailed assessment of soil conservation measures requires a tool that spatially identifies sediment reallocations due to rainfall-runoff events in catchments. We applied EROSION 3D as a physically based soil erosion and deposition model in a small mountainous catchment. Generally, we aim to provide a methodological frame that facilitates model parametrization in a data-scarce environment and to identify sediment sources and deposits. We used digital soil mapping techniques to generate spatially distributed soil property information for parametrization. For model calibration and validation, we continuously monitored rainfall, runoff and sediment yield in the catchment for a period of 12 months. The model performed well for large events (sediment yield > 1 Mg), with an average individual model error of 7.5%, while small events showed an average error of 36.2%. We focused on the large events to evaluate reallocation patterns. Erosion occurred in 11.1% of the study area with an average erosion rate of 49.9 Mg ha-1. Erosion mainly occurred on crop rotation areas, with a spatial proportion of 69.2% for 'corn-rapeseed' and 69.1% for 'potato-cabbage'. Deposition occurred on 11.0% of the study area. Forested areas (9.7%), infrastructure (41.0%), cropland (corn-rapeseed: 13.6%, potato-cabbage: 11.3%) and grassland (18.4%) were affected by deposition. Because the vast majority of annual sediment yields (80.3%) were associated with a few large erosive events, the modelling approach provides a useful tool to spatially assess soil erosion control and conservation measures.

  4. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation

    PubMed Central

    2015-01-01

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2–3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting “PyReweighting” is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/. PMID:25061441

  5. Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5

    NASA Astrophysics Data System (ADS)

    Ausati, Shadi; Amanollahi, Jamil

    2016-10-01

    Since Sanandaj is considered one of the most polluted cities of Iran, prediction of any type of pollution, especially of suspended PM2.5 particles, which cause many diseases, could contribute to public health through timely announcements ahead of PM2.5 increases. To predict the PM2.5 concentration in the air of Sanandaj, hybrid models consisting of an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR), were used. In these models the suspended PM2.5 particle data were the dependent variable, and the independent variables were the air quality data, including PM2.5, PM10, SO2, NO2, CO and O3, and the meteorological data, including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH) and daily wind speed (WS), for the year 2014 in Sanandaj. Among the models used, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218 and mean absolute error (MAE) = 3.4644 in the training phase, and R2 = 0.79, RMSE = 5.0324 and MAE = 3.2565 in the testing phase, performed best in predicting this phenomenon. It can be concluded that the hybrid models predict PM2.5 concentration more accurately than the linear model.

  6. A Feasibility Study for Simultaneous Measurements of Water Vapor and Precipitation Parameters using a Three-frequency Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.; Tian, L.

    2005-01-01

    The radar return powers from a three-frequency radar, with the center frequency at 22.235 GHz and the upper and lower frequencies chosen to have equal water vapor absorption coefficients, can be used to estimate water vapor density and precipitation parameters. A linear combination of differential measurements, between the center and lower frequencies on the one hand and the upper and lower frequencies on the other, provides an estimate of differential water vapor absorption. The coupling between the precipitation and water vapor estimates is generally weak but increases with bandwidth and the amount of non-Rayleigh scattering of the hydrometeors. The coupling leads to biases in the estimates of water vapor absorption that are related primarily to the phase state and the median mass diameter of the hydrometeors. For a down-looking radar, path-averaged estimates of water vapor absorption are possible under rain-free as well as raining conditions by using the surface returns at the three frequencies. Simulations of the water vapor attenuation retrieval show that the largest source of error typically arises from the variance in the measured radar return powers. Although the error can be mitigated by a combination of a high pulse repetition frequency, pulse compression, and averaging in range and time, the radar receiver must be stable over the averaging period. For fractional bandwidths of 20% or less, the potential exists for simultaneous measurements at the three frequencies with a single antenna and transceiver, thereby significantly reducing the cost and mass of the system.

  7. Cross-Layer Design for Space-Time coded MIMO Systems over Rice Fading Channel

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Zhou, Tingting; Liu, Xiaoshuai; Yin, Xin

    A cross-layer design (CLD) scheme for space-time coded MIMO systems over a Rice fading channel is presented by combining adaptive modulation and automatic repeat request, and the corresponding system performance is investigated in detail. The fading-gain switching thresholds subject to a target packet error rate (PER) and a fixed power constraint are derived. From these results, and using the generalized Marcum Q-function, calculation formulae for the average spectrum efficiency (SE) and PER of the system with CLD are derived, yielding closed-form expressions for both. These expressions include some existing expressions for the Rayleigh channel as special cases. With these expressions, the system performance in a Rice fading channel is evaluated effectively. Numerical results verify the validity of the theoretical analysis. The results show that system performance in the Rice channel improves as the Rice factor increases, outperforming that in the Rayleigh channel.

  8. Quantifying nonhomogeneous colors in agricultural materials. Part II: comparison of machine vision and sensory panel evaluations.

    PubMed

    Balaban, M O; Aparicio, J; Zotarelli, M; Sims, C

    2008-11-01

    The average colors of mangos and apples were measured using machine vision. A method to quantify the perception of nonhomogeneous colors by sensory panelists was developed: untrained panelists selected three colors out of several reference colors and the perceived percentage of the total sample area covered by each. Differences between the average colors perceived by panelists and those from the machine vision system were reported as DeltaE values (color difference error). The effects on DeltaE of color nonhomogeneity and of using real samples versus their on-screen images in the sensory panels were evaluated. In general, samples with more nonuniform colors had higher DeltaE values, suggesting that panelists had more difficulty evaluating more nonhomogeneous colors. There was no significant difference in DeltaE values between the real fruits and their screen images; therefore, images can be used to evaluate color instead of the real samples.

  9. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.

  10. Channel correlation and BER performance analysis of coherent optical communication systems with receive diversity over moderate-to-strong non-Kolmogorov turbulence.

    PubMed

    Fu, Yulong; Ma, Jing; Tan, Liying; Yu, Siyuan; Lu, Gaoyuan

    2018-04-10

    In this paper, new expressions of the channel-correlation coefficient and its components (the large- and small-scale channel-correlation coefficients) for a plane wave are derived for a horizontal link in moderate-to-strong non-Kolmogorov turbulence using a generalized effective atmospheric spectrum which includes finite-turbulence inner and outer scales and high-wave-number "bump". The closed-form expression of the average bit error rate (BER) of the coherent free-space optical communication system is derived using the derived channel-correlation coefficients and an α-μ distribution to approximate the sum of the square root of arbitrarily correlated Gamma-Gamma random variables. Analytical results are provided to investigate the channel correlation and evaluate the average BER performance. The validity of the proposed approximation is illustrated by Monte Carlo simulations. This work will help with further investigation of the fading correlation in spatial diversity systems.

  11. Monthly mean forecast experiments with the GISS model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Atlas, R. M.; Kuo, E.

    1976-01-01

    The GISS general circulation model was used to compute global monthly mean forecasts for January 1973, 1974, and 1975 from initial conditions on the first day of each month and constant sea surface temperatures. Forecasts were evaluated in terms of global and hemispheric energetics, zonally averaged meridional and vertical profiles, forecast error statistics, and monthly mean synoptic fields. Although it generated a realistic mean meridional structure, the model did not adequately reproduce the observed interannual variations in the large scale monthly mean energetics and zonally averaged circulation. The monthly mean sea level pressure field was not predicted satisfactorily, but annual changes in the Icelandic low were simulated. The impact of temporal sea surface temperature variations on the forecasts was investigated by comparing two parallel forecasts for January 1974, one using climatological ocean temperatures and the other observed daily ocean temperatures. The use of daily updated sea surface temperatures produced no discernible beneficial effect.

  12. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results in neurosurgical planning. The Q-Simple qualitative descriptor system aims to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
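
    In its simplest Gaussian form, BMA weighting of competing forecasts can be estimated with an EM loop. The sketch below is a simplified illustration of that core idea, not the authors' implementation: it uses synthetic data and assumes a fixed, common error standard deviation for all models:

    ```python
    import numpy as np

    def bma_weights(obs, preds, sigma=1.0, n_iter=200):
        """EM estimate of BMA weights for K competing forecasts, assuming
        each model's error is Gaussian with a common std (simplification)."""
        obs = np.asarray(obs, float)
        preds = np.asarray(preds, float)  # shape (K, T)
        k = preds.shape[0]
        w = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: responsibility of each model for each observation
            lik = np.exp(-0.5 * ((obs - preds) / sigma) ** 2)  # (K, T)
            z = w[:, None] * lik
            z /= z.sum(axis=0, keepdims=True)
            # M-step: weights are the mean responsibilities
            w = z.mean(axis=1)
        return w

    rng = np.random.default_rng(3)
    truth = rng.normal(10.0, 2.0, 500)
    models = np.stack([truth + rng.normal(0.0, 0.5, 500),  # good model
                       truth + rng.normal(1.0, 1.5, 500),  # biased model
                       rng.normal(10.0, 2.0, 500)])        # uninformative
    w = bma_weights(truth, models)
    print("BMA weights:", np.round(w, 2))  # best model gets highest weight
    bma_forecast = w @ models              # expected BMA prediction
    ```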

  14. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected at the accessioning stage; 4 were detected later during specimen processing, but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors went undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processing and minimizing batch size, with staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  15. SU-F-J-42: Comparison of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Cranial Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2016-06-15

    Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate the isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, representing clinical treatment sites. CBCT and ExacTrac x-ray images were taken with the phantom located at each isocenter. The patient study included thirteen patients; CBCT and ExacTrac x-ray images were taken at each patient’s treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and the residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, and 0.22±0.17 mm, respectively. The average residual-error differences in rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in both the phantom and patient studies where isocenters were located in or close to the frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and within 0.4 degree in the rotational directions.

  16. The influence of outliers on results of wet deposition measurements as a function of measurement strategy

    NASA Astrophysics Data System (ADS)

    Slanina, J.; Möls, J. J.; Baard, J. H.

    The results of a wet deposition monitoring experiment, carried out with eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy of and uncertainties in wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands, over the period 1 March 1983 to 31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. The systematic bias, using one sampler, varies typically from 2 to 20% for bulk elements and from 10 to 500% for trace elements. Severe problems are encountered in the case of Zn, Cu, Cr, Ni and especially Cd. For the sensitive detection of trends, generally more than one sampler per measuring station is necessary, as the standard deviation in the yearly averaged wet deposition is typically 10-20% relative for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.

  17. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    NASA Astrophysics Data System (ADS)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the south of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by fitting a power curve to the untransformed data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are derived and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally-separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% on the River Owenabue. The average monthly error is -12% on the River Bandon and +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record, and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
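
    The two rating-curve constructions mentioned, log-log linear regression and a direct power fit, can each be expressed in a few lines. A sketch with invented flow/concentration pairs (not the Bandon or Owenabue data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Invented paired observations: flow Q (m3/s), concentration C (mg/L)
    Q = np.array([0.5, 1.2, 3.0, 7.5, 15.0, 30.0])
    C = np.array([4.0, 9.0, 20.0, 55.0, 90.0, 210.0])

    # Method 1: linear regression on the logarithms of C = a * Q**b
    b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
    print(f"log-log fit:  C = {np.exp(log_a):.2f} * Q^{b:.2f}")

    # Method 2: power curve fitted directly to the untransformed data
    (a2, b2), _ = curve_fit(lambda q, a, b: a * q ** b, Q, C, p0=(5.0, 1.0))
    print(f"direct power: C = {a2:.2f} * Q^{b2:.2f}")
    ```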

  18. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring.

    PubMed

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J

    2012-11-21

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a Trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and a real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.
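
    Fitting the state-augmented linear model reduces to ordinary least squares once the respiratory signal and its delayed copy are assembled into a design matrix. A one-dimensional sketch of that step, with synthetic signals, a scalar projection, and a fixed known delay τ rather than the clinical implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(0, 60.0, 1.0)                  # kV imaging times, ~1 Hz
    R = np.sin(2 * np.pi * t / 4.0)              # external respiratory signal
    tau = 0.5                                    # delay (assumed known here)
    R_tau = np.sin(2 * np.pi * (t - tau) / 4.0)  # delayed copy R(t - tau)

    # "True" internal motion along one axis, plus projection noise
    p = 8.0 * R + 2.0 * R_tau + 1.0 + rng.normal(0, 0.3, t.size)

    # Least-squares solution of p ~ a R(t) + b R(t - tau) + c
    A = np.column_stack([R, R_tau, np.ones_like(t)])
    a, b, c = np.linalg.lstsq(A, p, rcond=None)[0]
    print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")  # ~8, ~2, ~1

    # Real-time position estimate from the continuous respiratory signal:
    T_hat = a * R + b * R_tau + c
    ```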

  19. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring

    NASA Astrophysics Data System (ADS)

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J.

    2012-11-01

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.

  20. Optical control of the Advanced Technology Solar Telescope.

    PubMed

    Upton, Robert

    2006-08-10

    The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented.
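
    At its core, the closed-loop strategy described above is a linear inversion: WFS measurements are mapped back to rigid-body corrections through the pseudoinverse of the influence (Jacobian) matrix. A generic sketch of one correction step, with a random matrix and made-up dimensions standing in for the ATST sensitivities:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_wfs, n_rb = 27, 10  # WFS measurement channels x RB degrees of freedom
    J = rng.normal(size=(n_wfs, n_rb))  # influence matrix (from modeling)

    x_true = rng.normal(0.0, 1.0, n_rb)            # unknown misalignments
    y = J @ x_true + rng.normal(0.0, 0.05, n_wfs)  # noisy WFS measurements

    # Least-squares estimate of the perturbations and the correction command
    x_hat = np.linalg.pinv(J) @ y
    residual = x_true - x_hat  # state error remaining after one correction
    print(f"rms state error after one step: {np.sqrt(np.mean(residual**2)):.3f}")
    ```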

  1. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    PubMed Central

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  2. Image guidance during head-and-neck cancer radiation therapy: analysis of alignment trends with in-room cone-beam computed tomography scans.

    PubMed

    Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy

    2012-06-01

    On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. Shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions were recorded for 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for the uncorrected, five-CBCT-corrected, and 10-CBCT-corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions decreased this percentage to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimate of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefits those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
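
    A short sketch of the correction scheme as described: the mean of the first five CBCT couch shifts is taken as the systematic setup error and subtracted from subsequent daily shifts. The shift data below are synthetic, for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    systematic = np.array([3.0, -2.0, 1.5])       # mm, ML/SI/AP (hypothetical)
    daily_shifts = systematic + rng.normal(scale=1.5, size=(30, 3))  # 30 fractions

    correction = daily_shifts[:5].mean(axis=0)    # estimate from the first 5 CBCTs
    residual = daily_shifts - correction          # residual after the correction

    r3d_before = np.linalg.norm(daily_shifts, axis=1)
    r3d_after = np.linalg.norm(residual, axis=1)
    print(f"mean 3D error, uncorrected: {r3d_before.mean():.1f} mm")
    print(f"mean 3D error, corrected:   {r3d_after.mean():.1f} mm")
    print(f"fractions >5 mm after correction: {(r3d_after > 5).sum()} of 30")
    ```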

  3. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
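
    A hedged sketch of the contrast the abstract draws, using scipy's gaussian_kde for the KDE BTC and a histogram as the plain PT estimate. The lognormal arrival times and the default bandwidth are stand-ins; the study's optimal-h (ANA) selection is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    n_particles = 10**3                      # the study varies this over 10^2-10^8
    arrivals = rng.lognormal(mean=1.0, sigma=0.5, size=n_particles)

    t = np.linspace(0.0, 10.0, 500)

    # Non-KDE PT estimate: binned particle counts (noisy at low concentrations)
    hist, _ = np.histogram(arrivals, bins=50, range=(0.0, 10.0), density=True)

    # KDE estimate: a smooth BTC from the same particles (scipy's default bandwidth)
    btc_kde = gaussian_kde(arrivals)(t)
    print(f"histogram peak: {hist.max():.3f}, KDE peak: {btc_kde.max():.3f}")
    ```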

  4. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation [SPC] and ENSCI-Droplet Measurement Technologies [DMT]) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are >0.6 hPa in the free troposphere, with nearly a third >1.0 hPa at 26 km, where a 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can exceed 10 percent (~25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes performed best, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated/reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (~5 percent).
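
    A minimal illustration of how a pressure offset propagates into the ozone mixing ratio via O3MR = p_O3 / P; the numbers are hypothetical, chosen near the 26 km conditions discussed above.

    ```python
    p_o3 = 12e-3       # ozonesonde O3 partial pressure near 26 km, hPa (hypothetical)
    p_true = 20.0      # true ambient pressure near 26 km, hPa (hypothetical)
    offset = -1.95     # radiosonde pressure offset, hPa (iMet-P average quoted above)

    o3mr_true = p_o3 / p_true                 # O3MR = p_O3 / P
    o3mr_reported = p_o3 / (p_true + offset)  # computed with the offset pressure
    print(f"O3 mixing ratio error: {100 * (o3mr_reported / o3mr_true - 1):+.1f}%")
    ```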

  5. Estimating peak-flow frequency statistics for selected gaged and ungaged sites in naturally flowing streams and rivers in Idaho

    USGS Publications Warehouse

    Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.

    2016-06-27

    The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction=46.4 percent; pseudo-R2>92 percent) and region 5 in central Idaho (average standard error of prediction=30.3 percent; pseudo-R2>95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction=103 percent; pseudo-R2<78 percent) compared to other regions because few streamgages in region 7 met the criteria for inclusion in the study, and the region’s semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage area ratio-adjustment method, using ratio exponents estimated using generalized least-squares regression, was presented as an alternative to the regional regression equations if peak-flow estimates are desired at an ungaged site that is close to a streamgage selected for inclusion in this study. The alternative drainage area ratio-adjustment method is appropriate for use when the drainage area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process used in the current study resulted in fewer streamgages used in the current study than in the 1982, 2002, and 2003 studies.
Fewer streamgages used and the selection of different explanatory variables were likely causes of increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. The revised statistical procedures and increased streamgage screening applied in the current study most likely resulted in a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated into the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences to flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
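
    A hedged sketch of the drainage-area-ratio adjustment described above; the exponent is a placeholder, not a value from the report.

    ```python
    def peak_flow_from_gage(q_gaged: float, area_ungaged: float,
                            area_gaged: float, exponent: float) -> float:
        """T-year peak flow at an ungaged site, scaled from a nearby gaged site."""
        ratio = area_ungaged / area_gaged
        if not 0.5 <= ratio <= 1.5:
            raise ValueError("method recommended only for area ratios of 0.5-1.5")
        return q_gaged * ratio ** exponent

    # Hypothetical: a 100-year peak of 850 ft^3/s at a 42 mi^2 gaged basin,
    # transferred to a 55 mi^2 ungaged site with a placeholder exponent of 0.8
    print(f"{peak_flow_from_gage(850.0, 55.0, 42.0, exponent=0.8):.0f} ft^3/s")
    ```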

  6. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error-correction code and demonstrate the nondestructive discrimination of GHZ states on the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combination of these errors.
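
    The paper's n-qudit GHZ-based code is not reproduced here; as a hedged, self-contained stand-in, this NumPy sketch runs the standard 3-qubit bit-flip repetition code, the simplest example of detecting and automatically correcting a bit-flip error.

    ```python
    import numpy as np

    def apply_x(state, qubit):
        """Apply a Pauli-X (bit flip) to `qubit` (0 = leftmost) of a 3-qubit state."""
        out = np.empty_like(state)
        for idx in range(8):
            out[idx ^ (1 << (2 - qubit))] = state[idx]
        return out

    # Encode a|0> + b|1>  ->  a|000> + b|111>  (the bit-flip repetition code)
    a, b = 0.6, 0.8
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b

    # Channel: a bit-flip error on one randomly chosen qubit
    state = apply_x(state, int(np.random.default_rng(3).integers(3)))

    # Syndrome: parities of qubit pairs (0,1) and (1,2). They are identical on
    # every basis state with support, so they identify the flipped qubit without
    # collapsing the encoded superposition.
    support = [idx for idx in range(8) if abs(state[idx]) > 0]
    q0, q1, q2 = (support[0] >> 2) & 1, (support[0] >> 1) & 1, support[0] & 1
    syndrome = (q0 ^ q1, q1 ^ q2)
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
    if flipped is not None:
        state = apply_x(state, flipped)   # automatic correction

    print("recovered amplitudes:", state[0b000], state[0b111])  # back to a, b
    ```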

  7. Improving Global Reanalyses and Short Range Forecast Using TRMM and SSM/I-Derived Precipitation and Moisture Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; deSilva, Arlindo M.

    2000-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a "1+1"D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the "1+1"D designation. The scheme minimizes the least-square differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This "1+1"D scheme, in its generalization to four dimensions, is related to the standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, the radiation, the upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the improvements in the forecasts are smaller than the improvements in the time-averaged assimilation fields, indicating that using these data types is effective in correcting biases and other errors of the forecast model in data assimilation.

  8. Improving Global Reanalyses and Short-Range Forecast Using TRMM and SSM/I-Derived Precipitation and Moisture Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.

    1999-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a 1+1D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the 1+1D designation. The scheme minimizes the least-square differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This 1+1D scheme, in its generalization to four dimensions, is related to the standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, the radiation, the upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the improvements in the forecasts are smaller than the improvements in the time-averaged assimilation fields, indicating that using these data types is effective in correcting biases and other errors of the forecast model in data assimilation.

  9. Long-term cliff retreat and erosion hotspots along the central shores of the Monterey Bay National Marine Sanctuary

    USGS Publications Warehouse

    Moore, Laura J.; Griggs, Gary B.

    2002-01-01

    Quantification of cliff retreat rates for the southern half of Santa Cruz County, CA, USA, located within the Monterey Bay National Marine Sanctuary, using the softcopy/geographic information system (GIS) methodology results in average cliff retreat rates of 7–15 cm/yr between 1953 and 1994. The coastal dunes at the southern end of Santa Cruz County migrate seaward and landward through time and display net accretion between 1953 and 1994, which is partially due to development. In addition, three critically eroding segments of coastline with high average erosion rates ranging from 20 to 63 cm/yr are identified as erosion ‘hotspots’. These locations include: Opal Cliffs, Depot Hill and Manresa. Although cliff retreat is episodic, spatially variable at the scale of meters, and the factors affecting cliff retreat vary along the Santa Cruz County coastline, there is a compensation between factors affecting retreat such that over the long-term the coastline maintains a relatively smooth configuration. The softcopy/GIS methodology significantly reduces errors inherent in the calculation of retreat rates in high-relief areas (e.g. erosion rates generated in this study are generally correct to within 10 cm) by removing errors due to relief displacement. Although the resulting root mean squared error for erosion rates is relatively small, simple projections of past erosion rates are inadequate to provide predictions of future cliff position. Improved predictions can be made for individual coastal segments by using a mean erosion rate and the standard deviation as guides to future cliff behavior in combination with an understanding of processes acting along the coastal segments in question. This methodology can be applied on any high-relief coast where retreat rates can be measured.

  10. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days, once they contaminate the baroclinic zones. After 16 days, the globally averaged error saturates, suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
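
    A toy identical-twin experiment in the Lorenz (1963) system, illustrating the error-growth-to-saturation behavior described above on a far smaller model; all parameters are the usual textbook values.

    ```python
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz (1963) system (fine for illustration)."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x), x * (rho - z) - y,
                                      x * y - beta * z])

    nature = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):                       # spin-up onto the attractor
        nature = lorenz_step(nature)

    twin = nature + np.array([1e-6, 0.0, 0.0])  # small-amplitude perturbation
    for step in range(1, 3001):
        nature, twin = lorenz_step(nature), lorenz_step(twin)
        if step % 500 == 0:
            # Error grows roughly exponentially, then saturates at the attractor scale
            print(f"step {step:4d}: error = {np.linalg.norm(nature - twin):.3e}")
    ```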

  11. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics caused 1) by evolution of the official algorithms used to process the data and 2) by differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
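
    A toy subsampling experiment in the spirit of the sampling-error definition above: the "truth" is the monthly mean a continuously observing satellite would see, and the sampled mean comes from visits every 12 hours. The rain statistics are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    hours = 24 * 30                        # one month of hourly "truth" rain
    rel_errors = []
    for trial in range(500):
        rain = rng.gamma(shape=0.1, scale=5.0, size=hours)   # intermittent rain
        truth = rain.mean()                # what a stationary satellite would report
        visits = rain[rng.integers(12)::12]                  # ~2 overpasses per day
        rel_errors.append((visits.mean() - truth) / truth)

    rms = np.sqrt(np.mean(np.square(rel_errors)))
    print(f"rms relative sampling error: {100 * rms:.1f}%")
    ```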

  12. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  13. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  14. The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance

    ERIC Educational Resources Information Center

    Castellano, Katherine E.; McCaffrey, Daniel F.

    2017-01-01

    Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…

  15. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…

  16. Performance of Physical Examination Skills in Medical Students during Diagnostic Medicine Course in a University Hospital of Northwest China

    PubMed Central

    Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin

    2014-01-01

    This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and percentage of examination errors differed significantly between male and female students and among the different body parts examined (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685

  17. Performance of physical examination skills in medical students during diagnostic medicine course in a University Hospital of Northwest China.

    PubMed

    Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S; Liu, Zhengwen; Lv, Yi; Shi, Bingyin

    2014-01-01

    This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and percentage of examination errors differed significantly between male and female students and among the different body parts examined (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students.

  18. Assessing the Library Homepages of COPLAC Institutions for Section 508 Accessibility Errors: Who's Accessible, Who's Not, and How the Online WebXACT Assessment Tool Can Help

    ERIC Educational Resources Information Center

    Huprich, Julia; Green, Ravonne

    2007-01-01

    The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…

  19. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  20. An Algorithm for Retrieving Land Surface Temperatures Using VIIRS Data in Combination with Multi-Sensors

    PubMed Central

    Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao

    2014-01-01

    A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, but a water vapor channel is absent in VIIRS data. To overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. Analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground station LST data indicates a mean retrieval accuracy of −0.395 K, with a standard deviation of 1.490 K, in regions with vegetation and water cover. In addition, the retrieval results for the test data were compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products: 82.63% of the difference values are within ±1 K, and 17.37% are between ±1 and ±2 K. In conclusion, with the advantages of multiple sensors fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919

  1. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, Thiago V.; Morley, Steven K.

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  2. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  3. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.

  4. Performance factors of mobile rich media job aids for community health workers

    PubMed Central

    Florez-Arango, Jose F; Dunn, Kim; Zhang, Jiajie

    2011-01-01

    Objective To study and analyze the possible benefits on performance of community health workers using point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. Design A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, with 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Measurements Error rate per case and task, protocol compliance. Results A total of 1394 cases were evaluated. Intervention reduces errors by an average of 33.15% (p=0.001) and increases protocol compliance 30.18% (p<0.001). Limitations Medical cases were presented on human patient simulators in a laboratory setting, not on real patients. Conclusion These results indicate encouraging prospects for mHealth technologies in general, and the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries. PMID:21292702

  5. Objective sea level pressure analysis for sparse data areas

    NASA Technical Reports Server (NTRS)

    Druyan, L. M.

    1972-01-01

    A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.

  6. Performance factors of mobile rich media job aids for community health workers.

    PubMed

    Florez-Arango, Jose F; Iyengar, M Sriram; Dunn, Kim; Zhang, Jiajie

    2011-01-01

    To study and analyze the possible benefits on performance of community health workers using point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, with 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Outcome measures were error rate per case and task and protocol compliance. A total of 1394 cases were evaluated. Intervention reduces errors by an average of 33.15% (p = 0.001) and increases protocol compliance 30.18% (p < 0.001). A limitation is that medical cases were presented on human patient simulators in a laboratory setting, not on real patients. These results indicate encouraging prospects for mHealth technologies in general, and the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries.

  7. A Modified Double Multiple Nonlinear Regression Constitutive Equation for Modeling and Prediction of High Temperature Flow Behavior of BFe10-1-2 Alloy

    NASA Astrophysics Data System (ADS)

    Cai, Jun; Wang, Kuaishe; Shi, Jiamin; Wang, Wen; Liu, Yingying

    2018-01-01

    Constitutive analysis for hot working of BFe10-1-2 alloy was carried out using experimental stress-strain data from isothermal hot compression tests over a wide temperature range of 1,023-1,273 K and a strain rate range of 0.001-10 s⁻¹. A constitutive equation based on modified double multiple nonlinear regression was proposed, considering the independent effects of strain, strain rate, and temperature and their interrelation. The flow stress data predicted by the developed equation were compared with the experimental data. The correlation coefficient (R), average absolute relative error (AARE), and relative errors were introduced to verify the validity of the developed constitutive equation. Subsequently, a comparative study was made of the capability of a strain-compensated Arrhenius-type constitutive model. The results showed that the developed constitutive equation based on modified double multiple nonlinear regression could predict the flow stress of BFe10-1-2 alloy with good correlation and generalization.
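
    A small sketch of the two fit measures named above, R and AARE, computed between experimental and predicted flow stresses; the arrays are illustrative, not data from the paper.

    ```python
    import numpy as np

    experimental = np.array([112.0, 98.5, 143.2, 120.7, 88.9])  # MPa, illustrative
    predicted = np.array([109.8, 101.2, 140.5, 124.1, 86.7])    # MPa, illustrative

    r = np.corrcoef(experimental, predicted)[0, 1]              # correlation coeff.
    aare = 100.0 * np.mean(np.abs((experimental - predicted) / experimental))
    print(f"R = {r:.4f}, AARE = {aare:.2f}%")
    ```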

  8. Testing the generalized complementary relationship of evaporation with continental-scale long-term water-balance data

    NASA Astrophysics Data System (ADS)

    Szilagyi, Jozsef; Crago, Richard; Qualls, Russell J.

    2016-09-01

    The original and revised versions of the generalized complementary relationship (GCR) of evaporation (ET) were tested with six-digit Hydrologic Unit Code (HUC6) level long-term (1981-2010) water-balance data (sample size of 334). The two versions of the GCR were calibrated with Parameter-Elevation Regressions on Independent Slopes Model (PRISM) mean annual precipitation (P) data and validated against water-balance ET (ETwb) as the difference of mean annual HUC6-averaged P and United States Geological Survey HUC6 runoff (Q) rates. The original GCR overestimates P in about 18% of the PRISM grid points covering the contiguous United States in contrast with 12% of the revised version. With HUC6-averaged data the original version has a bias of -25 mm yr⁻¹ vs the revised version's -17 mm yr⁻¹, and it tends to more significantly underestimate ETwb at high values than the revised one (slope of the best fit line is 0.78 vs 0.91). At the same time it slightly outperforms the revised version in terms of the linear correlation coefficient (0.94 vs 0.93) and the root-mean-square error (90 vs 92 mm yr⁻¹).

  9. Long-term evaluation of orbital dynamics in the Sun-planet system considering axial-tilt

    NASA Astrophysics Data System (ADS)

    Bakhtiari, Majid; Daneshjou, Kamran

    2018-05-01

    In this paper, the axial-tilt (obliquity) effect of planets on the motion of a planetary orbiter in prolonged space missions has been investigated in the presence of the Sun's gravity. The proposed model is based on non-simplified perturbed dynamic equations of planetary orbiter motion. From a new point of view, the dynamic equations for a disturbing body in an elliptic, inclined, three-dimensional orbit are derived. The accuracy of this non-simplified method is validated against the dual-averaged method applied to a generalized Earth-Moon system. It is shown that the short-time oscillations neglected in the dual-averaged technique can accumulate and grow into remarkable errors over a prolonged evolution. After validation, the effects of the planet's axial-tilt on the eccentricity, inclination, and right ascension of the ascending node of the orbiter are investigated. Moreover, a generalized model is provided to study the effects of third-body inclination and eccentricity on orbit characteristics. It is shown that the planet's axial-tilt is key to facilitating some significant changes in orbital elements in long-term missions, and short-time oscillations must be considered for accurate prolonged evaluation.

  10. Cost-effectiveness of the U.S. Geological Survey's stream-gaging programs in Massachusetts and Rhode Island

    USGS Publications Warehouse

    Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.

    1985-01-01

    The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for 91 gaging stations being operated in Massachusetts and Rhode Island. Some of the stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average standard errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)

  11. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, W. Howard

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ~3.
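
    A schematic of the McICA idea for intuition: the full ICA averages every subcolumn for every k-term, while McICA pairs each k-term with one randomly chosen subcolumn, which is unbiased but noisy. The "flux" function and cloud states are toy stand-ins for a real radiative transfer solve.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_subcolumns, n_kterms = 100, 30
    cloud = rng.random(n_subcolumns)                 # per-subcolumn cloud state
    weights = rng.dirichlet(np.ones(n_kterms))       # k-term quadrature weights

    def flux(cloud_state, k):
        """Toy monochromatic 'flux' for k-term k; stands in for a real RT solve."""
        return np.exp(-(k + 1) * 0.05 * cloud_state)

    # Full ICA: average over all subcolumns for every k-term (expensive reference)
    ica = sum(weights[k] * flux(cloud, k).mean() for k in range(n_kterms))

    # McICA: one randomly sampled subcolumn per k-term (one evaluation per term);
    # unbiased with respect to the full ICA, with conditional random noise
    samples = rng.integers(n_subcolumns, size=n_kterms)
    mcica = sum(weights[k] * flux(cloud[samples[k]], k) for k in range(n_kterms))

    print(f"full ICA = {ica:.4f}, McICA = {mcica:.4f}")
    ```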

  12. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
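
    For orientation, the classical Ruze (1966) gain-loss result that underlies models of this kind: G/G0 = exp(-(4πε/λ)²) for rms surface error ε. A quick parametric check:

    ```python
    import math

    def ruze_gain_loss_db(eps_over_lambda: float) -> float:
        """Gain loss in dB for a given rms surface error / wavelength ratio."""
        gain_ratio = math.exp(-((4.0 * math.pi * eps_over_lambda) ** 2))
        return -10.0 * math.log10(gain_ratio)

    for ratio in (0.01, 0.02, 0.05):
        print(f"eps/lambda = {ratio:.2f}: gain loss = {ruze_gain_loss_db(ratio):.2f} dB")
    ```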

  13. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  14. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts Kp to within about 1 unit, even though persistence beats it.
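
    A sketch of the verification statistics listed above for a predicted-versus-observed Kp series; the arrays are illustrative.

    ```python
    import numpy as np

    observed = np.array([2.0, 3.0, 5.0, 4.0, 1.0, 2.0, 6.0, 3.0])   # illustrative
    predicted = np.array([2.3, 2.7, 4.2, 4.5, 1.4, 2.1, 5.0, 3.6])  # illustrative

    me = np.mean(predicted - observed)                    # mean error
    mae = np.mean(np.abs(predicted - observed))           # mean absolute error
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))  # root mean square error
    mult_bias = predicted.mean() / observed.mean()        # multiplicative bias
    corr = np.corrcoef(predicted, observed)[0, 1]         # correlation coefficient
    print(f"ME={me:+.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}  "
          f"bias={mult_bias:.2f}  r={corr:.2f}")
    ```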

  15. TRMM On-Orbit Performance Re-Accessed After Control Change

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve

    2006-01-01

    The Tropical Rainfall Measuring Mission (TRMM) spacecraft, a joint mission between the U.S. and Japan, launched onboard an H-II rocket on November 27, 1997, and transitioned in August 2001 from an average operating altitude of 350 kilometers to 402.5 kilometers. Due to problems using the Earth Sensor Assembly (ESA) at the higher altitude, TRMM switched to a backup attitude control mode. Prior to the orbit boost, TRMM controlled pitch and roll to the local vertical using ESA measurements while using gyro data to propagate yaw attitude between yaw updates from the Sun sensors. After the orbit boost, a Kalman filter used 3-axis gyro data with Sun sensor and magnetometer data to estimate onboard attitude. While originally intended to meet a degraded attitude accuracy of 0.7 degrees, the new control mode met the original 0.2 degree attitude accuracy requirement after improving onboard ephemeris prediction and adjusting the magnetometer calibration onboard. Independent roll attitude checks using a science instrument, the Precipitation Radar (PR), which was built in Japan, provided a novel insight into the pointing performance. The PR data helped identify the pointing errors after the orbit boost, track the performance improvements, and show subtle effects from ephemeris errors and gyro bias errors. It also helped identify average bias trends throughout the mission. Roll errors tracked by the PR from sample orbits pre-boost and post-boost are shown in Figure 1. Prior to the orbit boost the largest attitude errors were due to occasional interference in the ESA. These errors were sometimes larger than 0.2 degrees in pitch and roll, but usually less, as estimated from a comprehensive review of the attitude excursions using gyro data. Sudden jumps in the onboard roll show up as spikes in the reported attitude since the control responds within tens of seconds to null the pointing error. The PR-estimated roll tracks well with an estimate of the roll history propagated using gyro data. After the orbit boost, the attitude errors shown by the PR roll have a smooth sine-wave type signal because of the way that attitude errors propagate with the use of gyro data. Yaw errors couple to roll at the orbit period with a ¼-orbit lag. By tracking the amplitude, phase, and bias of the sinusoidal PR roll error signal, it was shown that the average pitch rotation axis tends to be offset from orbit normal in a direction perpendicular to the Sun direction, as shown in Figure 2 for a 200-day period following the orbit boost. This is a result of the higher accuracy and stability of the Sun sensor measurements relative to the magnetometer measurements used in the Kalman filter. In November 2001, a magnetometer calibration adjustment was uploaded which improved the pointing performance, keeping the roll and yaw amplitudes within about 0.1 degrees. After the boost, onboard ephemeris errors had a direct effect on the pitch pointing, being used to compute the Earth pointing reference frame. Improvements after the orbit boost have kept the onboard ephemeris errors generally below 20 kilometers. Ephemeris errors have secondary effects on roll and yaw, especially at high beta angles when pitch effects can couple into roll and yaw. This is illustrated in Figure 3. The onboard roll bias trends as measured by PR data show correlations with the Kalman filter's gyro bias error.
This particularly shows up after yaw turns (every 2 to 4 weeks) as shown in Figure 3, when a slight roll bias is observed while the onboard computed gyro biases settle to new values. As for longer term trends, the PR data shows that the roll bias was influenced by Earth horizon radiance effects prior to the boost, changing values at yaw turns, and indicated a long term drift as shown in Figure 4. After the boost, the bias variations were smaller and showed some possible correlation with solar beta angle, probably due to sun sensor misalignment effects.
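
    The amplitude, phase, and bias tracking described above can be posed as an ordinary least-squares fit of a once-per-orbit sinusoid to the PR-derived roll history. The sketch below is our own illustration of that idea, not the mission's flight or ground software; all names are assumed.

      import numpy as np

      def fit_orbit_sinusoid(t, roll, period):
          """Fit roll(t) ~ a*sin(w t) + b*cos(w t) + c, one cycle per orbit."""
          w = 2.0 * np.pi / period
          A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
          (a, b, c), *_ = np.linalg.lstsq(A, roll, rcond=None)
          amplitude = np.hypot(a, b)    # sinusoid amplitude
          phase = np.arctan2(b, a)      # phase in radians
          return amplitude, phase, c    # c is the roll bias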

  16. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m/yr is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
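
    The error estimation scheme can be sketched as follows: if the independent morning and afternoon estimates of the same monthly mean each carry a comparable random-error variance s^2, the variance of their difference is about 2 s^2, so s can be recovered from the observed differences. A hypothetical Python illustration (names are ours, not the paper's):

      import numpy as np

      def random_error_percent(am, pm):
          """Random error (%) from independent morning/afternoon estimates.

          am, pm: arrays of monthly-mean rain estimates over many boxes;
          Var(am - pm) ~ 2 s^2 when both carry random-error variance s^2."""
          s = (am - pm).std(ddof=1) / np.sqrt(2.0)
          monthly_mean = (0.5 * (am + pm)).mean()
          return 100.0 * s / monthly_mean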

  17. NOTE: Optimization of megavoltage CT scan registration settings for thoracic cases on helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake

    2007-08-01

    This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, then generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm), and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing, registered with the full image fusion selection technique and fine resolution, will result in a low residual error most of the time. Large manual corrections by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be used clinically for lung cancer patients treated on tomotherapy. Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.
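
    The abstract does not reproduce the root-mean-square equation itself; one plausible reading, sketched here with hypothetical names, is an RMS over the three axes of the registration result minus the known shift corrected for the systematic setup difference.

      import numpy as np

      def residual_error(applied, recovered, systematic):
          """RMS residual of an MVCT/kVCT registration (all inputs in mm).

          applied:    known phantom translation, e.g. [5, 0, 0]
          recovered:  shift reported by the registration
          systematic: setup difference measured at the planning isocenter"""
          applied = np.asarray(applied, float)
          recovered = np.asarray(recovered, float)
          systematic = np.asarray(systematic, float)
          resid = recovered - (applied + systematic)
          return np.sqrt(np.mean(resid ** 2))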

  18. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models: (1) the population-average parameters have an important interpretation for public health applications, and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable, and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials, and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
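
    For readers who want to try the marginal mean approach, the sketch below fits a GEE with an exchangeable working correlation in statsmodels and reports bias-reduced (Mancl-DeRouen type) sandwich standard errors, one common small-sample correction; the data file and column names are placeholders, and this is not necessarily one of the four corrections studied in the paper.

      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical data: one row per subject, with columns
      # outcome (0/1), treated (0/1), period, cluster
      df = pd.read_csv("stepped_wedge.csv")

      model = sm.GEE.from_formula(
          "outcome ~ treated + C(period)",      # marginal mean model
          groups="cluster",
          data=df,
          family=sm.families.Binomial(),
          cov_struct=sm.cov_struct.Exchangeable(),
      )
      res = model.fit()

      print(res.bse)  # usual robust sandwich SEs (anti-conservative here)
      print(res.standard_errors(cov_type="bias_reduced"))  # corrected SEs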

  19. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79.2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0.8% (assignation of sex) to 68.6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose-designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  20. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.

  1. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean monthly, seasonal, and annual zonally averaged meridional profiles. The individual months comprising the 29-month set were selected as representing the best available total flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements, along with an error analysis including both the uncertainty and the standard error of the mean, is presented.

  2. The effect of talker and intonation variability on speech perception in noise in children with dyslexia

    PubMed Central

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart

    2013-01-01

    Purpose To determine whether children with dyslexia (DYS) are more affected than age-matched average readers (AR) by talker and intonation variability when perceiving speech in noise. Method Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally-produced consonant-vowel (CV) tokens in multi-talker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination was investigated with the same conditions. Results DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly-encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Conclusions Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations. PMID:22761322

  3. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    PubMed

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity from the perspective of the adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching three sequential targets with the dominant hand, with a specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2 years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments, although the groups adapted differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task: general with respect to task goal performance and specific with respect to the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The effect of talker and intonation variability on speech perception in noise in children with dyslexia.

    PubMed

    Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart

    2013-02-01

    In this study, the authors aimed to determine whether children with dyslexia (hereafter referred to as "DYS children") are more affected than children with average reading ability (hereafter referred to as "AR children") by talker and intonation variability when perceiving speech in noise. Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally produced CV tokens in multitalker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination were investigated with the same conditions. DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children, but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations.

  5. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map - water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits, and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary four-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging, and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2, or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy, and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.

  6. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6% and 19.1% average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
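
    The Nash-Sutcliffe model efficiency quoted above compares the squared error of the simulated discharge against the variance of the observations; a value of 1.0 is a perfect fit and 0.0 is no better than predicting the observed mean. A minimal sketch (our own helper, not the authors' code):

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """Nash-Sutcliffe efficiency of simulated vs. observed discharge."""
          obs = np.asarray(obs, float)
          sim = np.asarray(sim, float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)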

  7. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  8. Calculating pKa values for substituted phenols and hydration energies for other compounds with the first-order Fuzzy-Border continuum solvation model

    PubMed Central

    Sharma, Ity; Kaminski, George A.

    2012-01-01

    We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192

  9. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts.

    PubMed

    Lee, Hyung Joo; Gent, Janneane F; Leaderer, Brian P; Koutrakis, Petros

    2011-05-01

    To protect public health from PM2.5 air pollution, it is critical to identify the source types of PM2.5 mass and the chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF) was used to identify PM2.5 source types and quantify the source contributions to PM2.5 in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM2.5 mass, components, and source contributions were investigated. PMF analysis identified five source types: regional pollution traced by sulfur, motor vehicles, road dust, oil combustion, and sea salt. The sulfur-related regional pollution and the traffic source type were major contributors to PM2.5. Because ground-level PM2.5 monitoring sites are sparse, current epidemiological studies are susceptible to exposure measurement errors. The high correlations in concentrations and source contributions between different locations suggest limited spatial variability, and hence smaller exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures in health effects studies, using regional average concentrations or contributions from several PM2.5 monitors is more reliable than using data from the nearest central monitor. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. On Using a Space Telescope to Detect Weak-lensing Shear

    NASA Astrophysics Data System (ADS)

    Tung, Nathan; Wright, Edward

    2017-11-01

    Ignoring redshift dependence, the statistical performance of a weak-lensing survey is set by two numbers: the effective shape noise of the sources, which includes the intrinsic ellipticity dispersion and the measurement noise, and the density of sources that are useful for weak-lensing measurements. In this paper, we provide some general guidance for weak-lensing shear measurements from a “generic” space telescope by looking for the optimum wavelength bands to maximize the galaxy flux signal-to-noise ratio (S/N) and minimize ellipticity measurement error. We also calculate an effective galaxy number per square degree across different wavelength bands, taking into account the density of sources that are useful for weak-lensing measurements and the effective shape noise of sources. Galaxy data collected from the ultra-deep UltraVISTA Ks-selected and R-selected photometric catalogs (Muzzin et al. 2013) are fitted to radially symmetric Sérsic galaxy light profiles. The Sérsic galaxy profiles are then stretched to impose an artificial weak-lensing shear, and then convolved with a pure Airy Disk PSF to simulate imaging of weak gravitationally lensed galaxies from a hypothetical diffraction-limited space telescope. For our model calculations and sets of galaxies, our results show that the peak in the average galaxy flux S/N, the minimum average ellipticity measurement error, and the highest effective galaxy number counts all lie around the K-band near 2.2 μm.

  11. Estimating patient-specific soft-tissue properties in a TKA knee.

    PubMed

    Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A

    2016-03-01

    Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  12. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  13. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.

  14. Towards Automated Structure-Based NMR Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Jang, Richard; Gao, Xin; Li, Ming

    We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only 15N-labeled NMR data and avoid the added expense of 13C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, fivefold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.

  15. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, particularly a mapping process based on GIS with information about the current weather status at given coordinates in each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated by the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the more optimized the accuracy is.

  16. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
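
    A toy simulation in the spirit of the study (not the GAW 19 pipeline): draw a rare SNV under Hardy-Weinberg proportions, regress a null non-normal trait on the genotype, and tabulate how often p falls below the nominal level. All names and settings below are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2016)

      def empirical_type1(maf, n=2000, nsim=5000, alpha=0.05):
          """Empirical type I error of simple linear regression on a SNV."""
          hits = 0
          for _ in range(nsim):
              g = rng.binomial(2, maf, size=n)   # additive 0/1/2 genotypes
              y = rng.gamma(1.0, 1.0, size=n)    # skewed trait, no SNV effect
              if g.std() == 0:                   # monomorphic draw, skip
                  continue
              hits += stats.linregress(g, y).pvalue < alpha
          return hits / nsim

      for maf in (0.20, 0.05, 0.01):
          print(maf, empirical_type1(maf))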

  17. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by (1) dosimetric plan verification using gamma passing rates and dose volume metrics and (2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV as a function of error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).

  18. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow, and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
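
    Model 1 can be sketched compactly: Box-Cox transform the observed and forecasted inflows, then fit a first-order autoregressive model to the transformed errors. The sketch below omits the conditioning on weather classes and estimates a single Box-Cox lambda from the observations; both simplifications are ours.

      import numpy as np
      from scipy import stats

      def fit_error_model(obs, fcst):
          """Box-Cox transform, then AR(1) on the transformed errors.

          Requires strictly positive flows; lambda is fitted on obs only."""
          obs_t, lam = stats.boxcox(obs)
          fcst_t = stats.boxcox(fcst, lmbda=lam)
          e = obs_t - fcst_t                   # transformed forecast errors
          # AR(1): e[t] = phi * e[t-1] + eps, least-squares estimate of phi
          phi = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])
          sigma = (e[1:] - phi * e[:-1]).std(ddof=1)
          return lam, phi, sigma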

  19. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…

  20. Alcohol and liver cirrhosis mortality in the United States: comparison of methods for the analyses of time-series panel data models.

    PubMed

    Ye, Yu; Kerr, William C

    2011-01-01

    To explore various model specifications for estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least square, fixed effect, and multilevel models. Various specifications of the error term structure under each type of model were also examined. Different approaches to controlling for time trends and to using concurrent or accumulated consumption as predictors were also evaluated. When cirrhosis mortality was predicted by total alcohol, highly consistent estimates were found between ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.

  1. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  2. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

    A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973, and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)

  3. ERROR IN ANNUAL AVERAGE DUE TO USE OF LESS THAN EVERYDAY MEASUREMENTS

    EPA Science Inventory

    Long-term averages of the concentration of PM mass and components are of interest for determining compliance with annual averages, for developing exposure surrogates for cross-sectional epidemiologic studies of the long-term effects of PM, and for determination of aerosol sources by chem...

  4. August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine

    USGS Publications Warehouse

    Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.

    2003-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow- gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.

  5. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations with the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined as the following quantity summed and divided by the number of peaks: the square of the difference of an individual logarithm (base 10) of peak flow less the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with a MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
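
    Written out, the prose definition of the mean square error above is (in LaTeX, with Q_i the individual annual peak flows and N the number of peaks; the symbols are ours):

      \[
        \mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N}
          \left( \log_{10} Q_i - \overline{\log_{10} Q} \right)^{2},
        \qquad
        \overline{\log_{10} Q} = \frac{1}{N} \sum_{i=1}^{N} \log_{10} Q_i .
      \]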

  6. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  7. Cost effectiveness of the US Geological Survey's stream-gaging program in New York

    USGS Publications Warehouse

    Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.

    1986-01-01

    The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost-effective means of obtaining streamflow data. This report describes the stream-gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages operated in 1983. Those gages, as well as 189 crest-stage, stage-only, and groundwater gages, are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Operation of the 363-station program requires a budget of $1.068 million per year, and the average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a smaller budget would not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)

  8. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...

  9. Sources of error in estimating truck traffic from automatic vehicle classification data

    DOT National Transportation Integrated Search

    1998-10-01

    Truck annual average daily traffic estimation errors resulting from sample classification counts are computed in this paper under two scenarios. One scenario investigates an improper factoring procedure that may be used by highway agencies. The study...

  10. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for the decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L x L area exhibits a power law singularity as L → 0. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
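
    Schematically, the small-scale behaviour described above can be written as (the symbols C and gamma are our own shorthand for the model's strength parameter and 'fractal' exponent):

      \[
        \operatorname{Var}\!\left[ \bar{R}_{L} \right] \sim C \, L^{-\gamma}
        \quad \text{as } L \to 0,
      \]

    where \bar{R}_{L} denotes the rain rate averaged over an L x L box.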

  11. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions the error to include model error as a component of variance, and it correctly reduces the station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.

  12. Assessing satellite-derived start-of-season measures in the conterminous USA

    USGS Publications Warehouse

    Schwartz, Mark D.; Reed, Bradley C.; White, Michael A.

    2002-01-01

    National Oceanic and Atmospheric Administration (NOAA)-series satellites, carrying advanced very high-resolution radiometer (AVHRR) sensors, have allowed moderate resolution (1 km) measurements of the normalized difference vegetation index (NDVI) to be collected from the Earth's land surfaces for over 20 years. Across the conterminous USA, a readily accessible and decade-long data set is now available to study many aspects of vegetation activity in this region. One feature, the onset of deciduous plant growth at the start of the spring season (SOS) is of special interest, as it appears to be crucial for accurate computation of several important biospheric processes, and a sensitive measure of the impacts of global change. In this study, satellite-derived SOS dates produced by the delayed moving average (DMA) and seasonal midpoint NDVI (SMN) methods, and modelled surface phenology (spring indices, SI) were compared at widespread deciduous forest and mixed woodland sites during 1990–93 and 1995–99, and these three measures were also matched to native species bud-break data collected at the Harvard Forest (Massachusetts) over the same time period. The results show that both SOS methods are doing a modestly accurate job of tracking the general pattern of surface phenology, but highlight the temporal limitations of biweekly satellite data. Specifically, at deciduous forest sites: (1) SMN SOS dates are close in time to SI first bloom dates (average bias of +0.74 days), whereas DMA SOS dates are considerably earlier (average bias of −41.24 days) and also systematically earlier in late spring than in early spring; (2) SMN SOS tracks overall yearly trends in deciduous forests somewhat better than DMA SOS, but with larger average error (MAEs 8.64 days and 7.37 days respectively); and (3) error in both SOS techniques varies considerably by year. Copyright © 2002 Royal Meteorological Society.

  13. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data sets using the dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. The merged data were then produced as weighted sums of the individual members over the plateau. The dynamic BMA approach showed better performance than the individual members at 15 validation sites, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833. Moreover, BMA proved more robust with respect to seasonality, topography, and other parameters than traditional ensemble methods, including simple model averaging (SMA) and one-outlier-removed (OOR) averaging. Error analysis against the state-of-the-art IMERG product for the summer of 2014 further showed that BMA is superior for multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data sets in regions with limited gauges.
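    As an illustration (not part of the study itself), the sketch below shows the core of BMA-style merging: EM-estimated member weights followed by a weighted sum. It assumes Gaussian member PDFs with a fixed, shared spread sigma, whereas the paper also estimates member variances and interpolates the daily weights across the plateau by ordinary kriging; all names and numbers here are hypothetical.

```python
import numpy as np

def em_weights(members, gauge, sigma=1.0, n_iter=200):
    """Toy EM estimate of BMA weights at gauge locations.

    members : (n_members, n_obs) member precipitation estimates
    gauge   : (n_obs,) gauge observations used for calibration
    """
    m = members.shape[0]
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation,
        # assuming a Gaussian member PDF with fixed spread sigma.
        lik = np.exp(-0.5 * ((gauge - members) / sigma) ** 2)
        z = w[:, None] * lik
        z /= z.sum(axis=0, keepdims=True)
        # M-step: new weights are the mean responsibilities.
        w = z.mean(axis=1)
    return w

def bma_merge(members, weights):
    """Merged field as the weighted sum of the member fields."""
    weights = np.asarray(weights) / np.sum(weights)
    return weights @ members

rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 3.0, size=500)          # synthetic daily rain at gauges
members = truth + rng.normal(0.0, [[1.0], [2.0], [4.0]], (3, 500))
w = em_weights(members, truth)
print("weights:", np.round(w, 3))              # favors the least noisy member
print("merged RMSE:", np.sqrt(np.mean((bma_merge(members, w) - truth) ** 2)))
```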

  14. Accuracy of the dose-shift approximation in estimating the delivered dose in SBRT of lung tumors considering setup errors and breathing motions.

    PubMed

    Karlsson, Kristin; Lax, Ingmar; Lindbäck, Elias; Poludniowski, Gavin

    2017-09-01

    Geometrical uncertainties can result in a delivered dose to the tumor different from that estimated in the static treatment plan. The purpose of this project was to investigate the accuracy of the dose calculated to the clinical target volume (CTV) with the dose-shift approximation, in stereotactic body radiation therapy (SBRT) of lung tumors considering setup errors and breathing motion. The dose-shift method was compared with a beam-shift method with dose recalculation. Included were 10 patients (10 tumors) selected to represent a variety of SBRT-treated lung tumors in terms of tumor location, CTV volume, and tumor density. An in-house developed toolkit within a treatment planning system allowed the shift of either the dose matrix or the beam isocenter with dose recalculation, to simulate setup errors and breathing motion. Setup shifts of different magnitudes (up to 10 mm) and directions, as well as breathing with different peak-to-peak amplitudes (up to 10:5:5 mm), were modeled. The resulting dose-volume histograms (DVHs) were recorded and dose statistics were extracted. Generally, both the dose-shift and beam-shift methods resulted in calculated doses lower than the static planned dose, although the minimum dose (D98%) exceeded the prescribed dose in all cases for setup shifts up to 5 mm. The dose-shift method also generally underestimated the dose compared with the beam-shift method. For clinically realistic systematic displacements of less than 5 mm, the results demonstrated that in the minimum-dose region within the CTV, the dose-shift method was accurate to 2% (root-mean-square error). Breathing motion only marginally degraded the dose distributions. Averaged over the patients and shift directions, the dose-shift approximation was determined to be accurate to approximately 2% (RMS) within the CTV, for clinically relevant geometrical uncertainties in SBRT of lung tumors.

  15. High order cell-centered scheme totally based on cell average

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Yu; Cai, Qing-Dong

    2018-05-01

    This work clarifies the concept of cell average by pointing out the differences between the cell average and the cell-centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. Interpolation based on cell averages is constructed, and a high-order QUICK-like numerical scheme is designed for such interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
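    To make the distinction concrete, here is a one-dimensional illustration (ours, not the paper's): for a smooth $u$ and uniform cell width $h$,

\[
\bar{u}_i \;=\; \frac{1}{h}\int_{x_i-h/2}^{x_i+h/2} u(x)\,dx \;=\; u(x_i) \;+\; \frac{h^2}{24}\,u''(x_i) \;+\; O(h^4),
\]

    so treating a stored cell average as the pointwise centroid value is only second-order accurate; a genuinely high-order scheme must interpolate from the cell averages themselves.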

  16. "Reliability generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)": Correction to Herrington et al. (2016).

    PubMed

    2016-10-01

    Reports an error in "Reliability Generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)" by Hayley M. Herrington, Timothy B. Smith, Erika Feinauer and Derek Griner (Journal of Counseling Psychology, Advanced Online Publication, Mar 17, 2016, np). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-13160-001.) Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data from other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large compared with non-response and sampling errors. Good fits were obtained between the seasonal kill distribution of the actual hunting data and the negative binomial distribution, and between the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictably biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The technique described is highly efficient at reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill in large samples. The graphical method is less efficient at removing response bias errors in responses on seasonal hunting activity, where an average of 60 percent of the bias was removed.

  18. SU-F-T-320: Assessing Placement Error of Optically Stimulated Luminescent in Vivo Dosimeters Using Cone-Beam Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riegel, A; Klein, E; Tariq, M

    Purpose: Optically-stimulated luminescent dosimeters (OSLDs) are increasingly utilized for in vivo dosimetry of complex radiation delivery techniques such as intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Evaluation of clinical uncertainties such as placement error has not been performed. This work retrospectively investigates the magnitude of placement error using cone-beam computed tomography (CBCT) and its effect on measured/planned dose agreement. Methods: Each OSLD was placed at a physicist-designated location on the patient surface on a weekly basis. The location was given in terms of a gantry angle and a two-dimensional offset from central axis. The OSLDs were placed before daily image guidance. We identified 77 CBCTs from 25 head-and-neck patients who received IMRT or VMAT, where OSLDs were visible on the CT image. Grossly misplaced OSLDs were excluded (e.g. wrong laterality). CBCTs were registered with the treatment plan and the distance between the planned and actual OSLD location was calculated in two dimensions in the beam's eye view. Distances were correlated with measured/planned dose percent differences. Results: OSLDs were grossly misplaced for 5 CBCTs (6.4%). For the remaining 72 CBCTs, average placement error was 7.0±6.0 mm. These errors were not correlated with measured/planned dose percent differences (R² = 0.0153). Generalizing the dosimetric effect of placement errors may be unreliable. Conclusion: Correct placement of OSLDs for IMRT and VMAT treatments is critical to accurate and precise in vivo dosimetry. Small placement errors could produce large disagreement between measured and planned dose. Further work includes expansion to other treatment sites, examination of planned dose at the actual point of OSLD placement, and the influence of image-guided shifts on measured/planned dose agreement.

  19. A radio-frequency sheath model for complex waveforms

    NASA Astrophysics Data System (ADS)

    Turner, M. M.; Chabert, P.

    2014-04-01

    Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.

  20. Introduction to a system for implementing neural net connections on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1988-01-01

    Neural networks have attracted much interest recently, and using parallel architectures to simulate neural networks is a natural and necessary application. The SIMD model of parallel computation is chosen, because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm permitting the formation of arbitrary connections between the neurons. A feature is the ability to add new connections quickly. It also has error recovery ability and is robust over a variety of network topologies. Simulations of the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the product of the average number of connections per neuron and the diameter of the interconnection network.

  2. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure-species equations of state. The theory addresses the composition dependence and the unlike-interaction dependence of the mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar, with a size ratio of three). For nonadditive hard spheres, comparison with compressibility-factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  3. Design and implementation of a telemedicine system using Bluetooth protocol and GSM/GPRS network, for real time remote patient monitoring.

    PubMed

    Jasemian, Yousef; Nielsen, Lars Arendt

    2005-01-01

    This paper introduces the design and implementation of a generic wireless, real-time, multi-purpose health care telemedicine system applying the Bluetooth protocol, the Global System for Mobile Communications (GSM) and the General Packet Radio Service (GPRS). The paper explores the factors that should be considered when evaluating different technologies for application in a telemedicine system. The design and implementation of an embedded wireless communication platform utilising the Bluetooth protocol is described, and the implementation problems and limitations are investigated. The system was tested and its general telecommunication aspects verified. The results showed that the system has (97.9 ± 1.3)% up-time, a bit error rate of 2.5 × 10⁻⁵, a 1% dropped-call rate, a 97.4% call success rate, an average transmission delay of 5 seconds, and a throughput of (3.42 ± 0.11) kbps; the system may have application in electrocardiography.

  4. Intuitive theories of information: beliefs about the value of redundancy.

    PubMed

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
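    A toy simulation (ours, with hypothetical numbers) makes the normative point concrete: querying the same source twice averages out per-query measurement noise but not the source's bias, while consulting two independent sources averages out both.

```python
import numpy as np

rng = np.random.default_rng(0)
truth, n_trials = 100.0, 100_000
bias_sd, noise_sd = 5.0, 5.0   # per-source bias and per-query noise

def query(source_bias):
    """One estimate: truth, plus the source's persistent bias, plus fresh noise."""
    return truth + source_bias + rng.normal(0.0, noise_sd)

same, different = [], []
for _ in range(n_trials):
    b1, b2 = rng.normal(0.0, bias_sd, size=2)
    same.append((query(b1) + query(b1)) / 2)       # redundant: shared bias remains
    different.append((query(b1) + query(b2)) / 2)  # nonredundant: bias averages out

rmse = lambda x: np.sqrt(np.mean((np.asarray(x) - truth) ** 2))
print(f"same source twice: RMSE {rmse(same):.2f}")       # ~6.1
print(f"two sources:       RMSE {rmse(different):.2f}")  # ~5.0
```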

  5. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfactory. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties, and the definition of an appropriate statistical model of the residual errors.

  6. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    PubMed

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
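    As a rough illustration of why the mean and median can err in opposite directions (our toy model, not the paper's data): if individual estimates are lognormally distributed with a median below the truth, the heavy right tail pulls the arithmetic mean above the truth while the median stays below it.

```python
import numpy as np

rng = np.random.default_rng(1)
truth, n_crowds, crowd_size = 500.0, 2000, 50

# Hypothetical lognormal individual estimates: median below truth,
# with a heavy right tail of large overestimates.
estimates = truth * rng.lognormal(mean=-0.1, sigma=0.5,
                                  size=(n_crowds, crowd_size))

print(f"truth: {truth:.0f}")
print(f"average crowd mean:   {estimates.mean(axis=1).mean():.0f}")        # > truth
print(f"average crowd median: {np.median(estimates, axis=1).mean():.0f}")  # < truth
```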

  7. Comparison of Online 6 Degree-of-Freedom Image Registration of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Intracranial Radiosurgery.

    PubMed

    Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong

    2017-06-01

    This study aimed to compare online 6 degree-of-freedom image registrations of the TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate the isocenter-location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken with the phantom located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and the residual errors calculated from the two systems were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, and 0.21 ± 0.18 mm, respectively. The average residual error differences in rotation, roll, and pitch were 0.40° ± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration for intracranial treatments.

  8. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  9. Animal social networks as substrate for cultural behavioural diversity.

    PubMed

    Whitehead, Hal; Lusseau, David

    2012-02-07

    We used individual-based stochastic models to examine how social structure influences the diversity of socially learned behaviour within a non-human population. For continuous behavioural variables we modelled three forms of dyadic social learning: averaging the behavioural value of the two individuals, random transfer of information from one individual to the other, and directional transfer from the individual with the highest behavioural value to the other. Learning had potential error. We also examined the transfer of categorical behaviour between individuals with random directionality and two forms of error: the adoption of a randomly chosen existing behavioural category or the innovation of a new type of behaviour. In populations without social structuring, the diversity of culturally transmitted behaviour increased with learning error and population size. When the populations were structured socially, either by making individuals members of permanent social units or by giving them overlapping ranges, behavioural diversity increased with network modularity under all scenarios, although the proportional increase varied considerably between continuous and categorical behaviour, with transmission mechanism, and with population size. Although functions of the form $e^{c_1 m^{-c_2} + c_3 \log(N)}$ predicted the mean increase in diversity with modularity ($m$) and population size ($N$), behavioural diversity could be highly unpredictable, both between simulations with the same set of parameters and within runs. Errors in social learning and social structuring generally promote behavioural diversity. Consequently, social learning may be considered to produce culture in populations whose social structure is sufficiently modular. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Feedback-tuned, noise resilient gates for encoded spin qubits

    NASA Astrophysics Data System (ADS)

    Bluhm, Hendrik

    Spin 1/2 particles form native two-level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single-qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.

  11. Colorimetric Characterization of Mobile Devices for Vision Applications.

    PubMed

    de Fez, Dolores; Luque, Maria José; García-Domene, Maria Carmen; Camps, Vicente; Piñero, David

    2016-01-01

    Available applications for vision testing on mobile devices usually do not include detailed setup instructions, sacrificing rigor to obtain portability and ease of use. In particular, colorimetric characterization processes are generally omitted. We show that different mobile devices also differ in colorimetric profile and that those differences limit the range of applications for which they are most adequate. The color reproduction characteristics of four mobile devices, two smartphones (Samsung Galaxy S4, iPhone 4s) and two tablets (Samsung Galaxy Tab 3, iPad 4), have been evaluated using two procedures: a 3D LUT (Look Up Table) and a linear model assuming primary constancy and independence of the channels. The color reproduction errors have been computed with the CIEDE2000 color difference formula. There is good constancy of primaries but large deviations from additivity. The 3D LUT characterization yields smaller reproduction errors and dispersions for the Tab 3 and iPhone 4 devices, but for the iPad 4 and S4, both models are equally good. The smallest reproduction errors occur with both Apple devices, although the iPad 4 has the highest number of outliers of all devices with both colorimetric characterizations. Even though there is good constancy of primaries, the large deviations from additivity exhibited by the devices and the larger reproduction errors make any characterization based on channel independence not recommendable. The smartphone screens show, on average, the best color reproduction performance, particularly the iPhone 4, and therefore they are more adequate for applications requiring precise color reproduction.

  12. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)

  13. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data-collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  14. Impacts of river segmentation strategies on reach-averaged product uncertainties for the upcoming Surface Water and Ocean Topography (SWOT) mission

    NASA Astrophysics Data System (ADS)

    Frasson, R. P. M.; Wei, R.; Minear, J. T.; Tuozzolo, S.; Domeneghetti, A.; Durand, M. T.

    2016-12-01

    Averaging is a powerful method to reduce measurement noise associated with remote sensing observation of water surfaces. However, when dealing with river measurements, the choice of which points are averaged may affect the quality of the products. We examine the effectiveness of three fully automated reach definition strategies: in the first, we break up reaches at regular intervals measured along the rivers' centerlines. The second strategy consists of identifying hydraulic controls by searching for inflection points on water surface profiles. The third strategy takes into consideration river planform features, breaking up reaches according to channel sinuosity. We employed the Jet Propulsion Laboratory's (JPL) SWOT hydrology simulator to generate 9 synthetic SWOT observations of the Sacramento River in California, USA, and 14 overpasses of the Po River in northern Italy. In order to create the synthetic SWOT data, the simulator requires the true water digital elevation model (DEM), which we constructed from hydraulic models of both rivers, and the terrain DEM, which we built from LiDAR data of both basins. We processed the simulated pixel clouds using JPL's RiverObs package, which traces the river centerline and estimates water surface height and river width on equally spaced nodes located along the centerline. Subsequently, we applied the three reach definition methodologies to the nodes and to the hydraulic models' outputs to generate simulated reach-averaged observations and the reach-averaged truth, respectively. Our results generally indicate that height, width, slope, and discharge errors decrease with increasing reach length, with most of the accuracy gains occurring as reach length increases to up to 15 km for both the narrow (Sacramento) and the wide (Po) rivers. The "smart" methods led to smaller slope, width, and discharge errors for the Sacramento River when compared to arbitrary reaches of similar length, whereas for the Po River all methods had comparable performance. Our results suggest that river segmentation strategies that take into consideration the hydraulic characteristics of rivers may lead to more meaningful reach boundaries and to better products, especially for narrower and more complex rivers.

  15. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
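    For context (a generic sketch, not code from the paper), the Richardson-extrapolation and grid convergence index (GCI) estimates mentioned above follow standard three-grid formulas; the input values below are hypothetical.

```python
import math

def gci_three_grid(f1, f2, f3, r=2.0, Fs=1.25):
    """Observed order, Richardson-extrapolated value, and fine-grid GCI.

    f1, f2, f3 : a time-averaged quantity on fine, medium, coarse grids
    r          : constant grid refinement ratio
    Fs         : safety factor (1.25 is typical for three-grid studies)
    """
    # Observed order of accuracy from the three solutions.
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    # Estimate of the grid-independent value.
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    # Relative discretization-error band on the fine grid.
    gci = Fs * abs((f2 - f1) / f1) / (r**p - 1.0)
    return p, f_exact, gci

p, f_ext, gci = gci_three_grid(f1=0.412, f2=0.405, f3=0.390)
print(f"order p = {p:.2f}, extrapolated = {f_ext:.4f}, GCI = {100 * gci:.1f}%")
```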

  16. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  17. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  18. Robust Tomography using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas

    2013-03-01

    Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.
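    As a sketch of the arithmetic involved (standard zeroth-order RB/IRB relations with synthetic single-qubit data, not the authors' implementation): fit the decay $F(m) = A\,p^m + B$ to the reference and interleaved sequences, then convert the decay parameters into average error rates.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    """Average sequence fidelity model F(m) = A * p**m + B."""
    return A * p**m + B

def fit_p(lengths, fidelities):
    (_, _, p), _ = curve_fit(rb_decay, lengths, fidelities,
                             p0=(0.5, 0.5, 0.99), maxfev=10_000)
    return p

# Synthetic single-qubit (d = 2) data for illustration.
m = np.array([2.0, 4, 8, 16, 32, 64, 128])
f_ref = 0.5 * 0.995**m + 0.5      # reference RB decay
f_int = 0.5 * 0.990**m + 0.5      # decay with gate C interleaved

d = 2
p_ref, p_int = fit_p(m, f_ref), fit_p(m, f_int)
r_avg_clifford = (d - 1) / d * (1 - p_ref)       # average Clifford error
r_gate = (d - 1) / d * (1 - p_int / p_ref)       # error of interleaved gate
print(f"avg Clifford error {r_avg_clifford:.2e}, gate C error {r_gate:.2e}")
```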

  19. Predictability of Solar Radiation for Photovoltaics systems over Europe: from short-term to seasonal time-scales

    NASA Astrophysics Data System (ADS)

    De Felice, Matteo; Petitta, Marcello; Ruti, Paolo

    2014-05-01

    Photovoltaic capacity is growing steadily in Europe, from almost 14 GWp in 2011 to 21.5 GWp in 2012 [1]. Accurate forecasts are needed for planning and operational purposes, together with the ability to model and predict solar variability at different time-scales. This study examines the predictability of daily surface solar radiation by comparing ECMWF operational forecasts with CM-SAF satellite measurements on the Meteosat (MSG) full-disk domain. The operational forecasts used are the IFS system up to 10 days ahead and the System4 seasonal forecast up to three months ahead. Forecasts are analysed considering the average and variance of errors, showing error maps and averages over specific domains with respect to prediction lead times. In all cases, forecasts are compared with predictions obtained using persistence and state-of-the-art time-series models. We observe a wide range of errors, with the performance of forecasts strongly affected by orography and season. The lowest errors are over southern Italy and Spain, with errors in some areas consistently under 10% up to ten days ahead during summer (JJA). Finally, we conclude the study with some insight into how to "translate" the error on solar radiation into an error on solar power production, using available production data from solar power plants. [1] EurObserver, "Baromètre Photovoltaïque, Le journal des énergies renouvelables, April 2012."

  20. Quantifying methane and nitrous oxide emissions from the UK using a dense monitoring network

    NASA Astrophysics Data System (ADS)

    Ganesan, A. L.; Manning, A. J.; Grant, A.; Young, D.; Oram, D. E.; Sturges, W. T.; Moncrieff, J. B.; O'Doherty, S.

    2015-01-01

    The UK is one of several countries around the world that have enacted legislation to reduce greenhouse gas emissions. Monitoring of emissions has been done through a detailed sectoral-level bottom-up inventory (the UK National Atmospheric Emissions Inventory, NAEI), from which national totals are submitted yearly to the United Nations Framework Convention on Climate Change. In parallel, the UK government has funded four atmospheric monitoring stations to infer emissions through top-down methods that assimilate atmospheric observations. In this study, we present top-down emissions of methane (CH4) and nitrous oxide (N2O) for the UK and Ireland over the period August 2012 to August 2014. We used a hierarchical Bayesian inverse framework to infer fluxes as well as a set of covariance parameters that describe uncertainties in the system. We inferred average UK emissions of 2.08 (1.72-2.47) Tg yr-1 CH4 and 0.105 (0.087-0.127) Tg yr-1 N2O and found our derived estimates to be generally lower than the inventory. We used sectoral distributions from the NAEI to determine whether these discrepancies can be attributed to specific source sectors. Because of the distinct distributions of the two dominant CH4 emissions sectors in the UK, agriculture and waste, we found that the inventory may be overestimated in agricultural CH4 emissions. We also found that N2O fertilizer emissions from the NAEI may be overestimated, and we derived a significant seasonal cycle in emissions. This seasonality is likely due to seasonality in fertilizer application and in environmental drivers such as temperature and rainfall, which are not reflected in the annual-resolution inventory. Through the hierarchical Bayesian inverse framework, we quantified uncertainty covariance parameters and emphasized their importance for high-resolution emissions estimation. We inferred average model errors of approximately 20 and 0.4 ppb and correlation timescales of 1.0 (0.72-1.43) and 2.6 (1.9-3.9) days for CH4 and N2O, respectively. These errors are a combination of transport model errors and errors due to unresolved emissions processes in the inventory. We found the largest CH4 errors at the Tacolneston station in eastern England, possibly owing to sporadic emissions from landfills and offshore gas in the North Sea.

  1. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag-height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition of the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence, as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlation with biological treatment variables, and for developing analyses that require a scalar-equivalent representation of refractive power.
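    The contrast can be written compactly (our simplified notation, not the paper's full construction): the proposed scalar is an average of the power function over the relevant angle, whereas the spherical equivalent collapses sphere $S$ and cylinder $C$ with a fixed weight,

\[
\bar{P} \;=\; \frac{1}{\theta_{\max}}\int_{0}^{\theta_{\max}} P(\theta)\,d\theta
\qquad\text{vs.}\qquad
\mathrm{SE} \;=\; S + \frac{C}{2},
\]

    and it is the fixed paraxial approximation behind SE that introduces the systematic bias the paper describes.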

  2. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
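    The base-flow-to-recharge conversion is simple unit arithmetic; here is a small sketch (with hypothetical numbers) under the stated approximation that average annual base flow divided by drainage area estimates recharge.

```python
# Convert average annual base flow (ft^3/s) over a drainage area (mi^2)
# into a recharge depth in inches per year. Values are hypothetical.
SECONDS_PER_YEAR = 365.25 * 86_400
SQFT_PER_SQMI = 5_280.0 ** 2

def recharge_in_per_yr(baseflow_cfs: float, drainage_mi2: float) -> float:
    ft_per_yr = baseflow_cfs * SECONDS_PER_YEAR / (drainage_mi2 * SQFT_PER_SQMI)
    return 12.0 * ft_per_yr   # feet -> inches

print(f"{recharge_in_per_yr(10.0, 50.0):.1f} in/yr")   # about 2.7 in/yr
```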

  3. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be utilized to provide message compression in a channel with a bit error rate of up to 0.10. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to offer the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of such comma-free code word assignments based on conditional probabilities of character occurrence.
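    For orientation (an illustrative sketch, not the study's code), the bits-per-character figure of merit can be reproduced with a plain Huffman code; note that, unlike the suffix/prefix comma-free codes the study favors, Huffman decoding lets a single bit error propagate until the code happens to resynchronize.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code; return it with the average bits per character."""
    freq = Counter(text)
    # Heap items: (weight, tiebreak id, {char: code-so-far}).
    heap = [(w, i, {ch: ""}) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Merge the two lightest subtrees, extending their codes by one bit.
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    code = heap[0][2]
    total_bits = sum(freq[ch] * len(code[ch]) for ch in freq)
    return code, total_bits / len(text)

_, avg_bits = huffman_code("a short stand-in for a narrative file " * 50)
print(f"average bits per character: {avg_bits:.2f}")
```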

  4. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  5. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    NASA Astrophysics Data System (ADS)

    Robichaud, A.; Ménard, R.

    2013-05-01

    We present multi-year objective analyses (OA) at high spatio-temporal resolution (15 or 21 km, every hour) for the warm season (1 May-31 October) for ground-level ozone (2002-2012) and for fine particulate matter (diameter less than 2.5 microns, PM2.5) (2004-2012). The OA used here combines the Canadian Air Quality forecast suite with US and Canadian surface air quality monitoring sites. The analysis is based on an optimal interpolation with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias-correction scheme for the PM2.5 analyses. The error statistics have been estimated using a modified version of the Hollingsworth-Lönnberg (H-L) method. Various quality controls (gross error check, sudden jump test, and background check) have been applied to the observations to remove outliers. An additional quality control is applied to check the consistency of the error statistics estimation model at each observing station and for each hour. The error statistics are further tuned "on the fly" using a χ2 (chi-square) diagnostic, a procedure which yields significantly better verification than without tuning. Successful cross-validation experiments were performed with an OA set-up using 90% of observations to build the objective analysis, with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite-derived surface measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision, with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) for ozone and 0.7 μg m-3 (micrograms per cubic meter) for PM2.5, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. In this paper, we focus on two applications: (1) presenting long-term averages of objective analysis and analysis increments as a form of summer climatology, and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. Our results show that the high percentiles of ozone and PM2.5 are both following a decreasing trend overall in North America, with the eastern part of the United States (US) showing the largest decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). The low percentiles are generally rising for ozone, which may be linked to increasing emissions from emerging countries and the resulting pollution brought by intercontinental transport. After removing the decadal trend, we demonstrate that the inter-annual fluctuations of the high percentiles are significantly correlated with temperature fluctuations for ozone and precipitation fluctuations for PM2.5. We also show a moderately significant correlation between the inter-annual fluctuations of the high percentiles of ozone and PM2.5 and economic indices such as the industrial Dow Jones and/or the US gross domestic product growth rate.

  6. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...

  7. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  8. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
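
    As a rough illustration of the integration step, the sketch below recovers a cantilever's tip displacement by twice integrating the curvature inferred from five point strain sensors under a static tip load; the geometry, stiffness, and load values are hypothetical, not the paper's experimental parameters.

        import numpy as np

        # Estimate cantilever tip displacement by numerically integrating
        # discrete surface-strain readings from point sensors.
        L_beam, c, EI, P = 1.0, 0.005, 50.0, 10.0   # length (m), half-depth (m), stiffness, tip load (N)
        x = np.linspace(0.0, L_beam, 5)             # five point-sensor locations
        strain = P * (L_beam - x) * c / EI          # surface strain under the static tip load
        kappa = strain / c                          # curvature recovered from strain

        # Double integration with clamped-end conditions w(0) = w'(0) = 0,
        # using the trapezoidal rule between sensor stations.
        slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
        w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))

        w_exact = P * L_beam**3 / (3.0 * EI)        # analytic tip deflection for comparison
        print(f"estimated tip displacement: {w[-1]*1e3:.2f} mm (exact: {w_exact*1e3:.2f} mm)")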

  9. Cognitive control adjustments in healthy older and younger adults: Conflict adaptation, the error-related negativity (ERN), and evidence of generalized decline with age.

    PubMed

    Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A

    2016-03-01

    Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN), and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. The data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Some cable suspension systems and their effects on the flexural frequencies of slender aerospace structures

    NASA Technical Reports Server (NTRS)

    Herr, R. W.

    1974-01-01

    The effects of several cable suspension configurations on the first free-free flexural frequency of uniform beams have been determined by experiment and analysis. The results of this study confirm that, in general, the larger the test vehicle, the larger the flexural frequency measurement error attributable to a given cable suspension configuration. For horizontally oriented beams representing modern aerospace vehicles of average size and flexibility, the restraining effects of all but the shortest support cables were minor. The restraining effects of support cables of moderate length attached near the base of vertically oriented vehicles were overshadowed by the effects of beam compression due to gravity.

  11. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters, due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…). BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
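
    For reference, the recursion the abstract describes reduces, for a model linearized about its current parameters, to the standard RLS update. The sketch below shows that generic form (with a forgetting factor lam), applied to a toy linear identification problem rather than an actual multilayer perceptron.

        import numpy as np

        # Generic recursive least squares (RLS) update for a model linearized
        # about its current parameters: w <- w + k * error, with the inverse
        # correlation (Hessian) estimate P maintained recursively, O(N^2) per step.
        def rls_update(w, P, x, d, lam=0.99):
            """One RLS step: x = gradient of output w.r.t. parameters, d = target."""
            e = d - w @ x                     # a priori error of the linearized model
            Px = P @ x
            k = Px / (lam + x @ Px)           # gain vector
            w = w + k * e                     # parameter update
            P = (P - np.outer(k, Px)) / lam   # inverse-correlation update
            return w, P

        # Toy usage: identify a linear map, standing in for one linearization step.
        rng = np.random.default_rng(2)
        n = 8
        w_true = rng.normal(size=n)
        w, P = np.zeros(n), np.eye(n) * 100.0
        for _ in range(200):
            x = rng.normal(size=n)
            w, P = rls_update(w, P, x, w_true @ x + rng.normal(0, 0.01))
        print(f"parameter error after 200 updates: {np.linalg.norm(w - w_true):.4f}")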

  12. Regression modeling of gas-particle partitioning of atmospheric oxidized mercury from temperature data

    NASA Astrophysics Data System (ADS)

    Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette

    2014-10-01

    Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and the Hg(II) fraction in fine particles (fPBM), against temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69-3485.30(1/T) (R2 = 0.55; root-mean-square error (RMSE) of 1.06 m3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at the sites impacted by large Hg sources, because the model had not accounted for the different mercury speciation profiles and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM values corresponding to temperature interval data. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites, as indicated by RMSEs of 0.22-0.5 m3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and observations suggest a role of aerosol composition, aerosol water content, and relative humidity in Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
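
    Evaluating the generalized model above is straightforward; the snippet below assumes the logarithm in the abstract is base 10, and the example temperatures are arbitrary.

        # Rearranging log10(1/Kp) = 12.69 - 3485.30 * (1/T) gives Kp in m^3/ug.
        def kp_from_temperature(T_kelvin):
            return 10.0 ** (3485.30 / T_kelvin - 12.69)

        for T in (263.15, 283.15, 298.15):     # illustrative temperatures (K)
            print(f"T = {T:.2f} K -> Kp = {kp_from_temperature(T):.3f} m^3/ug")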

  13. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  14. [A study of refractive errors in a primary school in Cotonou, Benin].

    PubMed

    Sounouvou, I; Tchabi, S; Doutetien, C; Sonon, F; Yehouessi, L; Bassabi, S K

    2008-10-01

    To determine the epidemiologic aspects and the degree of severity of different refractive errors in primary schoolchildren, a prospective and descriptive study was conducted from 1 December 2005 to 31 March 2006 on schoolchildren ranging from 4 to 16 years of age in a public primary school in Cotonou, Benin. Refraction was evaluated for any visual acuity lower than or equal to 0.7. The study included 1057 schoolchildren. The average age of the study population was 8.5+/-2.6 years, with a slight predominance of females (51.8%). The prevalence of refractive error was 10.6%, and astigmatism accounted for the most frequent refractive anomaly (91.9%). Myopia and hyperopia were associated with astigmatism in 29.4% and 16.1% of the cases, respectively. The age bracket from 6 to 11 years accounted for the majority of refractive errors (75.9%), with neither age nor sex being risk factors (p=0.811 and p=0.321, respectively). The average vision of the ametropic eye was 0.61, with a clear predominance of slight refractive errors (89.3%) and particularly of low-level simple astigmatism (45.5%). The relatively low prevalence of refractive error observed does not obviate the need for implementing actions to improve the ocular health of schoolchildren.

  15. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  16. Measuring Scale Errors in a Laser Tracker’s Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests

    PubMed Central

    Muralikrishnan, B.; Blackburn, C.; Sawyer, D.; Phillips, S.; Bridges, R.

    2010-01-01

    We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker in this paper. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low order harmonic scale errors can be estimated from this data and may then be used to correct the encoder’s error map to improve the tracker’s angle measurement accuracy. We have demonstrated this for the second order harmonic in this paper. It is important to compensate for even order harmonics as their influence cannot be removed by averaging front face and back face measurements whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing of this paper. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ± 65 μm before correcting the error map. They reduced to less than ± 25 μm after correcting the error map for second order scale errors. Newer trackers from the same manufacturers did not show this error. An older tracker from a third manufacturer also did not show this error. PMID:27134789
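
    The estimation step lends itself to a simple least-squares harmonic fit: measure a fixed length from evenly spaced azimuths, then regress the length errors on cos(2θ) and sin(2θ). The sketch below does this on synthetic data; the 40 μm amplitude and noise level are invented for illustration and are not the paper's values.

        import numpy as np

        # Fit a second-order harmonic to length errors observed from several
        # azimuthal positions (synthetic data standing in for tracker readings).
        rng = np.random.default_rng(3)
        theta = np.deg2rad(np.arange(0, 360, 20))            # azimuthal positions
        errors = 40e-6 * np.cos(2 * theta + 0.7) + rng.normal(0, 5e-6, theta.size)

        A = np.column_stack([np.cos(2 * theta), np.sin(2 * theta), np.ones_like(theta)])
        coef, *_ = np.linalg.lstsq(A, errors, rcond=None)    # least-squares fit
        amp = np.hypot(coef[0], coef[1])
        print(f"fitted 2nd-harmonic amplitude: {amp*1e6:.1f} um")   # ~40 um expected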

  17. A retrospective review of medical errors adjudicated in court between 2002 and 2012 in Spain.

    PubMed

    Giraldo, Priscila; Sato, Luke; Sala, María; Comas, Merce; Dywer, Kathy; Castells, Xavier

    2016-02-01

    This paper describes court verdicts involving injury-producing medical errors in Spain, through a descriptive analysis of 1041 closed court verdicts from Spain between January 2002 and December 2012. It was determined whether a medical error had occurred, and among those with medical error (n = 270), the characteristics and results of litigation were analyzed. Data on litigation were obtained from the Thomson Reuters Aranzadi Westlaw databases. All verdicts involving the health system were reviewed and classified according to the presence of medical error. Among those, contributory factors, the medical specialty involved, health impact (death, disability and severity), and results of litigation (resolution, time to verdict and economic compensation) were described. Medical errors were involved in 25.9% of court verdicts. The cause of medical error was a diagnosis-related problem in 25.1% and surgical treatment in 22.2%, and Obstetrics-Gynecology was the most frequently involved specialty (21%). Most errors were of high severity (59.4%), and nearly one-third (32%) caused death. The average interval between the occurrence of the error and the verdict was 7.8 years. The average indemnity payment was €239 505.24; the highest was in Psychiatry (€7 585 075.86) and the lowest in Emergency Medicine (€69 871.19). This study indicates that in Spain medical errors are common among verdicts involving the health system, most of them causing high-severity adverse outcomes. The interval between the medical error and the verdict is excessive, and there is a wide range of economic compensation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  18. Prevalence of myopia in a group of Hong Kong microscopists.

    PubMed

    Ting, Patrick W K; Lam, Carly S Y; Edwards, Marion H; Schmid, Katrina L

    2004-02-01

    To study the prevalence and magnitude of myopia in a group of Hong Kong Chinese microscopists and compare it with that observed in microscopists working in the United Kingdom. Forty-seven microscopists (36 women and 11 men) with a median age of 31 years and working in hospital laboratories throughout Hong Kong were recruited to the study. Information about past refractive corrections, microscopy work, and visual symptoms associated with microscope use were collected. All subjects had a comprehensive eye examination at The Hong Kong Polytechnic University Optometry Clinic, including measures of refractive error (both noncycloplegic and cycloplegic), binocular vision functions, and axial length. The prevalence of myopia in this group of microscopists was 87%, the mean (+/- SD) refractive error was -4.45 +/- 3.03 D and mean axial length was 25.13 +/- 1.52 mm. No correlation was found between refractive error and years spent working as a microscopist or number of hours per day spent performing microscopy. Subjects reporting myopia progression (N = 22) did not differ from the refractively stable group (N = 19) in terms of their microscopy working history, working hours, tonic accommodation level, or near phoria. However, the AC/A ratio of the progressing group was significantly greater than that of the stable group (4.59 delta/D cf. 3.34 delta/D). The myopia prevalence of Hong Kong Chinese microscopists was higher than that of microscopists in the United Kingdom (87% cf. 71%), as well as the Hong Kong general population (87% cf. 70%). The average amount of myopia was also higher in the Hong Kong Chinese microscopists than the Hong Kong general population (-4.45 D cf. -3.00 D). We have confirmed that the microscopy task may slightly exacerbate myopia development in Chinese people.

  19. Microseismicity Studies in Northern Baja California: General Results.

    NASA Astrophysics Data System (ADS)

    Frez, J.; Acosta, J.; Gonzalez, J.; Nava, F.; Suarez, F.

    2005-12-01

    Between 1997 and 2003, we installed local seismological networks in northern Baja California with digital, three-component Reftek instruments and 100-125 Hz sampling. Each local network had from 15 to 40 stations over an area of approximately 50 x 50 km2. Surveys have been carried out for the Mexicali seismic zone and the Ojos Negros region (1997), the San Miguel fault system (1998), the Pacific coast between Tijuana and Ensenada (1999), the Agua Blanca and Vallecito fault systems (2001), the Sierra Juarez fault system (2002), and other smaller areas (2001 and 2003). These detailed microseismicity surveys are complemented with seismograms and arrival times from regional networks (RESNOM and SCSN). Selected locations presented here have errors (formal errors from HYPO71) of less than 1 km. Phase reading errors are estimated at less than or about 0.03 s. Most of the activity is located between mapped fault traces, along alignments which do not follow the fault traces, and where tectonic alignments intersect. The results suggest an orthogonal pattern at various scales. Depth distributions generally show two maxima: a secondary maximum at about 5 km and the main maximum at 12-17 km. The Agua Blanca fault is essentially inactive for earthquakes with ML > 1.7. Most focal mechanisms are strike-slip with a minor normal component; the others are dominantly normal; the resulting pattern indicates a regional extensional regime for all the regions, with an average N-S azimuth for the P-axes. Fracture directions, obtained from directivity measurements, show orthogonal directions, one of which approximately coincides with the azimuth of mapped fault traces. These results indicate that the Pacific-North American interplate motion is not being entirely accommodated by the NW-trending faults, but rather is creating a complex system of conjugate faults.

  20. After the Medication Error: Recent Nursing Graduates' Reflections on Adequacy of Education.

    PubMed

    Treiber, Linda A; Jones, Jackie H

    2018-05-01

    The purpose of this study was to better understand individual- and system-level factors surrounding making a medication error from the perspective of recent Bachelor of Science in Nursing graduates. The online mixed-methods survey included items on perceptions of the adequacy of preparatory nursing education, contributory variables, emotional responses, and treatment by the employer following the error. Of the 168 respondents, 55% had made a medication error. Errors resulted from inexperience, rushing, technology, staffing, and patient acuity. Twenty-four percent did not report their errors. Key themes for improving education included more practice in varied clinical areas, intensive pharmacological preparation, practical instruction in functioning within the health care environment, and coping after making medication errors. Errors generally caused emotional distress in the error maker. Overall, perceived treatment after the error reflected supportive environments, where nurses were generally treated with respect, fairness, and understanding. Opportunities for nursing education include second victim awareness and reinforcing professional practice standards. [J Nurs Educ. 2018;57(5):275-280.]. Copyright 2018, SLACK Incorporated.

  1. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

    Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data, as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of each FM). The resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs resulted in significant PTV dose errors of up to 4.3%, as well as significant distance-to-agreement errors of close to 1 mm between PTV and OAR. Of the 129 survey respondents, the vast majority used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was lower than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, in some cases with a 50% difference in expected dose delivery error amongst respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0-50%), compared to the 2% error seen in measurement. Analysis of the survey rankings, the treatment planning studies, and the quantitative value comparison will be presented. Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and allow the accuracy of FMEA results to be verified against highly variable subjective scores.

  2. Quality of Impressions and Work Authorizations Submitted by Dental Students Supervised by Prosthodontists and General Dentists.

    PubMed

    Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M

    2016-10-01

    Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.

  3. The effect of sugar and processed food imports on the prevalence of overweight and obesity in 172 countries.

    PubMed

    Lin, Tracy Kuo; Teymourian, Yasmin; Tursini, Maitri Shila

    2018-04-14

    Studies find that economic, political, and social globalization - as well as trade liberalization specifically - influence the prevalence of overweight and obesity in countries through increasing the availability and affordability of unhealthful food. However, what are the mechanisms that connect globalization, trade liberalization, and rising average body mass index (BMI)? We suggest that the various sub-components of globalization interact, leading individuals in countries that experience higher levels of globalization to prefer, import, and consume more imported sugar and processed food products than individuals in countries that experience lower levels of globalization. This study codes the amount of sugar and processed food imports in 172 countries from 1995 to 2010 using the United Nations Comtrade dataset. We employ country-specific fixed effects (FE) models, with robust standard errors, to examine the relationship between sugar and processed food imports, globalization, and average BMI. To highlight further the relationship between sugar and processed food imports and average BMI, we employ a synthetic control method to calculate a counterfactual average BMI in Fiji. We find that sugar and processed food imports are part of the explanation for increasing average BMI in countries; after controlling for globalization and general imports and exports, sugar and processed food imports have a statistically and substantively significant effect in increasing average BMI. In the case of Fiji, the increased prevalence of obesity is associated with trade agreements and increased imports of sugar and processed food. The counterfactual estimates suggest that sugar and processed food imports are associated with a 0.5 increase in average BMI in Fiji.

  4. Astrometric observations of visual binaries using 26-inch refractor during 2007-2014 at Pulkovo

    NASA Astrophysics Data System (ADS)

    Izmailov, I. S.; Roshchina, E. A.

    2016-04-01

    We present the results of 15184 astrometric observations of 322 visual binaries carried out in 2007-2014 at Pulkovo observatory. In 2007, the 26-inch refractor (F = 10413 mm, D = 65 cm) was equipped with the CCD camera FLI ProLine 09000 (FOV 12' × 12', 3056 × 3056 pixels, 0.238 arcsec pixel-1). Telescope automation and the installation of a weather monitoring system allowed us to increase the number of observations significantly. Visual binary and multiple systems with an angular distance in the interval 1."1-78."6, with 7."3 on average, were included in the observing program. The results were studied in detail for systematic errors using calibration star pairs. No dependence of errors on temperature, pressure, or hour angle was detected. The dependence of the 26-inch refractor's scale on temperature was taken into account in the calculations. The accuracy of measurement of a single CCD image is in the range of 0."0005 to 0."289, 0."021 on average, along both coordinates. Mean errors in annual average values of angular distance and position angle are equal to 0."005 and 0.°04, respectively. The results are available at http://izmccd.puldb.ru/vds.htm and in the Strasbourg Astronomical Data Center (CDS). In the catalog, the separations and position angles per night of observation and as annual averages are presented, together with errors for all values and standard deviations of a single observation. We also present the results of a comparison of 50 pairs of stars with known orbital solutions with their ephemerides.

  5. Measurement of vertebral rotation: Perdriolle versus Raimondi.

    PubMed

    Weiss, H R

    1995-01-01

    The measurement of vertebral rotation according to Perdriolle is widely used in French-speaking and Anglo-American countries. Even with this measurement technique there may be a relatively high estimation error because of the rather coarse grading in steps of 5 degrees. The measurement according to Raimondi seems to be easier to use and is more accurate, with 2-degree steps. The purpose of our study was to determine the technical error of both measuring methods. The apex vertebrae of 40 curves on 20 anteroposterior (AP) radiographs were measured using the Perdriolle torsion meter and the Regolo Raimondi. Interrater and intrarater reliability were computed. The thoracic Cobb angle was 43 degrees, the lumbar Cobb angle 36 degrees. The average rotation according to Perdriolle was 19.1 degrees thoracic (SD 11.14) and 12.7 degrees lumbar (11.21). Measurement of vertebral rotation according to Raimondi showed an average rotation of 20.25 degrees in the thoracic region (11.40) and 13.4 degrees lumbar (10.92). The intrarater reliability was r = 0.991 (Perdriolle) and r = 0.997 (Raimondi). The average intrarater error was 1.025 degrees for the Perdriolle measurement and 0.4 degrees for the Raimondi measurement. The interrater error was on average 3.112 degrees for the Perdriolle measurement and 3.630 degrees for the Raimondi measurement. This shows that both methods are useful tools for the follow-up of vertebral rotation as projected on standard X-rays for the experienced clinician. The Raimondi ruler is easier to use and is slightly more reliable.

  6. Forces associated with pneumatic power screwdriver operation: statics and dynamics.

    PubMed

    Lin, Jia-Hua; Radwin, Robert G; Fronczak, Frank J; Richard, Terry G

    2003-10-10

    The statics and dynamics of pneumatic power screwdriver operation were investigated in the context of predicting forces acting against the human operator. A static force model is described in the paper, based on tool geometry, mass, orientation in space, feed force, torque build-up, and stall torque. Three common power hand tool shapes are considered: pistol grip, right angle, and in-line. The static model estimates the handle force needed to support a power nutrunner when it acts against the tightened fastener with a constant torque. A system of equations for the static force and moment equilibrium conditions is established, and the resultant handle force (resolved in orthogonal directions) is calculated in matrix form. A dynamic model is formulated to describe pneumatic motor torque build-up characteristics dependent on threaded fastener joint hardness. Six pneumatic tools were tested to validate the deterministic model. The average torque prediction error was 6.6% (SD = 5.4%) and the average handle force prediction error was 6.7% (SD = 6.4%) for a medium-soft threaded fastener joint. The average torque prediction error was 5.2% (SD = 5.3%) and the average handle force prediction error was 3.6% (SD = 3.2%) for a hard threaded fastener joint. Use of these equations for estimating handle forces based on passive mechanical elements representing the human operator is also described. Together, these models should be useful for considering tool handle force in the selection and design of power screwdrivers, particularly for minimizing handle forces in the prevention of injuries and work-related musculoskeletal disorders.
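
    In the same spirit as the paper's matrix formulation (though not its actual equations), a planar sketch of the equilibrium solve might look as follows, with the grip represented by an axial force and two normal forces; all geometry and load values are hypothetical.

        import numpy as np

        # Planar static equilibrium, [A]{f} = {b}: solve for the grip's axial
        # force Fx and two normal forces F1, F2 (front and rear of the hand)
        # that balance tool weight, feed force, and the torque reaction.
        m, g = 1.2, 9.81                      # tool mass (kg), gravity
        feed = np.array([15.0, 0.0])          # feed force along the bit axis (N)
        torque_reaction = 4.0                 # torque build-up reaction (N*m)
        x_bit, x_cg, x1, x2 = 0.20, 0.08, -0.02, -0.10   # positions from grip origin (m)

        weight_y = -m * g
        # Unknowns f = [Fx, F1, F2]; rows: sum Fx, sum Fy, sum of moments about origin.
        A = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [0.0,  x1,  x2]])
        b = np.array([-feed[0],
                      -feed[1] - weight_y,
                      -(x_bit * feed[1] + x_cg * weight_y + torque_reaction)])
        f = np.linalg.solve(A, b)
        print(f"axial force: {f[0]:.1f} N, grip normal forces: {f[1]:.1f}, {f[2]:.1f} N")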

  7. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    PubMed

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
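
    The interpolation-style estimate the authors critique amounts to dividing remaining battery capacity by an averaged current drain. A bare-bones version is sketched below with hypothetical numbers, keeping in mind that, as the abstract notes, each input carries errors that compound across iterations of the estimation.

        # Naive longevity estimate: remaining capacity / averaged current drain.
        # All values are hypothetical; real IPG estimates must account for
        # device variation, voltage dependence, impedance fluctuations, etc.
        def battery_longevity_years(remaining_capacity_ah, avg_current_ma):
            hours = remaining_capacity_ah * 1000.0 / avg_current_ma
            return hours / (24.0 * 365.25)

        # e.g., 1.6 Ah remaining at an averaged drain equivalent to 25 uA:
        print(f"{battery_longevity_years(1.6, 0.025):.1f} years")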

  8. Hydrologic Design in the Anthropocene

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.; Farmer, W. H.; Read, L.

    2014-12-01

    In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWM), which can account for the impacts of changes in land use, climate, and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations that ignore model error produce output that cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles, and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
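
    The contrast the authors draw can be made concrete with the textbook reliability identity: under stationarity, the chance of no exceedance of a T-year event over an n-year horizon is (1 - 1/T)^n, while under nonstationarity the annual exceedance probabilities p_i vary and reliability becomes the product of (1 - p_i). The upward drift in the sketch below is hypothetical, purely to illustrate the effect.

        import numpy as np

        # Reliability of a 100-year design over a 50-year planning horizon.
        T, n = 100, 50
        R_stationary = (1 - 1 / T) ** n                 # constant annual risk
        p = np.linspace(1 / T, 2 / T, n)                # exceedance probability drifting upward
        R_nonstationary = np.prod(1 - p)                # product of annual non-exceedances
        print(f"stationary reliability:    {R_stationary:.3f}")   # ~0.605
        print(f"nonstationary reliability: {R_nonstationary:.3f}")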

  9. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they take the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus using only the dichotomisations of the continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated that the method may have poorer statistical properties when the sample size is small. Here we investigate its small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage, and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. For the difference in response probabilities, the two methods exhibit similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections therefore provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts to improve the quality of evidence generated from rare disease trials rather than replace them.
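
    The abstract's sample-size equivalence follows from the usual 1/sqrt(n) scaling of confidence interval width, so a width ratio w2/w1 implies a sample-size ratio of (w2/w1)^2. A quick check:

        # A 17-18% narrower interval needs about (1 - 0.175)^2 ~ 0.68 of the
        # sample, i.e. a roughly 32% smaller trial for the same precision.
        for reduction in (0.17, 0.175, 0.18):
            print(f"{reduction:.1%} narrower CI -> {1 - (1 - reduction)**2:.1%} smaller sample")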

  10. Evaluation of ship-based sediment flux measurements by ADCPs in tidal flows

    NASA Astrophysics Data System (ADS)

    Becker, Marius; Maushake, Christian; Grünler, Steffen; Winter, Christian

    2017-04-01

    In the past decades, acoustic backscatter calibration has developed into a frequently applied technique to measure fluxes of suspended sediments in rivers and estuaries. Data are mainly acquired using single-frequency profiling devices, such as ADCPs. In this case, variations of acoustic particle properties may have a significant impact on the calibration with respect to suspended sediment concentration, but the associated effects are rarely considered. Further challenges for flux determination arise from incomplete vertical and lateral coverage of the cross-section and from the small ratio of the residual transport to the tidal transport, depending on the tidal prism. We analyzed four sets of 13-h cross-sectional ADCP data, collected at different locations in the range of the turbidity zone of the Weser estuary, North Sea, Germany. Vertical LISST, OBS and CTD measurements were taken every hour. During the calibration, sediment absorption was taken into account. First, acoustic properties were estimated using LISST particle size distributions. Due to the tidal excursion and displacement of the turbidity zone, the acoustic properties of particles changed during the tidal cycle at all locations. Applying empirical functions, the lowest backscattering cross-section and highest sediment absorption coefficient were found in the center of the turbidity zone. Outside the tidally averaged location of the turbidity zone, changes of acoustic parameters were caused mainly by advection. In the turbidity zone, these properties were also affected by settling and entrainment, inducing vertical differences and systematic errors in concentration. In general, due to the iterative correction of sediment absorption along the acoustic path, local errors in concentration propagate and amplify exponentially. Based on reference concentrations obtained from water samples and OBS data, we quantified these errors and their effect on cross-sectionally averaged concentration and sediment flux. We found that errors are effectively decreased by applying calibration parameters interpolated in time and by an optimization of the sediment absorption coefficient. We further discuss practical aspects of residual flux determination in tidal environments and of measuring strategies in relation to site-specific tidal dynamics.

  11. Online quantitative analysis of multispectral images of human body tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.

    2013-08-01

    A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.

  12. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.

    2014-09-12

    Generally, solid waste handling and management are performed by a municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that an ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  13. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by a municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that an ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
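
    A sketch of fitting such a model with a standard library is given below, using statsmodels' ARIMA on a synthetic monthly series (the paper's Malaysian data are not reproduced here); the trend and seasonality are invented for illustration.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Fit an ARIMA(6,1,0) model, as in the abstract, to synthetic monthly
        # waste-generation data and forecast the next year's service plan.
        rng = np.random.default_rng(4)
        months = 120
        trend = np.linspace(100.0, 140.0, months)           # growing generation rate
        series = (trend + 5.0 * np.sin(np.arange(months) * 2 * np.pi / 12)
                  + rng.normal(0, 2, months))

        res = ARIMA(series, order=(6, 1, 0)).fit()
        forecast = res.forecast(steps=12)                   # 12-month-ahead forecast
        rmse_in_sample = np.sqrt(np.mean(res.resid[1:] ** 2))
        print(f"in-sample RMSE: {rmse_in_sample:.3f}")
        print(f"first 3 forecast months: {np.round(forecast[:3], 1)}")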

  14. Improving Reanalyses Using TRMM and SSM/I-Derived Precipitation and Total Precipitable Water Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.

    1999-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of rainfall and total precipitable water (TPW) observations from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve these fields in reanalyses. The DAO has developed a "1+1"D procedure to assimilate 6-hr averaged rainfall and TPW into the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS. The "1+1" designation refers to one spatial dimension plus one temporal dimension. The scheme minimizes the least-square differences between the satellite-retrieved rain rates and those produced by the column model over the 6-hr analysis window. The control variables are analysis increments of moisture within the Incremental Analysis Update (IAU) framework of the GEOS DAS. This 1+1D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but differs in its choice of the control variable. Instead of estimating the initial condition at the beginning of the assimilation cycle, it estimates the constant IAU forcing applied over a 6-hr assimilation cycle. In doing so, it imposes the forecast model as a weak constraint in a manner similar to variational continuous assimilation techniques. We present results from an experiment in which the observed rain rate and TPW are assumed to be "perfect". They show that assimilating the TMI and SSM/I-derived surface precipitation and TPW observations improves not only the precipitation and moisture fields but also key climate parameters directly linked to convective activity, such as clouds, the outgoing longwave radiation, and the large-scale circulation in the tropics. In particular, assimilating these data types reduces the state-dependent systematic errors in the assimilated products. The improved analysis also leads to a better short-range forecast, but the impact is modest compared with the improvements in the time-averaged fields. These results suggest that, in the presence of biases and other errors of the forecast model, it is possible to improve the time-averaged "climate content" of the assimilated data without comparable improvements in short-range forecast skill. The results of this experiment provide a useful benchmark for evaluating error covariance models for optimal use of these data types.

  15. Performance Evaluation of Five Turbidity Sensors in Three Primary Standards

    USGS Publications Warehouse

    Snazelle, Teri T.

    2015-10-28

    Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO-AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001-4000 NTU for the SOLITAX and 0.1-3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy, with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent over its operating range, which was limited to 0.01-1600 NTU at the time of this report. Test results indicated an average percent error in the three standards of 19.81 percent for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
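
    The report's signed (not absolute) percent-error definition translates directly into code; the example readings below are invented.

        # Signed percent error: (measured - standard) / standard * 100.
        def percent_error(measured_ntu, standard_ntu):
            return 100.0 * (measured_ntu - standard_ntu) / standard_ntu

        print(percent_error(41.5, 40.0))   #  3.75  (sensor reads high)
        print(percent_error(38.0, 40.0))   # -5.0   (sensor reads low)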

  16. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  17. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.
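
    As a point of reference for the simplest closure mentioned above, the gradient diffusion model computes the turbulent scalar flux as -(nu_t / Sc_t) * dc/dy. The sketch below evaluates it on synthetic mean profiles standing in for time-averaged DNS fields; the profiles and constants are illustrative only.

        import numpy as np

        # Gradient diffusion closure for the wall-normal turbulent scalar flux.
        y = np.linspace(0.0, 1.0, 200)             # wall-normal coordinate
        c_mean = 1.0 - np.tanh(3.0 * y)            # mean scalar, decaying from the wall
        nu_t = 0.05 * y * (1.0 - y)                # illustrative eddy-viscosity profile
        Sc_t = 0.7                                 # turbulent Schmidt number

        flux_model = -(nu_t / Sc_t) * np.gradient(c_mean, y)
        print(f"peak modeled flux magnitude: {np.abs(flux_model).max():.4f}")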

  18. An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes

    PubMed Central

    Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin

    2014-01-01

    In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to yield an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With a reasonable setup, the average position error is under 0.3 cm, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be controlled to less than 3.5°. PMID:24915179

  19. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Significantly, a residual, or irreducible, error probability is shown to exist that is due entirely to the timing system and cannot be overcome by the data channel.

  20. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which comprises over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high-volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect its system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of these data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single-day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
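
    A minimal sketch of the modeling approach described above, a log-space rating curve fit jointly with ARMA(1,2) errors, can be written with statsmodels' SARIMAX class, which supports regression with ARMA errors. The data here are synthetic stand-ins for the flow and turbidity records.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: daily streamflow Q and turbidity T (synthetic here;
# the study used USGS gauge flows and Esopus Creek turbidity records).
rng = np.random.default_rng(1)
n = 500
logQ = np.cumsum(rng.normal(0, 0.1, n)) + 5.0
noise = sm.tsa.arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3, 0.2],
                                    nsample=n, scale=0.2)
logT = -3.0 + 1.2 * logQ + noise        # rating curve + ARMA(1,2) errors

# Rating curve (linear regression in log space) with ARMA(1,2) errors
exog = sm.add_constant(logQ)
model = sm.tsa.statespace.SARIMAX(logT, exog=exog, order=(1, 0, 2))
result = model.fit(disp=False)
print(result.params)

# 30-day-ahead forecast, given a flow forecast for the horizon
future_logQ = np.full(30, logQ[-1])
future_exog = sm.add_constant(future_logQ, has_constant='add')
forecast = result.forecast(steps=30, exog=future_exog)
print(forecast[:5])
```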

  1. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which generalize the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
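
    The one-dimensional MFDMA procedure sketched below follows the construction described above: build the profile, subtract a moving average positioned by θ, compute qth-order fluctuation functions over non-overlapping segments, and read h(q) off log-log slopes, with τ(q) = qh(q) − 1. This is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def mfdma_1d(x, scales, qs, theta=0.0):
    """One-dimensional MFDMA sketch (theta=0: backward window,
    0.5: centered, 1: forward)."""
    y = np.cumsum(x - np.mean(x))            # profile of the series
    Fq = np.empty((len(scales), len(qs)))
    for i, n in enumerate(scales):
        kernel = np.ones(n) / n
        ma = np.convolve(y, kernel, mode='valid')   # moving average
        offset = int((n - 1) * (1 - theta))         # window position
        resid = y[offset:offset + len(ma)] - ma     # detrended residual
        m = len(resid) // n                         # segments of size n
        seg = resid[:m * n].reshape(m, n)
        F2 = np.mean(seg**2, axis=1)                # segment variances
        for j, q in enumerate(qs):
            if abs(q) < 1e-8:                       # q=0 limit (log avg)
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[i, j] = np.mean(F2**(q / 2))**(1 / q)
    # h(q) from log-log slopes; tau(q) = q*h(q) - 1
    hq = np.array([np.polyfit(np.log(scales), np.log(Fq[:, j]), 1)[0]
                   for j in range(len(qs))])
    tauq = np.array([q * h - 1 for q, h in zip(qs, hq)])
    return hq, tauq

x = np.random.default_rng(2).normal(size=2**14)
hq, tauq = mfdma_1d(x, scales=[16, 32, 64, 128, 256], qs=[-2, 0, 2, 4])
print(hq)  # ~0.5 across q for uncorrelated noise
```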

  2. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method, kernel density estimation, applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
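
    A minimal sketch of the core idea, with synthetic data and an assumed mapping of "one-in-N" to a daily non-exceedance probability of 1/(N × days per winter): fit a Gaussian kernel density estimate to daily average temperatures and invert its CDF at that probability.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch: one-in-N low temperature threshold from daily average
# (wind-adjusted) winter temperatures via kernel density estimation.
# The temperatures below are synthetic stand-ins.
rng = np.random.default_rng(3)
temps = rng.normal(-5.0, 8.0, size=30 * 120)   # ~30 winters of daily data

N = 20                           # one-in-20-winter event
days_per_winter = 120            # assumed winter length
p = 1.0 / (N * days_per_winter)  # assumed daily non-exceedance probability

kde = gaussian_kde(temps)
grid = np.linspace(temps.min() - 20, temps.max(), 4000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                   # normalized CDF of the KDE on the grid
threshold = grid[np.searchsorted(cdf, p)]
print(f"one-in-{N} low temperature threshold: {threshold:.1f}")
```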

  3. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency would allow a reduction in standard error of about 1 percent with the present budget. The standard error could be reduced to about 8 percent if lost records could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  4. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
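
    As a baseline for the methods above, a minimal inverse probability weighting sketch on synthetic data is shown below; it also illustrates the paper's observation that, under a classical additive error model, a mismeasured continuous outcome leaves the IPW estimator approximately unbiased. The data-generating values are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of inverse probability weighting (IPW) for the average
# treatment effect (ATE); the paper's corrections for outcome
# measurement error would be layered on top of this baseline estimator.
rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 2))                      # confounders
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
T = rng.binomial(1, p_treat)                     # treatment assignment
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)       # true ATE = 2.0
Y_star = Y + rng.normal(0, 0.5, size=n)          # additive outcome error

e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]  # propensity
ate_ipw = np.mean(T * Y_star / e - (1 - T) * Y_star / (1 - e))
print(f"IPW ATE with mismeasured continuous outcome: {ate_ipw:.3f}")
# Stays close to 2.0, consistent with the paper's result for additive
# error on a continuous outcome.
```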

  5. Reliability and Validity Assessment of a Linear Position Transducer

    PubMed Central

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training. Key points: This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300

  6. Application of Molecular Dynamics Simulations in Molecular Property Prediction II: Diffusion Coefficient

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested for the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 ×10-5 cm2/s, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R2 = 0.784), 4 proteins in aqueous solutions (R2 = 0.996) and 9 organic compounds in non-aqueous solutions (R2 = 0.834). The temperature-dependent behaviors of three solvents, namely TIP3P water, dimethyl sulfoxide (DMSO) and cyclohexane, have been studied. The major MD settings, such as the sizes of the simulation boxes and whether the coordinates of MD snapshots are wrapped into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement (MSD) collected from multiple short MD simulations is efficient for predicting the diffusion coefficients of solutes at infinite dilution. PMID:21953689
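
    A sketch of the multi-run MSD strategy described above, using synthetic Brownian trajectories in place of MD center-of-mass data: average the mean square displacement over several short runs and recover D from the slope of the Einstein relation MSD(t) = 6Dt.

```python
import numpy as np

# Average the MSD over many short trajectories, then get D from the
# 3-D Einstein relation MSD(t) = 6*D*t. Trajectories are synthetic
# Brownian paths standing in for MD center-of-mass motion.
rng = np.random.default_rng(5)
D_true = 0.2e-5            # cm^2/s, typical order for small solutes
dt = 1e-12                 # s per step (illustrative)
n_runs, n_steps = 20, 2000

# Brownian displacements with variance 2*D*dt per dimension (in cm)
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_runs, n_steps, 3))
paths = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(paths**2, axis=2), axis=0)   # average over runs
D_est = np.polyfit(t, msd, 1)[0] / 6.0            # slope / 6
print(f"D (estimated) = {D_est:.3e} cm^2/s")
```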

  7. Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.

    2012-01-01

The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data were collected from a mechanically back-pressured constant-area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a six-duct-height variation in shock train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely matched the measured surface pressures. This observation is somewhat surprising, given that stress-limiting models have generally been developed to better predict shock-separated flows. All of the models considered also failed to properly predict the shape and extent of the separated flow region caused by the shock boundary layer interactions. However, the best performing models were able to predict the isolator shock train length (an important metric for isolator operability margin) to within one isolator duct height.

  8. Generalized Fisher matrices

    NASA Astrophysics Data System (ADS)

    Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.

    2014-12-01

The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the 'ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
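
    In the spirit of the approach above, the sketch below builds a Fisher matrix for a straight-line model when both coordinates carry Gaussian errors, using the first-order device of folding the abscissa error into an effective ordinate variance. The fiducial values are illustrative, not taken from the paper.

```python
import numpy as np

# Fisher matrix for y = a*x + b with Gaussian errors in both x and y.
# To first order (small x-errors, as assumed in the text), the x-error
# propagates into an effective ordinate variance:
#   sigma_eff^2 = sigma_y^2 + a^2 * sigma_x^2
a_fid, b_fid = 1.5, 0.3            # fiducial parameter values
x = np.linspace(0, 10, 40)
sigma_y = 0.5 * np.ones_like(x)
sigma_x = 0.2 * np.ones_like(x)

sigma_eff2 = sigma_y**2 + (a_fid * sigma_x)**2

# Derivatives of the model with respect to (a, b)
J = np.stack([x, np.ones_like(x)], axis=1)   # dmu/da, dmu/db
F = J.T @ (J / sigma_eff2[:, None])          # Fisher matrix
cov = np.linalg.inv(F)                       # forecast parameter covariance
print("sigma_a =", np.sqrt(cov[0, 0]), "sigma_b =", np.sqrt(cov[1, 1]))
```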

  9. Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.

    PubMed

    Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim

    2007-09-01

The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing careers, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.

  10. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage the application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid-scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
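
    A schematic of the online correction described above, with synthetic arrays standing in for GFS fields: the time-mean analysis increment divided by the 6-hr window estimates a correction tendency, which is then added as a forcing term in the model step.

```python
import numpy as np

# Sketch of the online bias-correction idea. Increments are synthetic
# stand-ins for (analysis - 6-hr forecast) fields from the DA cycle.
rng = np.random.default_rng(6)
n_cycles, nlat, nlon = 400, 18, 36
increments = 0.3 + rng.normal(0, 1.0, (n_cycles, nlat, nlon))  # K/cycle

six_hr = 6 * 3600.0
# Time-mean increment per second, assuming linear short-term error growth
correction_tendency = increments.mean(axis=0) / six_hr          # K/s

def step(state, physics_tendency, dt):
    """One model step with the empirical online correction added
    as a forcing term in the tendency equation."""
    return state + dt * (physics_tendency + correction_tendency)

state = np.zeros((nlat, nlon))
state = step(state, physics_tendency=np.zeros((nlat, nlon)), dt=600.0)
print(state.mean())
```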

  11. Error rate of automated calculation for wound surface area using a digital photography.

    PubMed

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

Although measuring wound size using digital photography is a quick and simple method to evaluate skin wounds, its accuracy has not been fully validated. The aim was to investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of variously sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (8.1303±4.8236 vs 3.9431±2.9772). However, for wounds less than 3 cm in diameter, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average wound-area values obtained by smartphone and DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values provide a relatively useful index of wound healing with an acceptable error rate.

  12. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle image velocimetry measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  13. Statistics of the radiated field of a space-to-earth microwave power transfer system

    NASA Technical Reports Server (NTRS)

    Stevens, G. H.; Leininger, G.

    1976-01-01

Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference for phase control was also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. With these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).

  14. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  15. A Quantum Theoretical Explanation for Probability Judgment Errors

    ERIC Educational Resources Information Center

    Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.

    2011-01-01

    A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…

  16. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    NASA Astrophysics Data System (ADS)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in western Oregon, which included six days of AirSWOT flights over a 75 km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacy of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially averaged time series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested into a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90 m).

  17. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.

    2005-01-01

We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  18. Measurements of aperture averaging on bit-error-rate

    NASA Astrophysics Data System (ADS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-08-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  19. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of a DEM is largely a function of the accuracy of individual survey points, the field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey-strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. DEMs were then produced using five common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest average error was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging were used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy. Strong relationships were also found between local surface topographic variation (defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors, with much greater errors found at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.

  20. Pulse-echo sound speed estimation using second order speckle statistics

    NASA Astrophysics Data System (ADS)

    Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.

    2012-10-01

This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix of echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue-mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue-mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.

  1. Internal quality control: planning and implementation strategies.

    PubMed

    Westgard, James O

    2003-11-01

    The first essential in setting up internal quality control (IQC) of a test procedure in the clinical laboratory is to select the proper IQC procedure to implement, i.e. choosing the statistical criteria or control rules, and the number of control measurements, according to the quality required for the test and the observed performance of the method. Then the right IQC procedure must be properly implemented. This review focuses on strategies for planning and implementing IQC procedures in order to improve the quality of the IQC. A quantitative planning process is described that can be implemented with graphical tools such as power function or critical-error graphs and charts of operating specifications. Finally, a total QC strategy is formulated to minimize cost and maximize quality. A general strategy for IQC implementation is recommended that employs a three-stage design in which the first stage provides high error detection, the second stage low false rejection and the third stage prescribes the length of the analytical run, making use of an algorithm involving the average of normal patients' data.

  2. Analysis of a resistance-energy balance method for estimating daily evaporation from wheat plots using one-time-of-day infrared temperature observations

    NASA Technical Reports Server (NTRS)

    Choudhury, B. J.; Idso, S. B.; Reginato, R. J.

    1986-01-01

    Accurate estimates of evaporation over field-scale or larger areas are needed in hydrologic studies, irrigation scheduling, and meteorology. Remotely sensed surface temperature might be used in a model to calculate evaporation. A resistance-energy balance model, which combines an energy balance equation, the Penman-Monteith (1981) evaporation equation, and van den Honert's (1948) equation for water extraction by plant roots, is analyzed for estimating daily evaporation from wheat using postnoon canopy temperature measurements. Additional data requirements are half-hourly averages of solar radiation, air and dew point temperatures, and wind speed, along with reasonable estimates of canopy emissivity, albedo, height, and leaf area index. Evaporation fluxes were measured in the field by precision weighing lysimeters for well-watered and water-stressed wheat. Errors in computed daily evaporation were generally less than 10 percent, while errors in cumulative evaporation for 10 clear sky days were less than 5 percent for both well-watered and water-stressed wheat. Some results from sensitivity analysis of the model are also given.

  3. Ghost imaging based on Pearson correlation coefficients

    NASA Astrophysics Data System (ADS)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Li, Long-Zhen; Zhai, Guang-Jie

    2015-05-01

Correspondence imaging is a new modality of ghost imaging, which can retrieve a positive/negative image by simple conditional averaging of the reference frames that correspond to relatively large/small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating a Pearson correlation coefficient between the bucket detector intensity and the brightness at each pixel of the reference frames. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the sampling number reduced by means of our new method. Project supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595) and the National High Technology Research and Development Program of China (Grant No. 2013AA122902).
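
    The per-pixel Pearson-correlation retrieval is straightforward to express; the sketch below reconstructs a hypothetical object from simulated speckle frames and bucket values. Sizes and the object shape are arbitrary illustrations.

```python
import numpy as np

# Ghost-image retrieval via per-pixel Pearson correlation between the
# bucket signal and the reference-frame brightness.
rng = np.random.default_rng(7)
M, H, W = 5000, 32, 32                    # samples, image size
obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                    # hypothetical object

frames = rng.random((M, H, W))            # speckle reference frames
bucket = np.tensordot(frames, obj, axes=([1, 2], [0, 1]))  # bucket values

# Pearson correlation coefficient, vectorized over all pixels
fc = frames - frames.mean(axis=0)         # centered frames
bc = bucket - bucket.mean()               # centered bucket signal
image = (fc * bc[:, None, None]).sum(0) / (
    np.sqrt((fc**2).sum(0)) * np.sqrt((bc**2).sum()))
print(image.max(), image.min())           # object pixels correlate strongly
```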

  4. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.

  5. The Use of a “Hybrid” Trainer in an Established Laparoscopic Skills Program

    PubMed Central

    Colsant, Brian J.; Lynch, Paul J.; Herman, Björn; Klonsky, Jonathan; Young, Steven M.

    2006-01-01

Objectives: Tabletop inanimate trainers have proven to be a safe, inexpensive, and convenient platform for developing laparoscopic skills. Historically, programs that utilize these trainers rely on subjective evaluation of errors and time as the only measures of performance. Virtual reality simulators offer more extensive data collection capability, but they are expensive and lack realism. This study reviews a new electronic proctor (EP) and its performance within the Rosser Top Gun Laparoscopic Skills and Suturing Program. This "hybrid" training device seeks to capture the strengths of both platforms by providing an affordable, reliable, realistic training arena with metrics to objectively evaluate performance. Methods: An electronic proctor was designed for use in conjunction with drills from the Top Gun Program. The tabletop trainers used were outfitted with an automated electromechanically monitored task arena. Subjects performed 10 repetitions of each of 3 drills: "Cup Drop," "Triangle Transfer," and "Intracorporeal Suturing." In real time, this device evaluates instrument targeting accuracy, economy of motion, and adherence to the rules of the exercises. A buzzer and flashing light serve to alert the student to inaccuracies and breaches of the defined skill transference parameters. Results: Between July 2001 and June 2003, 117 subjects participated in courses. Seventy-three who met data evaluation criteria were assessed and compared with 744 surgeons who had previously taken the course. The total time to complete each task was significantly longer with the EP in place. The Cup Drop drill with the EP had a mean total time of 1661 seconds (average, 166.10) with 54.49 errors (average, 5.45) vs. 1252 seconds (average, 125.2) without the EP (P=0.000, t=6.735, df=814). The Triangle Transfer drill mean total time was 556 seconds (average, 55.63) with 167.57 errors (average, 16.75) (EP) vs. 454 seconds (average, 45.4) (non-EP) (P=0.000, t=4.447, df=814). The mean total time of the suturing task was 1777 seconds (average, 177.73) with 90.46 errors (average, 9.04) (EP) vs. 1682 seconds (average, 168.2) (non-EP) (P=0.040, t=1.150, df=814). When compared with surgeons who had participated in the Top Gun course prior to the EP, the participants in the study collectively scored at the 18.3 percentile on the Cup Drop drill, the 22.6 percentile on the Triangle Transfer drill, and the 36.7 percentile on the Intracorporeal Suturing exercise. When penalized for errors recorded by the EP, participants scored collectively at the 9.9, 0.1, and 17.7 percentiles, respectively. No equipment failures occurred, and the agenda of the course did not have to be modified to accommodate the new platform. Conclusions: The EP utilized during the Top Gun course was introduced without modification of the core curriculum and experienced no device failures. This hybrid trainer offers a cost-effective inanimate simulator that brings quality performance monitoring to traditional inanimate trainers. It appears that the EP influenced student performance by alerting them to errors made, thus causing an increased awareness of and focus on precision and accuracy. This suggests that the EP could have internal guidance capabilities. However, validation studies must be done in the future. PMID:16709348

  6. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees

    PubMed Central

    2012-01-01

Background: Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which changes the EMG patterns produced when performing identical motions in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods: With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results: With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% from amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) to EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were included in the training set, the average classification error reached a value of around 10.8% for amputated arms. Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions: The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is a little stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. The two-stage cascade classifier mode, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
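
    A minimal sketch of the two-stage cascade mode described in the conclusions, using placeholder features and scikit-learn's LDA (a common choice for EMG pattern recognition; the paper's exact classifier is not specified here): ACC-MMG features select the arm position, then a position-specific EMG classifier decodes the motion.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Two-stage cascade: stage 1 identifies arm position from ACC-MMG,
# stage 2 classifies the motion with a position-specific EMG model.
# All feature arrays below are synthetic placeholders.
rng = np.random.default_rng(8)
n, n_pos, n_motion = 3000, 5, 6
acc = rng.normal(size=(n, 6))             # ACC-MMG features
emg = rng.normal(size=(n, 24))            # EMG time-domain features
pos = rng.integers(n_pos, size=n)         # arm-position labels
motion = rng.integers(n_motion, size=n)   # motion-class labels

stage1 = LDA().fit(acc, pos)              # position identification
stage2 = {p: LDA().fit(emg[pos == p], motion[pos == p])
          for p in range(n_pos)}          # per-position motion classifiers

def classify(acc_feat, emg_feat):
    p = stage1.predict(acc_feat.reshape(1, -1))[0]
    return stage2[p].predict(emg_feat.reshape(1, -1))[0]

print(classify(acc[0], emg[0]))
```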

  7. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion

    PubMed Central

    Ramirez Ramirez, L. Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

Background: The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Methods: Four individual models are employed to forecast ILI-GOPC both one week and two weeks in advance: a generalized linear model (GLM), the least absolute shrinkage and selection operator (LASSO), an autoregressive integrated moving average (ARIMA) model, and deep learning (DL) with feedforward neural networks (FNN). The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion, namely Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. Results: DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. Conclusions: The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient. PMID:28464015
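
    A minimal sketch of the model-fusion step, weighting each model by a Gaussian likelihood on held-out data and combining forecasts; this is a simplified stand-in for full BMA, which would also propagate per-model variances into a predictive mixture. All numbers are illustrative.

```python
import numpy as np

def bma_weights(y_true, preds, sigma):
    """Posterior-style weights from Gaussian log-likelihoods on a
    held-out window. preds: dict model_name -> prediction array."""
    logL = {m: -0.5 * np.sum((y_true - p)**2) / sigma**2
            for m, p in preds.items()}
    mx = max(logL.values())
    w = {m: np.exp(v - mx) for m, v in logL.items()}   # avoid underflow
    Z = sum(w.values())
    return {m: v / Z for m, v in w.items()}

y_val = np.array([10., 12., 9., 14.])
preds = {"GLM":   np.array([11., 12., 10., 13.]),
         "LASSO": np.array([10., 13., 9., 15.]),
         "ARIMA": np.array([9., 11., 8., 12.]),
         "DL":    np.array([10., 12., 9., 14.5])}
w = bma_weights(y_val, preds, sigma=1.0)
fused = sum(w[m] * preds[m] for m in preds)   # combined forecast
print(w, fused)
```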

  8. Landsat-8 TIRS thermal radiometric calibration status

    USGS Publications Warehouse

    Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron

    2017-01-01

The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm or -2.1K and -4.4K at 300K in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87K and 1.67K (1-σ), respectively. The average error was corrected in operational processing in January 2014; this adjustment, though, did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray-light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87K to 0.51K at 300K for Band 10 and from 1.67K to 0.84K at 300K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30K) and -0.04 W/m2 sr μm (-0.29K) for Bands 10 and 11, respectively.

  9. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. This was an experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated against computer-software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts, and there was high interobserver variance among them. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process one chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the field vision loss.

  10. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

Summary: This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
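
    For comparison, the classical Benjamini and Hochberg (1995) linear step-up procedure referenced above can be implemented in a few lines:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical BH linear step-up procedure: reject the k smallest
    p-values, where k is the largest i with p_(i) <= (i/m) * q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))  # rejects the two smallest
```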

  11. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  12. Increased error-related thalamic activity during early compared to late cocaine abstinence.

    PubMed

    Li, Chiang-Shan R; Luo, Xi; Sinha, Rajita; Rounsaville, Bruce J; Carroll, Kathleen M; Malison, Robert T; Ding, Yu-Shin; Zhang, Sheng; Ide, Jaime S

    2010-06-01

    Altered cognitive control is implicated in the shaping of cocaine dependence. One of the key component processes of cognitive control is error monitoring. Our previous imaging work highlighted greater activity in distinct cortical and subcortical regions including the dorsal anterior cingulate cortex (dACC), thalamus and insula when participants committed an error during the stop signal task (SST) (Li et al., 2008b). Importantly, dACC, thalamic and insular activity has been associated with drug craving. One hypothesis is that the intense interoceptive activity during craving prevents these cerebral structures from adequately registering error and/or monitoring performance. Alternatively, the dACC, thalamus and insula show abnormally heightened responses to performance errors, suggesting that excessive responses to salient stimuli such as drug cues could precipitate craving. The two hypotheses predict, respectively, decreased and increased activity during stop error (SE) as compared to stop success (SS) trials in the SST. Here we showed that cocaine-dependent patients (PCD) experienced greater subjective feelings of loss of control and cocaine craving during early (average of day 6) compared to late (average of day 18) abstinence. Furthermore, compared to PCD during late abstinence, PCD scanned during early abstinence showed increased thalamic as well as insular but not dACC responses to errors (SE>SS). These findings support the hypothesis that heightened thalamic reactivity to salient stimuli co-occurs with cocaine craving and loss of self-control. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
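
    A subcolumn generator of the kind discussed can be written compactly. The sketch below implements the baseline maximum-random overlap assumption that the abstract critiques (not the authors' improved generalized-overlap scheme); the function name and inputs are ours.

    ```python
    import numpy as np

    def subcolumns_max_random(cloud_frac, n_sub, seed=None):
        """Generate subcolumn cloud masks under maximum-random overlap.

        cloud_frac: layer cloud fractions, ordered top to bottom.
        Returns an (n_sub, n_lev) boolean array in which vertically
        contiguous cloudy layers overlap maximally and cloud blocks
        separated by clear air overlap randomly.
        """
        rng = np.random.default_rng(seed)
        c = np.asarray(cloud_frac)
        n_lev = c.size
        mask = np.zeros((n_sub, n_lev), dtype=bool)
        for s in range(n_sub):
            u = rng.random()
            for k in range(n_lev):
                if k > 0 and u > c[k - 1]:
                    # Clear above: redraw the rank within the clear part,
                    # which randomizes overlap across the clear gap.
                    u = c[k - 1] + rng.random() * (1.0 - c[k - 1])
                mask[s, k] = u <= c[k]
        return mask

    frac = [0.2, 0.5, 0.5, 0.0, 0.3]
    cols = subcolumns_max_random(frac, n_sub=10000, seed=1)
    print(cols.mean(axis=0))   # recovers the prescribed layer fractions
    ```

    Moving to generalized overlap amounts to partially decorrelating the rank u between layers with a separation-dependent weight, and condensate heterogeneity can be added by drawing each cloudy subcolumn's condensate from a distribution instead of assigning the layer-mean value.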

  14. SU-E-T-646: Quality Assurance of Truebeam Multi-Leaf Collimator Using a MLC QA Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Lu, J; Hong, D

    2015-06-15

    Purpose: To perform a routine quality assurance procedure for the Truebeam multi-leaf collimator (MLC) using an MLC QA phantom, and to verify the stability and reliability of the MLC during treatment. Methods: The MLC QA phantom is a specialized phantom for MLC quality assurance (QA) and contains five radio-opaque spheres embedded in an "L" shape. The phantom was placed isocentrically on the Truebeam treatment couch for the tests. A quality assurance plan was set up in Eclipse v10.0 containing the fields that need to be delivered to acquire the necessary images; the MLC shapes can then be obtained from the images. The images were acquired by the electronic portal imaging device (EPID) and imported into the PIPSpro software for analysis. The tests were delivered once a week for twelve weeks to verify the consistency of the delivery, and the images were acquired in the same manner each time. Results: For the leaf position test, the average position error was 0.23mm±0.02mm (range: 0.18mm∼0.25mm). The leaf width was measured at the isocenter; the average error was 0.06mm±0.02mm (range: 0.02mm∼0.08mm) for the leaf width test. The Multi-Port test showed the dynamic leaf shift error; the average error was 0.28mm±0.03mm (range: 0.2mm∼0.35mm). For the leaf transmission test, the average inter-leaf leakage value was 1.0%±0.17% (range: 0.8%∼1.3%) and the average inter-bank leakage value was 32.6%±2.1% (range: 30.2%∼36.1%). Conclusion: Over the 12 weeks of testing, the MLC system of the Truebeam ran in good condition and delivered steadily and reliably during treatment. The MLC QA phantom is a useful test tool for MLC QA.

  15. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors were performed. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (κ) was computed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.
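
    Both named statistics are available off the shelf in scientific Python. The sketch below uses invented per-case indicator data (not the study's counts) to show how an association between an error type and a contributory factor might be tested; the variable names are hypothetical.

    ```python
    from scipy.stats import fisher_exact
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-case binary indicators (1 = present), illustrative only
    lack_of_knowledge = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1]
    wrong_dose        = [1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0]

    # 2x2 cross-tabulation for Fisher's exact test
    a = sum(x and y for x, y in zip(lack_of_knowledge, wrong_dose))
    b = sum(x and not y for x, y in zip(lack_of_knowledge, wrong_dose))
    c = sum((not x) and y for x, y in zip(lack_of_knowledge, wrong_dose))
    d = sum(not (x or y) for x, y in zip(lack_of_knowledge, wrong_dose))
    odds_ratio, p = fisher_exact([[a, b], [c, d]])

    # Cohen's kappa as a measure of the magnitude/direction of association
    kappa = cohen_kappa_score(lack_of_knowledge, wrong_dose)
    print(f"OR = {odds_ratio:.2f}, p = {p:.3f}, kappa = {kappa:.2f}")
    ```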

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chengqiang, L; Yin, Y; Chen, L

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf position were introduced respectively into the original plans to create the simulated plans. Dosimetry factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and shift errors of MLC position was insignificant within 2 mm: the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%; the maximum changes in the D0.1cc of spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%; and the maximum changes in the Dmean of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%; the average D0.1cc to spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the average mean dose to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target organs and anatomical structures during the treatment course, and individualized adaptive radiotherapy is recommended to ensure adequate doses.

  17. Quantifying the impact of daily and seasonal variation in sap pH on xylem dissolved inorganic carbon estimates in plum trees.

    PubMed

    Erda, F G; Bloemen, J; Steppe, K

    2014-01-01

    In studies on internal CO2 transport, average xylem sap pH (pH(x)) is one of the factors used for calculation of the concentration of dissolved inorganic carbon in the xylem sap ([CO2 *]). Lack of detailed pH(x) measurements at high temporal resolution could be a potential source of error when evaluating [CO2*] dynamics. In this experiment, we performed continuous measurements of CO2 concentration ([CO2]) and stem temperature (T(stem)), complemented with pH(x) measurements at 30-min intervals during the day at various stages of the growing season (Day of the Year (DOY): 86 (late winter), 128 (mid-spring) and 155 (early summer)) on a plum tree (Prunus domestica L. cv. Reine Claude d'Oullins). We used the recorded pH(x) to calculate [CO2*] based on T(stem) and the corresponding measured [CO2]. No statistically significant difference was found between mean [CO2*] calculated with instantaneous pH(x) and daily average pH(x). However, using an average pH(x) value from a different part of the growing season than the measurements of [CO2] and T(stem) to estimate [CO2*] led to a statistically significant error. The error varied between 3.25 ± 0.01% under-estimation and 3.97 ± 0.01% over-estimation, relative to the true [CO2*] data. Measured pH(x) did not show a significant daily variation, unlike [CO2], which increased during the day and declined at night. As the growing season progressed, daily average [CO2] (3.4%, 5.3%, 7.4%) increased and average pH(x) (5.43, 5.29, 5.20) decreased. Increase in [CO2] will increase its solubility in xylem sap according to Henry's law, and the dissociation of [CO2*] will negatively affect pH(x). Our results are the first quantifying the error in [CO2*] due to the interaction between [CO2] and pH(x) on a seasonal time scale. We found significant changes in pH(x) across the growing season, but overall the effect on the calculation of [CO2*] remained within an error range of 4%. However, it is possible that the error could be more substantial for other tree species, particularly if pH(x) is in the more sensitive range (pH(x) > 6.5). © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.
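
    The conversion described here, from gaseous [CO2], stem temperature and sap pH to [CO2*], follows Henry's law plus the first dissociation of carbonic acid. The sketch below uses generic textbook constants with a van 't Hoff temperature correction; the authors' exact coefficients may differ, so treat it as illustrative.

    ```python
    import math

    def co2_star(co2_gas_frac, temp_c, ph_x):
        """Approximate [CO2*] = dissolved CO2 + HCO3- in xylem sap (mol/L).

        co2_gas_frac: gaseous [CO2] as a fraction (e.g. 0.053 for 5.3%),
        treated as a partial pressure in atm at ~1 atm total pressure.
        Constants are generic 25 C literature values, not the paper's.
        """
        T = temp_c + 273.15
        # Henry's constant for CO2 in water (mol L^-1 atm^-1), van 't Hoff
        kH = 0.034 * math.exp(2400.0 * (1.0 / T - 1.0 / 298.15))
        co2_aq = kH * co2_gas_frac
        # First dissociation: [HCO3-] = K1 [CO2aq] / [H+], pK1 ~ 6.35 at 25 C
        hco3 = 10.0 ** -6.35 * co2_aq / (10.0 ** -ph_x)
        return co2_aq + hco3

    # Mid-spring values from the abstract: [CO2] = 5.3%, pHx = 5.29
    print(f"[CO2*] ~ {co2_star(0.053, 20.0, 5.29) * 1e3:.2f} mmol/L")
    ```

    The pH sensitivity noted in the conclusion falls out of the second term: above pH(x) of roughly 6.5 the bicarbonate contribution dominates, so the same pH error produces a much larger [CO2*] error.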

  18. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    DOE PAGES

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques. In these methods, a temperature gradient is applied across two contacting beams, and the temperature drop at the interface is inferred from the temperature profiles of the beams, which are measured at discrete points. During data analysis, thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. A generalized theory is presented that accounts for temperature-dependent changes in thermal conductivity. The procedure presented enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values that are about 15% below the actual value for two specific examples of copper and silicon contacts. Conversely, the generalized technique predicts TCR values that are within 1% of the actual value. The method is exact when thermal conductivity is known exactly and no other errors are introduced to the system.
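
    One way to realize such a generalized procedure is through the Kirchhoff potential θ(T) = ∫k(T) dT, which varies linearly with position in one-dimensional steady conduction even when k depends on temperature. The sketch below follows that route with hypothetical thermocouple data and a made-up k(T); the names and numbers are ours, not the paper's.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def tcr(x_up, T_up, x_dn, T_dn, k, x_int):
        """Estimate TCR when conductivity k(T) varies with temperature.

        x_*, T_*: thermocouple positions (m) and temperatures (K) in the
        upstream/downstream bars; k: conductivity W/(m K); x_int: interface
        location (m). Fits the Kirchhoff potential, extrapolates it to the
        interface on both sides, and inverts back to temperatures.
        """
        theta = lambda T: quad(k, 300.0, T)[0]     # reference T is arbitrary
        inv = lambda th: brentq(lambda T: theta(T) - th, 100.0, 2000.0)

        s1, b1 = np.polyfit(x_up, [theta(T) for T in T_up], 1)
        s2, b2 = np.polyfit(x_dn, [theta(T) for T in T_dn], 1)
        q = -0.5 * (s1 + s2)                       # heat flux, W/m^2
        T_hot = inv(s1 * x_int + b1)               # interface temperatures
        T_cold = inv(s2 * x_int + b2)              # from each side's fit
        return (T_hot - T_cold) / q                # TCR in m^2 K/W

    k = lambda T: 420.0 - 0.07 * T                 # hypothetical copper-like k(T)
    x_up = np.array([0.01, 0.02, 0.03]); T_up = np.array([520.0, 500.0, 480.0])
    x_dn = np.array([0.05, 0.06, 0.07]); T_dn = np.array([430.0, 410.0, 390.0])
    print(f"TCR ~ {tcr(x_up, T_up, x_dn, T_dn, k, 0.04):.2e} m^2 K/W")
    ```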

  19. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  20. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    PubMed

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. (c) 2013 APA, all rights reserved.

  1. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  2. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).

  3. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
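
    The paired percent/dB figures quoted above are two views of the same ratio, related by 10 log10(MSE_before / MSE_after); a quick check reproduces the parenthesized dB values.

    ```python
    import math

    def mse_reduction_db(reduction_pct):
        """Convert a fractional MSE reduction into the equivalent dB gain."""
        return 10.0 * math.log10(1.0 / (1.0 - reduction_pct / 100.0))

    for pct in (33.78, 49.88, 42.35):   # reductions quoted in the abstract
        print(f"{pct}% MSE reduction = {mse_reduction_db(pct):.2f} dB")
    # -> 1.79 dB, 3.00 dB, 2.39 dB, matching the quoted values
    ```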

  4. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue from the original report: bar charts of position estimation error (cm) for global pose, comparing a color-statistics slip estimator (Angelova et al.) against the average slip error; accompanying text notes that pose and odometry data were collected at the Taylor, Gascola, Somerset, and Fort Bliss test sites.]

  5. Standardized Protocol for Virtual Surgical Plan and 3-Dimensional Surgical Template-Assisted Single-Stage Mandible Contour Surgery.

    PubMed

    Fu, Xi; Qiao, Jia; Girod, Sabine; Niu, Feng; Liu, Jian Feng; Lee, Gordon K; Gui, Lai

    2017-09-01

    Mandible contour surgery, including reduction gonioplasty and genioplasty, has become increasingly popular in East Asia. However, it is technically challenging and, hence, leads to a long learning curve and high complication rates and often needs secondary revisions. The increasing use of 3-dimensional (3-D) technology makes accurate single-stage mandible contour surgery with minimum complication rates possible with a virtual surgical plan (VSP) and 3-D surgical templates. This study aims to establish a standardized protocol for VSP and 3-D surgical template-assisted mandible contour surgery and to evaluate the accuracy of the protocol. In this study, we enrolled 20 patients for mandible contour surgery. Our protocol is to perform VSP based on 3-D computed tomography data. Surgical templates were then designed and 3-D printed based on the preoperative VSP. The accuracy of the method was analyzed by 3-D comparison of the VSP and postoperative results using detailed computer analysis. All patients had symmetric, natural osteotomy lines and satisfactory facial ratios in a single-stage operation. The average relative error of VSP and postoperative result on the entire skull was 0.41 ± 0.13 mm. The average new left gonial error was 0.43 ± 0.77 mm. The average new right gonial error was 0.45 ± 0.69 mm. The average pogonion error was 0.79 ± 1.21 mm. Patients were very satisfied with the aesthetic results. Surgeons were very satisfied with the performance of surgical templates to facilitate the operation. Our standardized protocol of VSP and 3-D printed surgical template-assisted single-stage mandible contour surgery results in accurate, safe, and predictable outcomes in a single stage.

  6. The early results of excimer laser photorefractive keratectomy for compound myopic astigmatism.

    PubMed

    Horgan, S E; Pearson, R V

    1996-01-01

    An excimer laser (VISX Twenty/Twenty Excimer Refractive System) was used to treat 51 eyes for myopia and astigmatism. Uncorrected pretreatment visual acuity was between 6/18 and 6/60 (log unit +0.45 to +1.0) in 59% and worse than 6/60 in 29%. The mean pretreatment spherical refractive error was -4.05 dioptre (range 1.25 to 13.25), and the mean pretreatment cylindrical error was -0.97 dioptre (range 0.25 to 4.00). Uncorrected visual acuity measured 6/6 or better (log unit 0.0 or less) in 80% at three months, and averaged 6/6 for all eyes at six months post-treatment, with 75% eyes obtaining 6/6 or better. The mean post-treatment spherical error decayed according to pre-treatment values, with a mean sphere of -0.20 dioptre for eyes initially less than -2.00 dioptre, -0.40 dioptre (for those between -2.25 and -3.00), -0.71 dioptre (for those between -4.25 and -5.00), and -1.15 dioptre for eyes initially above -6.25 dioptre. Vectored cylindrical correction exhibited response proportional to initial refraction, with a mean post-treatment cylinder of -1.83 dioptre for eyes formerly averaging -3.08 dioptre, -0.55 dioptre (eyes initially averaging -1.63 dioptre), and -0.51 dioptre (eyes initially averaging -0.67 dioptre). Vector analysis of post-treatment astigmatism showed 58% eyes exhibiting 51 or more degrees of axis shift, although 34% eyes remained within 20 degrees of their pretreatment axis. An effective reduction in spherocylindrical error was achieved with all eyes, although axis misalignment was a common event.

  7. Refractive errors in patients with newly diagnosed diabetes mellitus.

    PubMed

    Yarbağ, Abdülhekim; Yazar, Hayrullah; Akdoğan, Mehmet; Pekgör, Ahmet; Kaleli, Suleyman

    2015-01-01

    Diabetes mellitus is a complex metabolic disorder that involves the small blood vessels, often causing widespread damage to tissues, including changes in the eye's refractive error. In patients with newly diagnosed diabetes mellitus who have unstable blood glucose levels, refraction may be measured incorrectly. We aimed to investigate refraction in patients who were recently diagnosed with diabetes and treated at our centre. This prospective study was performed from February 2013 to January 2014. Patients were diagnosed with diabetes mellitus using laboratory biochemical tests and clinical examination. Venous fasting plasma glucose (fpg) levels were measured along with refractive errors. Two measurements were taken: initially and after four weeks. The difference between the initial and final refractive measurements was evaluated. Our patients were 100 males and 30 females who had been newly diagnosed with type II DM. The refractive and fpg levels were measured twice in all patients. The average values of the initial measurements were as follows: fpg level, 415 mg/dl; average refractive value, +2.5 D (dioptres). The average end-of-period measurements were: fpg, 203 mg/dl; average refractive value, +0.75 D. There was a statistically significant difference between the four-week and initial measurements of fasting plasma glucose (p<0.05), a statistically significant relationship between changes in fpg and changes in the glasses prescription (p<0.05), and the disappearance of blurred vision (a success rate greater than 50%) was also statistically significant (p<0.05). No age or sex effects were detected in any of these results (p>0.05). Refractive error is affected in patients with newly diagnosed diabetes mellitus; therefore, plasma glucose levels should be considered in the selection of glasses.

  8. Quality assurance of dynamic parameters in volumetric modulated arc therapy.

    PubMed

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-07-01

    The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Three tests (for gantry position-dose delivery synchronisation, gantry speed-dose delivery synchronisation and MLC leaf speed and positions) were performed. The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the "beginning" and "end" errors. For MLC position verification, the maximum error was -2.46 mm and the mean error was 0.0153 ± 0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. This experiment demonstrates that the variables and parameters of the Synergy S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC.
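
    Checks like the gantry position-dose synchronisation test reduce to fitting the intended relationship and inspecting the residuals. Below is a sketch with an invented QA log for the linear case (cumulative MU designed to grow linearly with gantry angle); the numbers are illustrative, not the study's.

    ```python
    import numpy as np

    # Hypothetical QA log: gantry angle (deg) vs cumulative delivered MU,
    # for an arc plan in which MU should be linear in gantry angle.
    gantry = np.linspace(0.0, 180.0, 19)
    mu = 2.0 * gantry + np.random.default_rng(2).normal(0.0, 1.5, gantry.size)

    slope, intercept = np.polyfit(gantry, mu, 1)
    residuals = mu - (slope * gantry + intercept)
    print(f"max MU deviation    = {np.max(np.abs(residuals)):.1f} MU")
    print(f"gantry-equiv. error = {np.max(np.abs(residuals)) / slope:.2f} deg")
    ```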

  9. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    NASA Astrophysics Data System (ADS)

    Charonko, John J.; Vlachos, Pavlos P.

    2013-06-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
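
    The peak ratio itself is cheap to extract from the correlation plane. The sketch below uses standard FFT-based cross-correlation; the paper's preferred phase-only generalized cross-correlation would additionally normalize the cross-spectrum to unit magnitude before the inverse transform. The function name and test images are ours.

    ```python
    import numpy as np

    def peak_ratio(window_a, window_b):
        """Primary-to-secondary correlation peak ratio for one PIV window.

        A larger ratio indicates a more trustworthy displacement estimate
        and, per the paper, a tighter uncertainty bound.
        """
        a = window_a - window_a.mean()
        b = window_b - window_b.mean()
        corr = np.fft.fftshift(np.abs(np.fft.ifft2(
            np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))
        p1 = corr.max()
        i, j = np.unravel_index(corr.argmax(), corr.shape)
        masked = corr.copy()                 # blank out a 3x3 neighborhood
        masked[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = 0.0
        return p1 / masked.max()             # primary over secondary peak

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    shifted = np.roll(img, (2, 3), axis=(0, 1))   # known 2x3-pixel shift
    print(f"peak ratio = {peak_ratio(img, shifted):.1f}")
    ```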

  10. Multimodel ensembles of wheat growth: many models are better than one.

    PubMed

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost

    2015-02-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.
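
    The e-mean and e-median estimators are trivial to form once the per-model simulations are collected; the sketch below uses invented yields (not the study's data) to compare them against the best single model.

    ```python
    import numpy as np

    # Hypothetical grain-yield simulations (t/ha): rows = models, cols = sites
    sims = np.array([[6.1, 4.2, 7.8, 5.0],
                     [5.4, 3.9, 8.6, 4.4],
                     [7.0, 4.8, 7.1, 5.9],
                     [5.9, 3.1, 9.0, 4.1],
                     [6.6, 4.5, 8.2, 5.2]])
    obs = np.array([6.0, 4.0, 8.0, 5.0])   # invented observations

    def rrmse(pred, obs):
        """Relative root-mean-square error, in percent."""
        return 100.0 * np.sqrt(np.mean(((pred - obs) / obs) ** 2))

    e_mean = sims.mean(axis=0)             # ensemble-mean estimator
    e_median = np.median(sims, axis=0)     # ensemble-median estimator
    best_single = min(rrmse(m, obs) for m in sims)
    print(f"best single model: {best_single:.1f}%")
    print(f"e-mean: {rrmse(e_mean, obs):.1f}%, e-median: {rrmse(e_median, obs):.1f}%")
    ```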

  11. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
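
    The specific codes compared are historical, but their methods persist: SciPy's BDF integrator descends from the same Gear-type backward-differentiation family as LSODE. The sketch below exercises it on Robertson's kinetics problem, a standard stiff stand-in, since the paper's combustion mechanisms are not reproduced here.

    ```python
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        """Robertson's stiff chemical kinetics: three species, rate
        constants spanning nine orders of magnitude."""
        y1, y2, y3 = y
        return [-0.04 * y1 + 1e4 * y2 * y3,
                0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                3e7 * y2 ** 2]

    sol = solve_ivp(robertson, (0.0, 1e5), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1e-8, atol=1e-10)
    # Mass fractions should still sum to ~1 if the integration is accurate
    print(sol.y[:, -1], "sum =", sol.y[:, -1].sum())
    ```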

  12. Multimodel Ensembles of Wheat Growth: More Models are Better than One

    NASA Technical Reports Server (NTRS)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alex C.; Thorburn, Peter J.; Cammarano, Davide; ...

    2015-01-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.

  13. Multimodel Ensembles of Wheat Growth: Many Models are Better than One

    NASA Technical Reports Server (NTRS)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alexander C.; Thorburn, Peter J.; Cammarano, Davide; ...

    2015-01-01

    Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.

  14. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
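
    The conventional baseline the CNN is compared against, averaging pairwise control-label subtractions, is a one-liner; the sketch below uses synthetic arrays to show the reference the network learns to reproduce from only two or three pairs. All data here are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    perf_true = rng.random((64, 64))                   # stand-in perfusion map
    control = perf_true + rng.normal(0, 0.5, (6, 64, 64))
    label = rng.normal(0, 0.5, (6, 64, 64))

    # Conventional method: average all pairwise subtractions
    perf_all = (control - label).mean(axis=0)          # 6-pair reference
    perf_few = (control - label)[:2].mean(axis=0)      # 2-pair input to a CNN

    mse = lambda a, b: float(np.mean((a - b) ** 2))
    print(f"MSE, 2-pair average vs 6-pair reference: {mse(perf_few, perf_all):.4f}")
    ```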

  15. Mapping DNA polymerase errors by single-molecule sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David F.; Lu, Jenny; Chang, Seungwoo

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.

  16. Mapping DNA polymerase errors by single-molecule sequencing

    DOE PAGES

    Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...

    2016-05-16

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
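
    The barcoding strategy boils down to grouping reads by barcode and taking a per-base majority vote: errors random across reads are voted out, while a polymerase error shared by all reads of one product survives. A toy sketch (hypothetical function name and reads):

    ```python
    from collections import Counter, defaultdict

    def consensus_by_barcode(reads):
        """Collapse (barcode, sequence) pairs into per-barcode consensus
        sequences by per-base majority vote; sequences must share length."""
        groups = defaultdict(list)
        for barcode, seq in reads:
            groups[barcode].append(seq)
        return {bc: "".join(Counter(bases).most_common(1)[0][0]
                            for bases in zip(*seqs))
                for bc, seqs in groups.items()}

    reads = [("AACG", "ACGTACGT"),   # three reads of one product; the
             ("AACG", "ACGTACGA"),   # last base of read 2 is a sequencing
             ("AACG", "ACGTACGT"),   # error and is voted out
             ("TTGC", "ACGAACGT")]   # second product: a real (polymerase)
    print(consensus_by_barcode(reads))  # substitution survives at base 4
    ```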

  17. August median streamflow on ungaged streams in Eastern Coastal Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2004-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in eastern coastal Maine. The methods apply to streams with drainage areas ranging in size from 0.04 to 73.2 square miles and fraction of basin underlain by a sand and gravel aquifer ranging from 0 to 71 percent. The equations were developed with data from three long-term (greater than or equal to 10 years of record) continuous-record streamflow-gaging stations, 23 partial-record streamflow-gaging stations, and 5 short-term (less than 10 years of record) continuous-record streamflow-gaging stations. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record streamflow-gaging stations and short-term continuous-record streamflow-gaging stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at streamflow-gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for different periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Thirty-one stations were used for the final regression equations. Two basin characteristics, drainage area and fraction of basin underlain by a sand and gravel aquifer, are used in the calculated regression equation to estimate August median streamflow for ungaged streams. The equation has an average standard error of prediction from -27 to 38 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -30 to 43 percent. Model error is larger than sampling error for both equations, indicating that additional or improved estimates of basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow at partial-record or continuous-record gaging stations range from 0.003 to 31.0 cubic feet per second or from 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in eastern coastal Maine, within the range of acceptable explanatory variables, range from 0.003 to 45 cubic feet per second or 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as drainage area and fraction of basin underlain by a sand and gravel aquifer increase.
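
    Regional GLS equations of this kind are log-linear in the basin characteristics. The sketch below shows only the functional form; the coefficients a, b and c are placeholders, not the published Maine values, which must be taken from the report itself.

    ```python
    def august_median_q(drainage_area_mi2, aquifer_fraction,
                        a=0.3, b=1.0, c=1.5):
        """Illustrative two-variable low-flow regression of the form
        Q = a * DA**b * 10**(c * AQ), where DA is drainage area (mi^2)
        and AQ is the fraction of the basin underlain by a sand and
        gravel aquifer. Coefficients here are placeholders only.
        """
        return a * drainage_area_mi2 ** b * 10.0 ** (c * aquifer_fraction)

    # Hypothetical basin: 10 mi^2, 25% underlain by a sand/gravel aquifer
    print(f"{august_median_q(10.0, 0.25):.2f} ft^3/s")
    ```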

  18. CORRIGENDUM of the MJO Transition from Shallow to Deep Convection in Cloudsat-Calipso Data and GISS GCM Simulations

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony; Chen, Yonghua; Kim, Daehyun; Yao, Mao-Sung

    2015-01-01

    We have identified several errors in the calculations that were performed to create Fig. 3 of Del Genio et al. (2012). These errors affect the composite evolution of precipitation and column water vapor versus lag relative to the Madden-Julian oscillation (MJO) peak presented in that figure. The precipitation and column water vapor data for the April and November 2009 MJO events were composited incorrectly because the date of the MJO peak at a given longitude was assigned to the incorrect longitude band. In addition, the precipitation data for all MJO events were first accumulated daily and the daily accumulations averaged at each lag to create the composite, rather than the averaging of instantaneous values that was used for other composite figures in the paper. One poorly sampled day in the west Pacific therefore biases the composite precipitation in that region at several lags after the MJO peak. Finally, a 4-day running mean was mistakenly applied to the precipitation and column water vapor data rather than the intended 5-day running mean. The results of the corrections are that an anomalous west Pacific precipitation maximum 5-10 days after the MJO peak is removed and the maximum in west Pacific precipitation one pentad before the MJO peak is now more evident; there is now a clear maximum in precipitation for the entire warm pool one pentad before the MJO peak; west Pacific column water vapor now varies more strongly as a function of lag relative to the peak; and precipitation, and to a lesser extent column water vapor, in general vary more smoothly with time. The corrections do not affect any other parts of the paper nor do they change the scientific conclusions we reached. The 4-day running mean error also affects Figs. 1 and 2 therein, with almost imperceptible impacts that do not affect any results or necessitate major changes to the text.

  19. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  20. Combining automated peak tracking in SAR by NMR with structure-based backbone assignment from 15N-NOESY

    PubMed Central

    2012-01-01

    Background Chemical shift mapping is an important technique in NMR-based drug screening for identifying the atoms of a target protein that potentially bind to a drug molecule upon the molecule's introduction in increasing concentrations. The goal is to obtain a mapping of peaks with known residue assignment from the reference spectrum of the unbound protein to peaks with unknown assignment in the target spectrum of the bound protein. Although a series of perturbed spectra help to trace a path from reference peaks to target peaks, a one-to-one mapping generally is not possible, especially for large proteins, due to errors, such as noise peaks, missing peaks, missing but then reappearing, overlapped, and new peaks not associated with any peaks in the reference. Due to these difficulties, the mapping is typically done manually or semi-automatically, which is not efficient for high-throughput drug screening. Results We present PeakWalker, a novel peak walking algorithm for fast-exchange systems that models the errors explicitly and performs many-to-one mapping. On the proteins: hBclXL, UbcH5B, and histone H1, it achieves an average accuracy of over 95% with less than 1.5 residues predicted per target peak. Given these mappings as input, we present PeakAssigner, a novel combined structure-based backbone resonance and NOE assignment algorithm that uses just 15N-NOESY, while avoiding TOCSY experiments and 13C-labeling, to resolve the ambiguities for a one-to-one mapping. On the three proteins, it achieves an average accuracy of 94% or better. Conclusions Our mathematical programming approach for modeling chemical shift mapping as a graph problem, while modeling the errors directly, is potentially a time- and cost-effective first step for high-throughput drug screening based on limited NMR data and homologous 3D structures. PMID:22536902
