Sample records for water application errors

  1. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
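
    A minimal numerical sketch of the propagation-of-error bookkeeping described above (all standard deviations and the correlation are illustrative placeholders, not Skylab values):

      import numpy as np

      # Hypothetical daily error terms (standard deviations) for a water balance
      # B = intake - urine - evaporation - delta_body_mass (all in g/day).
      sd = {"intake": 50.0, "urine": 40.0, "evaporation": 80.0, "body_mass": 180.0}

      # Independent-term contribution: variances add.
      var_independent = sum(s**2 for s in sd.values())

      # Interaction (covariance) contribution, here assumed small, e.g. weakly
      # correlated intake and urine measurements (r = 0.2).
      r_intake_urine = 0.2
      cov_term = 2.0 * r_intake_urine * sd["intake"] * sd["urine"]

      var_total = var_independent + cov_term
      print(f"total sd of balance: {np.sqrt(var_total):.1f} g/day")
      print(f"share from body-mass term: {sd['body_mass']**2 / var_total:.1%}")
      print(f"share from covariances:   {cov_term / var_total:.1%}")

    With these invented numbers the body-mass variance dominates (~74%) and the covariance term stays under 10%, mirroring the qualitative finding of the abstract.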

  2. Pedal Application Errors

    DOT National Transportation Integrated Search

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  3. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  4. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the

  5. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  6. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  7. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the

  8. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  9. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367

  10. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance
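
    For the redundant-data case, one standard way such error propagation plays out is inverse-variance weighting; the sketch below is a generic illustration under that assumption, not necessarily the fusion scheme used by the authors (all numbers hypothetical):

      # Two redundant observations of the same water-quality variable (e.g.
      # chlorophyll-a) with different error levels. Values are illustrative.
      x_rs, sd_rs = 12.4, 3.0   # remote-sensing estimate and its 1-sigma error
      x_in, sd_in = 10.1, 0.8   # in-situ measurement and its 1-sigma error

      # Inverse-variance weighting: the minimum-variance linear fusion of two
      # unbiased, independent estimates.
      w_rs, w_in = 1 / sd_rs**2, 1 / sd_in**2
      x_fused = (w_rs * x_rs + w_in * x_in) / (w_rs + w_in)
      sd_fused = (w_rs + w_in) ** -0.5

      print(f"fused estimate: {x_fused:.2f} +/- {sd_fused:.2f}")
      # The fused error is never larger than the smaller input error, which is
      # what makes redundant fusion attractive; complementary and cooperative
      # fusion instead require a forward model, through which the input errors
      # must be propagated.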

  11. Effects of vertical distribution of water vapor and temperature on total column water vapor retrieval error

    NASA Technical Reports Server (NTRS)

    Sun, Jielun

    1993-01-01

    Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.

  12. Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.

    PubMed

    Barrs, H D

    1965-07-02

    A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.

  13. Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.

    PubMed

    Rawlins, S L

    1964-10-30

    To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.

  14. Apoplastic water fraction and rehydration techniques introduce significant errors in measurements of relative water content and osmotic potential in plant leaves.

    PubMed

    Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J

    2015-12-01

    Relative water content (RWC) and the osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation of plants. We estimated the magnitude of errors that are introduced by dilution of π from apoplastic water in osmometry methods and the errors that occur during rehydration of leaves for RWC and π in 14 different plant species from trees, grasses and herbs. Our data indicate that rehydration technique and length of rehydration can introduce significant errors in both RWC and π. Leaves from all species were fully turgid after 1-3 h of rehydration and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the least errors while rehydration via floating disks and submerging leaves for rehydration led to a greater underprediction of RWC. The same effect was also observed for π. The π values following standing rehydration could be corrected by applying a dilution factor from apoplastic water dilution using an osmometric method but not by using apoplastic water fraction (AWF) from pressure volume (PV) curves. The apoplastic water dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the use of the standing rehydration method because (1) the correct rehydration time can be evaluated by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected by using osmometric methods to estimate apoplastic water dilution. © 2015 Scandinavian Plant Physiology Society.
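
    For concreteness, a small sketch of the two quantities discussed above: RWC from fresh, turgid (after standing rehydration), and oven-dry masses, plus an apoplastic-dilution correction of osmotic potential. The correction form pi_corrected = pi_measured / (1 - a) assumes the apoplastic water fraction a uniformly dilutes the symplastic solutes; all masses and fractions are illustrative:

      # Relative water content from fresh, turgid, and oven-dry masses (g).
      m_fresh, m_turgid, m_dry = 1.32, 1.50, 0.30
      rwc = (m_fresh - m_dry) / (m_turgid - m_dry)
      print(f"RWC = {rwc:.2f}")  # 0.85

      # An osmometer reading on expressed sap is diluted by apoplastic water.
      # Assuming the apoplastic fraction a carries no solutes and mixes
      # uniformly into the sap: pi_corrected = pi_measured / (1 - a), in MPa.
      pi_measured = -1.10
      a = 0.10                  # apoplastic fraction from an osmometric method
      pi_corrected = pi_measured / (1 - a)
      print(f"corrected osmotic potential = {pi_corrected:.2f} MPa")  # -1.22

    With a = 0.10 the correction is about 11%, inside the 5-18% dilution-error range reported in the abstract.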

  15. Estimation of the optical errors on the luminescence imaging of water for proton beam

    NASA Astrophysics Data System (ADS)

    Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi

    2018-04-01

    Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to two optical phenomena: parallax errors of the optical system and reflection of the luminescence from the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled device camera while changing the height of the optical axis of the camera relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.

  16. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing

  17. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  18. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  19. 26 CFR 1.1312-8 - Law applicable in determination of error.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Law applicable in determination of error. The question whether there was an erroneous inclusion... provisions of the internal revenue laws applicable with respect to the year as to which the inclusion.... The fact that the inclusion, exclusion, omission, allowance, disallowance, recognition, or...

  20. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    GNSS-based single-antenna pseudo-attitude determination has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and absence of temporally accumulated errors. Related research indicates that this method can be an important complement or even an alternative to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has just started. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but they lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic

  1. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVRs operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  2. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under
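
    A toy sketch in the spirit of the HRU error metric described above: the area-weighted share of a subbasin misrepresented when aggregation keeps only the majority land-cover class. This is an illustrative stand-in under that reading of the metric, not the paper's exact formula:

      import numpy as np

      # Grid cells in one subbasin, each with an area and a land-cover class.
      # A candidate discretization lumps them into a single HRU that carries
      # only the majority class. Values are illustrative.
      areas = np.array([1.0, 1.0, 2.0, 0.5, 1.5])            # km^2
      classes = np.array(["forest", "forest", "crop", "urban", "forest"])

      # Majority (area-weighted) class that the aggregated HRU would carry.
      labels, idx = np.unique(classes, return_inverse=True)
      class_area = np.bincount(idx, weights=areas)
      majority = labels[class_area.argmax()]

      # A priori information-loss metric: share of area misrepresented by the
      # HRU, computable before any model run or calibration.
      loss = areas[classes != majority].sum() / areas.sum()
      print(f"HRU majority class: {majority}, information loss: {loss:.1%}")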

  3. FIBER AND INTEGRATED OPTICS, LASER APPLICATIONS, AND OTHER PROBLEMS IN QUANTUM ELECTRONICS: Raman scattering spectra recorded in the course of the water-ice phase transition and laser diagnostics of heterophase water systems

    NASA Astrophysics Data System (ADS)

    Glushkov, S. M.; Panchishin, I. M.; Fadeev, V. V.

    1989-04-01

    The method of laser Raman spectroscopy was used to study heterophase water systems. The apparatus included an argon laser, an optical multichannel analyzer, and a microcomputer. The temperature dependences of the profiles of the valence (stretching) band in the Raman spectrum of liquid water between + 50 °C and - 7 °C and of polycrystalline ice Ih (from 0 to - 62 °C) were determined, as well as the spectral polarization characteristics of the Raman valence band. A method was developed for the determination of the partial concentrations of the H2O molecules in liquid and solid phases present as a mixture. An analysis was made of the errors of the method and the sources of these errors. Applications of the method to multiparameter problems in more complex water systems (for example, solutions of potassium iodide in water) were considered. Other potential practical applications of the method were discussed.

  4. Refractive Errors in Northern China Between the Residents with Drinking Water Containing Excessive Fluorine and Normal Drinking Water.

    PubMed

    Bin, Ge; Liu, Haifeng; Zhao, Chunyuan; Zhou, Guangkai; Ding, Xuchen; Zhang, Na; Xu, Yongfang; Qi, Yanhua

    2016-10-01

    The purpose of this study was to evaluate the refractive errors and the demographic associations between drinking water with excessive fluoride and normal drinking water among residents in Northern China. Of the 1843 residents, 1415 (aged ≥40 years) were divided into drinking-water-excessive fluoride (DWEF) group (>1.20 mg/L) and control group (≤1.20 mg/L) on the basis of the fluoride concentrations in drinking water. Of the 221 subjects in the DWEF group, with 1.47 ± 0.25 mg/L (fluoride concentrations in drinking water), the prevalence rates of myopia, hyperopia, and astigmatism were 38.5 % (95 % confidence interval [CI] = 32.1-45.3), 19.9 % (95 % CI = 15-26), and 41.6 % (95 % CI = 35.1-48.4), respectively. Of the 1194 subjects in the control group with 0.20 ± 0.18 mg/L, the prevalence of myopia, hyperopia, and astigmatism were 31.5 % (95 % CI = 28.9-34.2), 27.6 % (95 % CI = 25.1-30.3), and 45.6 % (95 % CI = 42.8-48.5), respectively. A statistically significant difference was not observed in the association of spherical equivalent and fluoride concentrations in drinking water (P = 0.84 > 0.05). This report provides the data of the refractive state of the residents consuming drinking water with excess amounts of fluoride in northern China. The refractive errors did not result from ingestion of mild excess amounts of fluoride in the drinking water.

  5. Periodic Application of Concurrent Error Detection in Processor Array Architectures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  6. Methodology of functionality selection for water management software and examples of its application.

    PubMed

    Vasilyev, K N

    2013-01-01

    When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on the work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included into the software product. Evaluation is achieved by considering each criterion and the weighting coefficients of each criterion in turn and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
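
    A minimal sketch of weighted-criteria scoring with normalisation, the general pattern the abstract describes; the criterion names, weights, and scores are placeholders (the paper's seven criteria are not listed in this record):

      import numpy as np

      # Rows: candidate software functions; columns: raw scores against the
      # selection criteria (3 placeholder criteria instead of the paper's 7).
      scores = np.array([
          [8.0, 3.0, 5.0],   # function A
          [6.0, 9.0, 2.0],   # function B
          [4.0, 5.0, 9.0],   # function C
      ])
      weights = np.array([0.5, 0.3, 0.2])   # relative criterion importance

      # Normalise each criterion column to [0, 1] so scales cannot dominate.
      norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))

      ranking = norm @ weights               # relative significance of each function
      for name, s in zip("ABC", ranking):
          print(f"function {name}: weighted score {s:.2f}")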

  7. Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Connolly, Joseph W.

    2006-01-01

    The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis for which Earth-based and Lunar-based radiometric data are added to compare these different architecture schemes, and quantify the benefits of an integrated approach, in how they can handle lunar surface mobility applications when near the Lunar South pole or on the Lunar Farside.

  8. On the merging of optical and SAR satellite imagery for surface water mapping applications

    NASA Astrophysics Data System (ADS)

    Markert, Kel N.; Chishtie, Farrukh; Anderson, Eric R.; Saah, David; Griffin, Robert E.

    2018-06-01

    Optical and Synthetic Aperture Radar (SAR) imagery from satellite platforms provide a means to discretely map surface water; however, the application of the two data sources in tandem has been inhibited by inconsistent data availability, the distinct physical properties that optical and SAR instruments sense, and dissimilar data delivery platforms. In this paper, we describe a preliminary methodology for merging optical and SAR data into a common data space. We apply our approach over a portion of the Mekong Basin, a region with highly variable surface water cover and persistent cloud cover, for surface water applications requiring dense time series analysis. The methods include the derivation of a representative index from both sensors that transforms data from disparate physical units (reflectance and backscatter) to a comparable dimensionless space applying a consistent water extraction approach to both datasets. The merging of optical and SAR data allows for increased observations in cloud prone regions that can be used to gain additional insight into surface water dynamics or flood mapping applications. This preliminary methodology shows promise for a common optical-SAR water extraction; however, data ranges and thresholding values can vary depending on data source, yielding classification errors in the resulting surface water maps. We discuss some potential future approaches to address these inconsistencies.

  9. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for
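
    The detection statistics named above (probability of detection, false alarm ratio, bias) can be computed from a paired reference/satellite series as in the sketch below; the synthetic data, noise model, and 0.5 mm/day wet threshold are all assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic daily reference rainfall (mm) and a noisy satellite estimate.
      truth = rng.gamma(0.4, 4.0, 365) * (rng.random(365) < 0.3)
      est = np.clip(truth + rng.normal(0, 1.0, 365), 0, None)

      wet = 0.5  # mm/day rain/no-rain threshold (assumed)
      hits = np.sum((truth >= wet) & (est >= wet))
      misses = np.sum((truth >= wet) & (est < wet))
      false_alarms = np.sum((truth < wet) & (est >= wet))

      pod = hits / (hits + misses)                 # probability of detection
      far = false_alarms / (hits + false_alarms)   # false alarm ratio
      bias = est.sum() / truth.sum()               # multiplicative total bias
      print(f"POD={pod:.2f}  FAR={far:.2f}  bias={bias:.2f}")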

  10. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1998-01-01

    We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

  11. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.

    2016-05-04

    It is commonly recognized that suspended-sediment concentrations in rivers can change rapidly in time and independently of water discharge during important sediment‑transporting events (for example, during floods); thus, suspended-sediment measurements at closely spaced time intervals are necessary to characterize suspended‑sediment loads. Because the manual collection of sufficient numbers of suspended-sediment samples required to characterize this variability is often time and cost prohibitive, several “surrogate” techniques have been developed for in situ measurements of properties related to suspended-sediment characteristics (for example, turbidity, laser-diffraction, acoustics). Herein, we present a new physically based method for the simultaneous measurement of suspended-silt-and-clay concentration, suspended-sand concentration, and suspended‑sand median grain size in rivers, using multi‑frequency arrays of single-frequency side‑looking acoustic-Doppler profilers. The method is strongly grounded in the extensive scientific literature on the incoherent scattering of sound by random suspensions of small particles. In particular, the method takes advantage of theory that relates acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We develop the theory and methods, and demonstrate the application of the method at six study sites on the Colorado River and Rio Grande, where large numbers of suspended-sediment samples have been collected concurrently with acoustic attenuation and backscatter measurements over many years. The method produces acoustical measurements of suspended-silt-and-clay and suspended-sand concentration (in units of mg/L), and acoustical measurements of suspended-sand median grain size (in units of mm) that are generally in good to excellent agreement with concurrent physical measurements of these quantities in the river cross sections at

  12. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  13. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
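
    A toy sketch of the one-step-ahead prediction idea: flag a value as a suspected SDC when its prediction residual exceeds an acceptance bound that adapts to recent residuals. The adaptive bound here is a simple stand-in for the paper's error feedback control, not the authors' exact formulation:

      import numpy as np

      def detect_sdc(series, radius=3.0):
          """Flag suspected silent data corruptions in a smooth HPC time series.

          One-step-ahead linear extrapolation: x_hat[t] = 2*x[t-1] - x[t-2].
          The acceptance bound adapts to recent prediction errors.
          """
          errs, flags = [], []
          for t in range(2, len(series)):
              pred = 2 * series[t - 1] - series[t - 2]
              err = abs(series[t] - pred)
              bound = radius * (np.mean(errs[-10:]) + 1e-12) if errs else np.inf
              if err > bound:
                  flags.append(t)       # suspected bit-flip / SDC
              else:
                  errs.append(err)      # feed back only "normal" errors
          return flags

      t = np.linspace(0, 4 * np.pi, 400)
      x = np.sin(t)                     # smooth stand-in for a simulation field
      x[250] += 0.8                     # injected corruption
      print(detect_sdc(x))              # -> [250, 251, 252]: the corrupted sample
                                        #    plus the two predictions that consume it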

  14. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815

  15. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  16. Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…

  17. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
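
    The two-fault-tree comparison can be made concrete with a minimal AND/OR gate evaluation; the gate structure follows the scald example above, while all event probabilities are invented for illustration:

      # Minimal AND/OR fault-tree evaluation for the bathing-scald example.
      def p_or(*ps):   # at least one independent input event occurs
          out = 1.0
          for p in ps:
              out *= (1.0 - p)
          return 1.0 - out

      def p_and(*ps):  # all independent input events occur
          out = 1.0
          for p in ps:
              out *= p
          return out

      p_hot_water_spike = 0.05   # supply temperature excursion (illustrative)
      p_no_supervision = 0.40    # caregiver absent (illustrative)
      p_valve_fails = 0.01       # scald valve fails to shut off flow (illustrative)

      # Before: scald requires a temperature spike AND no supervision.
      print("P(scald), no valve :", p_and(p_hot_water_spike, p_no_supervision))
      # After: a spike now yields the benign event "no water" unless the valve
      # itself also fails, so the valve failure joins the AND gate.
      print("P(scald), with valve:", p_and(p_hot_water_spike, p_no_supervision, p_valve_fails))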

  18. Application of simple adaptive control to water hydraulic servo cylinder system

    NASA Astrophysics Data System (ADS)

    Ito, Kazuhisa; Yamada, Tsuyoshi; Ikeo, Shigeru; Takahashi, Koji

    2012-09-01

    Although conventional model reference adaptive control (MRAC) achieves good tracking performance for cylinder control, the controller structure is much more complicated and has less robustness to disturbance in real applications. This paper discusses the use of simple adaptive control (SAC) for positioning a water hydraulic servo cylinder system. Compared with MRAC, SAC has a simpler and lower order structure, i.e., higher feasibility. The control performance of SAC is examined and evaluated on a water hydraulic servo cylinder system. With the recent increased concerns over global environmental problems, the water hydraulic technique using pure tap water as a pressure medium has become a new drive source comparable to electric, oil hydraulic, and pneumatic drive systems. This technique is also preferred because of its high power density, high safety against fire hazards in production plants, and easy availability. However, the main problems for precise control in a water hydraulic system are steady state errors and overshoot due to its large friction torque and considerable leakage flow. MRAC has been already applied to compensate for these effects, and better control performances have been obtained. However, there have been no reports on the application of SAC for water hydraulics. To make clear the merits of SAC, the tracking control performance and robustness are discussed based on experimental results. SAC is confirmed to give better tracking performance compared with PI control, and a control precision comparable to MRAC (within 10 μm of the reference position) and higher robustness to parameter change, despite the simple controller. The research results ensure a wider application of simple adaptive control in real mechanical systems.
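
    A toy sketch of the SAC structure described above on a first-order surrogate plant: adaptive gains on the output error, the reference-model state, and the command, each updated by integral adaptation driven by the output error. The plant and model coefficients and the adaptation rates are invented; this is not the authors' controller or the hydraulic cylinder dynamics:

      # Toy first-order plant (stand-in for the cylinder velocity loop):
      # x' = -a*x + b*u, with a and b unknown to the controller.
      a, b = 2.0, 1.5
      # Reference model the output should follow: xm' = -am*xm + bm*r.
      am, bm = 4.0, 4.0

      dt, T = 1e-3, 4.0
      gamma_e, gamma_x, gamma_u = 50.0, 20.0, 20.0   # adaptation rates (hand-tuned)

      x = xm = 0.0
      ke = kx = ku = 0.0            # adaptive gains, start from zero
      for i in range(int(T / dt)):
          r = 1.0 if (i * dt) % 2.0 < 1.0 else -1.0  # square-wave command
          ey = xm - x                                # output (tracking) error
          u = ke * ey + kx * xm + ku * r             # SAC control law
          ke += gamma_e * ey * ey * dt               # integral gain adaptation
          kx += gamma_x * ey * xm * dt
          ku += gamma_u * ey * r * dt
          x += dt * (-a * x + b * u)                 # plant
          xm += dt * (-am * xm + bm * r)             # reference model
      print(f"final tracking error: {abs(xm - x):.4f}")

    The controller needs no plant model at all, only three scalar gains, which is the simplicity argument the abstract makes against full MRAC.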

  19. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
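
    The role of replicate aliquots in separating a holding-time effect from measurement error can be sketched as follows; the synthetic counts, replicate scatter, and holding shift are assumptions for illustration, not the study's data:

      import numpy as np

      rng = np.random.default_rng(1)

      # 20 water samples, each split into 8 aliquots processed promptly and 8
      # processed after one day's holding (log10 CFU/mL). Synthetic data: the
      # true level varies by sample; replicate scatter is the measurement error.
      true_log = rng.uniform(2.0, 5.0, 20)
      meas_sd = 0.15                     # within-sample replicate SD (log10 units)
      hold_shift = -0.03                 # assumed small true effect of holding

      prompt = true_log[:, None] + rng.normal(0, meas_sd, (20, 8))
      held = true_log[:, None] + hold_shift + rng.normal(0, meas_sd, (20, 8))

      # Averaging 8 replicates shrinks measurement error by sqrt(8).
      diff = held.mean(axis=1) - prompt.mean(axis=1)
      print(f"mean log10 change on holding: {diff.mean():+.3f}")
      print(f"samples changing by >= 1 log: {(np.abs(diff) >= 1).sum()} of 20")
      # Without replication, a single-aliquot comparison has SD ~ sqrt(2)*meas_sd,
      # so apparent 1-log "changes" can be pure measurement error when the
      # replicate scatter is large.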

  20. Water displacement leg volumetry in clinical studies - A discussion of error sources

    PubMed Central

    2010-01-01

    Background: Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion: Placebo controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine Search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). As the most relevant error source the conduct of volumetry was identified. Because the practical use of available equipment varies, volume differences of more than 300 mL - many times the size of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary: Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require a thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short term repeat measurements has to be ensured. PMID:20070899

  1. Calibration and error analysis of a shallow water (less than 100 m) multibeam echo-sounding system

    NASA Astrophysics Data System (ADS)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks. Indeed, the finer the resolution of remote sensing instruments, the harder they are to calibrate. This is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor. We now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that the bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved in MBES and to identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying, and correcting systematic instrumental and installation errors. Hence, calibrating for variations of the speed of sound in the water column, which are natural in origin, is not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. This paper also aims to establish a model for the specific survey area that can calibrate the errors due to the instruments. We construct a patch-test procedure, identify the possible causes of erroneous sounding data, and calculate the error values needed to compensate for them. In general, the problem to be solved is the patch test's four corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch, and (4) yaw. Because these four corrections affect each other, we run each survey line
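
    The following sketch illustrates how patch-test corrections of the kind listed above might be applied to raw soundings: a boresight rotation for roll, pitch, and yaw, plus an along-track shift for GPS latency. The angles, latency, and beam coordinates are invented values, not results from the paper.

        # Sketch of applying patch-test corrections to beam soundings.
        import numpy as np

        def rot(roll, pitch, yaw):
            """Combined rotation matrix from boresight misalignment angles (radians)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        beams = np.array([[10.0, 0.0, 50.0], [-10.0, 0.0, 50.0]])   # x, y, z in ship frame (m)
        R = rot(np.radians(0.5), np.radians(-0.2), np.radians(1.0)) # angles from the patch test
        latency = 0.25                                              # assumed GPS latency (s)
        v_ship = np.array([4.0, 0.0, 0.0])                          # ship velocity (m/s)
        corrected = beams @ R.T + latency * v_ship                  # rotate, then shift along track
        print(corrected)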

  2. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
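
    A minimal sketch of the idea, assuming a hypothetical sensor-reading routine and an invented bit layout for the test mask variable:

        # Error injection through a test mask variable; the error bits and the
        # read_sensor routine are invented for illustration.
        ERR_TIMEOUT, ERR_CRC, ERR_RANGE = 0x1, 0x2, 0x4

        def read_sensor(test_mask=0x0):
            # test_mask == 0 means normal operation
            if test_mask & ERR_TIMEOUT:
                raise TimeoutError("injected timeout")
            if test_mask & ERR_RANGE:
                return 1e9                    # injected out-of-range value
            return 23.5                       # nominal reading

        # During testing, flip a bit and verify the application's error response.
        try:
            read_sensor(test_mask=ERR_TIMEOUT)
        except TimeoutError as exc:
            print("application handled:", exc)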

  3. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of the mean monthly precipitation was added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random error than to systematic error. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
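
    The corrupted input scenarios described above are easy to reproduce in outline; the sketch below generates both the systematic and random scenarios for a synthetic monthly precipitation series (the series and seed are illustrative).

        # Generating corrupted precipitation scenarios for a sensitivity study.
        import numpy as np

        rng = np.random.default_rng(1)
        precip = rng.gamma(2.0, 30.0, size=120)              # 10 years of monthly P (mm), synthetic

        # Systematic scenarios: add a fixed percentage of the mean monthly value.
        systematic = {f"+{p}% of mean": precip + precip.mean() * p / 100 for p in (5, 10, 15)}
        # Random scenarios: zero-mean Gaussian noise scaled to the monthly sd.
        sd = precip.std()
        random_runs = {f"{p}% of sd": precip + rng.normal(0, sd * p / 100, precip.size)
                       for p in (5, 10, 15, 20, 25)}
        # Each scenario would then be fed to the water balance model and the
        # recalibrated parameters/performance compared with the error-free run.
        print({k: round(v.mean(), 1) for k, v in systematic.items()})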

  4. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    PubMed Central

    Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The field tests provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance. PMID:22319323
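
    For reference, the standard pinhole-stereo relation Z = f·B/d gives a first-order depth quantization error of δZ ≈ Z²·δd/(f·B); the sketch below evaluates it for illustrative camera parameters, not the sensor's actual setup.

        # Back-of-envelope stereo depth quantization error.
        f_px = 800.0          # focal length in pixels (assumed)
        B = 0.30              # baseline in metres (assumed)
        delta_d = 0.5         # disparity quantization step in pixels (assumed)
        for Z in (5.0, 10.0, 20.0):
            err = Z ** 2 * delta_d / (f_px * B)   # first-order depth error (m)
            print(f"Z = {Z:4.1f} m -> depth error ~ {err:.2f} m")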

  6. Foot placement during error and pedal applications in naturalistic driving.

    PubMed

    Wu, Yuqing; Boyle, Linda Ng; McGehee, Daniel; Roe, Cheryl A; Ebe, Kazutoshi; Foley, James

    2017-02-01

    Data from a naturalistic driving study were used to examine foot placement during routine foot pedal movements and possible pedal misapplications. The study included four weeks of observations from 30 drivers, whose pedal responses were recorded and categorized. The foot movements associated with pedal misapplications and errors were the focus of the analyses. A random forest algorithm was used to predict the pedal application types based on the video observations, foot placements, drivers' characteristics, drivers' cognitive function levels, and anthropometric measurements. A repeated multinomial logit model was then used to estimate the likelihood of the foot placement given various driver characteristics and driving scenarios. The findings showed that prior foot location, the driver's seat position, and the drive sequence were all associated with incorrect foot placement during an event. The study showed that there is potential to develop a driver assistance system that can reduce the likelihood of a pedal error. Copyright © 2016 Elsevier Ltd. All rights reserved.
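
    A hedged sketch of the classification step: a random forest predicting pedal application type from a few features. The feature set and data are invented stand-ins for the study's video-derived variables.

        # Toy random-forest classification of pedal application type.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 4))   # e.g. prior foot location, seat position, drive sequence, age
        y = np.digitize(X[:, 0] + 0.5 * X[:, 1], [-0.5, 0.5])   # 0/1/2: correct, error, misapplication
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))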

  7. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round grating dividing error, rolling-wheel eccentricity, and surface shape errors, the paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all the influence factors above, and then corrects the non-circular angle measurement error of the rolling wheel. We performed software simulation and experiments; the results indicate that the composite error amendment method can improve the diameter measurement accuracy achievable with rolling-wheel theory. It has wide application prospects where measurement accuracy better than 5 μm/m is required.

  8. Validation, Edits, and Application Processing Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    The impact of quality assurance procedures on the correct award of Basic Educational Opportunity Grants (BEOGs) for 1979-1980 was assessed, and a model for detecting error-prone applications early in processing was developed. The Bureau of Student Financial Aid introduced new comments into the edit system in 1979 and expanded the pre-established…

  9. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques using series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of line-of-sight ~ subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
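
    The Monte Carlo propagation step can be sketched for a single pixel: resample interferogram noise, re-solve the weighted least-squares system linking interferograms to displacement increments, and take the spread of the solutions as the error estimate. The network and noise levels below are toy values.

        # Monte Carlo propagation of interferogram errors, single pixel.
        import numpy as np

        A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [1, 1, 0], [0, 1, 1]], float)  # interferograms -> displacement increments
        d_true = np.array([5.0, -2.0, 3.0])          # true increments (mm), toy values
        sigma = np.array([1.0, 1.0, 1.5, 2.0, 2.0])  # per-interferogram noise (mm), toy values
        W = np.diag(1 / sigma ** 2)                  # weights from interferogram variances

        rng = np.random.default_rng(3)
        sols = []
        for _ in range(2000):                        # resample noisy interferograms
            obs = A @ d_true + rng.normal(0, sigma)
            sols.append(np.linalg.solve(A.T @ W @ A, A.T @ W @ obs))
        print("estimated increment std (mm):", np.std(sols, axis=0).round(2))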

  10. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  11. Application of the epidemiological model in studying human error in aviation

    NASA Technical Reports Server (NTRS)

    Cheaney, E. S.; Billings, C. E.

    1981-01-01

    An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are decomposed into the events and factors pertinent to the model. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness, depending on the individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system, the analogy of the disease process is the predilection for error of the human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of previous investigations, each of which manifests the application of the epidemiological method, exemplifies its use and effectiveness.

  12. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted when m is small because of the incomplete structure of the attractor. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; for hurricanes, however, geopotential height is most sensitive, followed by precipitable water.

  13. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
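
    A minimal sketch of a Dalton-type evaporation estimate with an ad hoc relative-humidity adjustment; the coefficients and functional form are illustrative assumptions, not the paper's calibrated model.

        # Dalton-type open-water evaporation with an assumed RH modification.
        import math

        def e_sat(t_c):
            """Saturation vapour pressure (kPa), Magnus formula."""
            return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

        def evaporation(t_c, rh, wind):
            a, b, c = 0.13, 0.094, 0.5                        # assumed empirical coefficients
            vpd = e_sat(t_c) * (1 - rh)                       # vapour pressure deficit (kPa)
            return (a + b * wind) * vpd * (1 + c * (1 - rh))  # mm/day, RH-modified Dalton form

        print(f"E = {evaporation(t_c=28.0, rh=0.45, wind=2.5):.2f} mm/day")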

  14. Are water markets globally applicable?

    NASA Astrophysics Data System (ADS)

    Endo, Takahiro; Kakinuma, Kaoru; Yoshikawa, Sayaka; Kanae, Shinjiro

    2018-03-01

    Water scarcity is a global concern that necessitates a global perspective, but it is also the product of multiple regional issues that require regional solutions. Water markets constitute a regionally applicable non-structural measure to counter water scarcity that has received the attention of academics and policy-makers, but there is no global view on their applicability. We present the global distribution of potential nations and states where water markets could be instituted in a legal sense, by investigating 296 water laws internationally, with special reference to a minimum set of key rules: legalization of water reallocation, the separation of water rights and landownership, and the modification of the cancellation rule for non-use. We also suggest two additional globally distributed prerequisites and policy implications: the predictability of the available water before irrigation periods and public control of groundwater pumping throughout its jurisdiction.

  15. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, D.W.; Forester, J.A.; Bley, D.C.

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators, and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  16. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    ERIC Educational Resources Information Center

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
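
    A classic example of the kind of elementary-algebra construction such a unit covers is the Hamming(7,4) code, which corrects any single bit error; a minimal sketch:

        # Hamming(7,4): encode 4 message bits, flip one bit, locate and correct it.
        import numpy as np

        G = np.array([[1,0,0,0,1,1,0], [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1], [0,0,0,1,1,1,1]])   # generator matrix [I4 | P]
        H = np.array([[1,1,0,1,1,0,0], [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])                    # parity-check matrix [P^T | I3]

        msg = np.array([1, 0, 1, 1])
        code = msg @ G % 2
        code[2] ^= 1                                  # flip one bit (channel error)
        syndrome = H @ code % 2
        if syndrome.any():                            # nonzero syndrome marks the error column
            pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            code[pos] ^= 1                            # correct the flipped bit
        print("decoded message:", code[:4])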

  17. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  18. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We

  19. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
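
    Two of the three approaches are easy to sketch for a simple case, the standard error of exp(beta) from a least-squares slope; the data below are synthetic.

        # Delta method vs bootstrap for the SE of a function of an estimate.
        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.normal(size=200)
        y = 0.5 * x + rng.normal(size=200)

        coef, cov = np.polyfit(x, y, 1, cov=True)
        beta, se_beta = coef[0], np.sqrt(cov[0, 0])
        print("delta method SE:", np.exp(beta) * se_beta)   # SE[g(b)] ~ |g'(b)| * SE[b]

        idx = rng.integers(0, len(x), (1000, len(x)))       # bootstrap resamples
        boots = np.array([np.polyfit(x[i], y[i], 1)[0] for i in idx])
        print("bootstrap SE  :", np.exp(boots).std())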

  20. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping error is a common error in InSAR processing that can seriously degrade the accuracy of monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method successfully corrects phase unwrapping errors in practical applications.

  1. Analysis of counting errors in the phase/Doppler particle analyzer

    NASA Technical Reports Server (NTRS)

    Oldenburg, John R.

    1987-01-01

    NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occur during the droplet signal time. The magnitude of the coincidence loss can be determined, and for losses below 15 percent, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
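
    The Poisson coincidence calculation can be sketched directly: the chance that at least one additional droplet arrives during a droplet's signal time tau is 1 - exp(-lambda*tau). The signal time below is an assumed value, not the instrument's specification.

        # Poisson coincidence-loss estimate as a function of sample rate.
        import numpy as np

        tau = 5e-6                                   # droplet signal time (s), assumed
        rates = np.array([10, 1e3, 1e4, 7e4])        # samples/sec
        loss = 1 - np.exp(-rates * tau)              # P(>= 1 extra event within tau)
        for r, l in zip(rates, loss):
            print(f"{r:8.0f} samples/s -> coincidence loss {100 * l:5.2f}%")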

  2. Cl app: Android-based application program for monitoring the residue chlorine in water

    NASA Astrophysics Data System (ADS)

    Intaravanne, Yuttana; Sumriddetchkajorn, Sarun; Porntheeraphat, Supanit; Chaitavon, Kosom; Vuttivong, Sirajit

    2015-07-01

    A farmer typically uses a cheap chemical, chlorine, to destroy the cell structure of unwanted organisms and remove some plant effluents in a baby shrimp farm. A color change in the reaction between chlorine and a chemical indicator is used to monitor the residual chlorine in water before a baby shrimp is released into a pond. To reduce errors in color reading, our previous work showed how a smartphone can function as a color reader for estimating the chlorine concentration in water. In this paper, we describe improvements to the interior configuration of our prototype and its distribution to several baby shrimp farms. In the future, we plan to make it available worldwide through the online market and to develop more application programs for monitoring other chemical substances.

  3. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and
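
    The core regression step of the framework, mapping cheap error indicators to a surrogate-model error in a quantity of interest, can be sketched with synthetic data; the indicator set and error relationship below are invented.

        # Regression from "error indicators" to surrogate-model QoI error.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(5)
        features = rng.normal(size=(2000, 10))        # error indicators from the surrogate
        qoi_error = features[:, 0] * features[:, 1] + 0.1 * rng.normal(size=2000)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(features[:1500], qoi_error[:1500])  # training: high-fidelity vs surrogate runs
        pred = model.predict(features[1500:])         # use 1: corrected_qoi = surrogate_qoi + pred
        print("held-out R^2:", model.score(features[1500:], qoi_error[1500:]))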

  5. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the

  6. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely used in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  7. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation, the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
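
    In the linearized Gaussian setting described, the law of error propagation is just P_pos = J · P_elem · J^T; a toy numerical sketch follows, with all matrices invented for illustration.

        # Linearized propagation of an element covariance to sky-plane position.
        import numpy as np

        P_elem = np.diag([1e-8, 4e-8, 1e-9])   # element covariance (toy, 3 elements)
        J = np.array([[2.0e3, 0.5e3, 0.0],     # Jacobian d(position)/d(elements) at the epoch
                      [0.1e3, 3.0e3, 1.0e3]])
        P_pos = J @ P_elem @ J.T               # 2x2 positional covariance
        evals, _ = np.linalg.eigh(P_pos)       # principal axes of the uncertainty ellipse
        print("uncertainty ellipse semi-axes:", np.sqrt(evals))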

  8. The prediction of satellite ephemeris errors as they result from surveillance system measurement errors

    NASA Astrophysics Data System (ADS)

    Simmons, B. E.

    1981-08-01

    This report derives equations predicting satellite ephemeris error as a function of measurement errors of space-surveillance sensors. These equations lend themselves to rapid computation with modest computer resources. They are applicable over prediction times such that measurement errors, rather than uncertainties of atmospheric drag and of Earth shape, dominate in producing ephemeris error. This report describes the specialization of these equations underlying the ANSER computer program, SEEM (Satellite Ephemeris Error Model). The intent is that this report be of utility to users of SEEM for interpretive purposes, and to computer programmers who may need a mathematical point of departure for limited generalization of SEEM.

  9. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
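
    The distinction is easy to demonstrate numerically: additive residuals grow with rain rate when the true errors are multiplicative, while log-space residuals stay roughly homoscedastic. The data below are synthetic, not satellite measurements.

        # Additive vs multiplicative residuals for a skewed positive quantity.
        import numpy as np

        rng = np.random.default_rng(6)
        truth = rng.gamma(2.0, 5.0, 5000)                # skewed daily precipitation (mm)
        measured = 0.9 * truth ** 1.05 * np.exp(rng.normal(0, 0.3, truth.size))

        resid_add = measured - truth                     # additive error model residuals
        print("additive residual sd, light vs heavy rain:",
              resid_add[truth < 5].std().round(2), resid_add[truth > 20].std().round(2))
        resid_mul = np.log(measured / truth)             # multiplicative model residuals
        print("multiplicative residual sd, light vs heavy rain:",
              resid_mul[truth < 5].std().round(2), resid_mul[truth > 20].std().round(2))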

  10. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools, and other products. The precision of an inductosyn is characterized by its error, and error measurement is an important problem in the production and application of the inductosyn. At present, the error of the inductosyn is mainly obtained by manual measurement, which suffers from high labour intensity, easily introduced operator errors, and poor repeatability. To solve these problems, a new automatic measurement method is put forward in this paper, based on a high-precision optical dividing head. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. This method overcomes the measuring and calculating errors caused by human factors and makes the measuring process quicker, more exact, and more reliable. Experiments prove that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).

  11. A framework for human-hydrologic system model development integrating hydrology and water management: application to the Cutzamala water system in Mexico

    NASA Astrophysics Data System (ADS)

    Wi, S.; Freeman, S.; Brown, C.

    2017-12-01

    This study presents a general approach to developing computational models of human-hydrologic systems in which human modification of hydrologic surface processes is significant or dominant. A river basin system is represented by a network of human-hydrologic response units (HHRUs), identified from the locations where river regulation occurs (e.g., reservoir operation and diversions). Natural and human processes in HHRUs are simulated in a holistic framework that integrates component models representing rainfall-runoff, river routing, reservoir operation, flow diversion, and water use processes. We illustrate the approach in a case study of the Cutzamala water system (CWS) in Mexico, a complex inter-basin water transfer system supplying the Mexico City Metropolitan Area (MCMA). The human-hydrologic system model for the CWS (CUTZSIM) is evaluated against streamflow and reservoir storages measured across the CWS and against water supplied to the MCMA. CUTZSIM improves the representation of hydrology and river-operation interaction and, in so doing, advances the evaluation of system-wide water management consequences under altered climatic and demand regimes. The integrated modeling framework enables evaluation and simulation of model errors throughout the river basin, including errors in the representation of the human component processes. Heretofore, model error evaluation, predictive error intervals, and the resultant improved understanding have been limited to hydrologic processes. The general framework represents an initial step towards fuller understanding and prediction of the many and varied processes that determine the hydrologic fluxes and state variables in real river basins.

  12. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty of 5 mas (24.2 nrad) reached by the classical calibration method was improved to 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders, demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals additional advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from the determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for improving them, and for specifying precautions to be taken during measurements.

  13. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
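
    The attenuation effect reported above can be illustrated numerically: under classical measurement error, the naive regression slope shrinks by the attenuation factor lambda = var(truth)/var(measure), and regression calibration rescales it. All values below are synthetic, not OPEN Study data.

        # Attenuation and regression calibration under classical measurement error.
        import numpy as np

        rng = np.random.default_rng(7)
        truth = rng.normal(1.6, 0.2, 5000)               # true physical activity level
        q = truth + rng.normal(0, 0.25, truth.size)      # questionnaire PAL (classical error)
        outcome = 2.0 * truth + rng.normal(0, 1.0, truth.size)

        naive = np.cov(q, outcome)[0, 1] / q.var()       # slope using the error-prone measure
        lam = truth.var() / q.var()                      # attenuation factor
        print(f"naive slope {naive:.2f} ~ true slope 2.0 x lambda {lam:.2f}")
        print(f"regression-calibrated slope: {naive / lam:.2f}")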

  14. 40 CFR 227.31 - Applicable marine water quality criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 227.31 Applicable marine water quality criteria. Applicable marine water quality criteria means the criteria given for marine waters in the EPA publication “Quality Criteria for Water” as published in 1976...

  15. 40 CFR 227.31 - Applicable marine water quality criteria.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 227.31 Applicable marine water quality criteria. Applicable marine water quality criteria means the criteria given for marine waters in the EPA publication “Quality Criteria for Water” as published in 1976...

  16. 40 CFR 227.31 - Applicable marine water quality criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 227.31 Applicable marine water quality criteria. Applicable marine water quality criteria means the criteria given for marine waters in the EPA publication “Quality Criteria for Water” as published in 1976...

  17. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
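
    A toy sketch of the propagation idea, with the system model reduced to an adjacency list of functional blocks; the blocks and wiring are invented for illustration.

        # Propagating a signal value error through downstream functional blocks.
        from collections import deque

        downstream = {"sensor": ["filter"], "filter": ["controller"],
                      "controller": ["actuator"], "actuator": []}

        def affected_blocks(source):
            """Return every block reachable from a block emitting an erroneous signal."""
            seen, queue = set(), deque([source])
            while queue:
                block = queue.popleft()
                for nxt in downstream[block]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        print("error at 'sensor' impacts:", affected_blocks("sensor"))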

  18. An integrated model for the assessment of global water resources Part 2: Applications and assessments

    NASA Astrophysics Data System (ADS)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    To assess global water resources from the perspective of subannual variation in water availability and water use, an integrated water resources model was developed. In a companion report, we presented the global meteorological forcing input used to drive the model and six modules, namely, the land surface hydrology module, the river routing module, the crop growth module, the reservoir operation module, the environmental flow requirement module, and the anthropogenic withdrawal module. Here, we present the results of the model application and global water resources assessments. First, the timing and volume of simulated agricultural water use were examined, because agricultural use constitutes approximately 85% of total consumptive water withdrawal in the world. The estimated crop calendar showed good agreement with earlier reports for wheat, maize, and rice in major countries of production. In major countries, the error in the planting date was ±1 mo, but there were some exceptional cases. The estimated irrigation water withdrawal also showed fair agreement with country statistics, but tended to be underestimated in countries in the Asian monsoon region. The results indicate the validity of the model and the input meteorological forcing, because site-specific parameter tuning was not used in the series of simulations. Finally, global water resources were assessed on a subannual basis using a newly devised index. This index identified water-stressed regions that went undetected in earlier studies. These regions, which are indicated by a gap in the subannual distribution of water availability and water use, include the Sahel, the Asian monsoon region, and southern Africa. The simulation results show that the reservoir operations of major reservoirs (>1 km3) and the allocation of environmental flow requirements can alter the population under high water stress by approximately -11% to +5% globally. The integrated model is applicable to assessments of various global

  19. Application of digital profile modeling techniques to ground-water solute transport at Barstow, California

    USGS Publications Warehouse

    Robson, Stanley G.

    1978-01-01

    This study investigated the use of a two-dimensional profile-oriented water-quality model for the simulation of head and water-quality changes through the saturated thickness of an aquifer. The profile model is able to simulate confined or unconfined aquifers with nonhomogeneous anisotropic hydraulic conductivity, nonhomogeneous specific storage and porosity, and nonuniform saturated thickness. An aquifer may be simulated under either steady or nonsteady flow conditions provided that the ground-water flow path along which the longitudinal axis of the model is oriented does not move in the aquifer during the simulation time period. The profile model parameters are more difficult to quantify than are the corresponding parameters for an areal-oriented water-quality model. However, the sensitivity of the profile model to the parameters may be such that the normal error of parameter estimation will not preclude obtaining acceptable model results. Although the profile model has the advantage of being able to simulate vertical flow and water-quality changes in a single- or multiple-aquifer system, the types of problems to which it can be applied are limited by the requirements that (1) the ground-water flow path remain oriented along the longitudinal axis of the model and (2) any subsequent hydrologic factors to be evaluated using the model be located along the land-surface trace of the model. Simulation of hypothetical ground-water management practices indicates that the profile model is applicable to problem-oriented studies and can provide quantitative results applicable to a variety of management practices. In particular, simulations of the movement and dissolved-solids concentration of a zone of degraded ground-water quality near Barstow, Calif., indicate that halting subsurface disposal of treated sewage effluent in conjunction with pumping a line of fully penetrating wells would be an effective means of controlling the movement of degraded ground water.

  20. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications

    DOE PAGES

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.

    2015-12-02

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  1. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.

    PubMed

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P

    2015-12-14

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  2. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) is an upcoming satellite mission (scheduled for launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov Chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how the errors in water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing water-surface error above 10 cm leads to a corresponding sharper increase of errors in bathymetry and roughness. Increasing slope error above 1.5 cm/km leads to a
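
    The record's Bayesian MCMC scheme is not reproduced here, but the dominance of bathymetry and roughness uncertainty is easy to see with a forward Monte Carlo through a wide-channel Manning relation (all priors and error magnitudes below are illustrative assumptions, not the study's values):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # SWOT-like observables with assumed measurement errors.
        width = 250.0 + rng.normal(0.0, 10.0, n)        # m
        slope = 1.0e-4 + rng.normal(0.0, 1.5e-5, n)     # m/m (1.5 cm/km error)
        slope = np.clip(slope, 1.0e-6, None)            # keep slopes physical

        # Unobserved quantities drawn from assumed priors.
        depth = rng.uniform(2.0, 6.0, n)                # bathymetry (m)
        n_man = rng.uniform(0.02, 0.05, n)              # Manning roughness

        # Wide-channel Manning equation: Q = (1/n) W H^(5/3) S^(1/2)
        q = width * depth**(5.0 / 3.0) * np.sqrt(slope) / n_man

        # Re-run with the measurements held at their means: the remaining
        # spread is the share attributable to bathymetry and roughness alone.
        q_bn = 250.0 * depth**(5.0 / 3.0) * np.sqrt(1.0e-4) / n_man

        print(f"total spread (CV): {q.std() / q.mean():.2f}")
        print(f"variance share from bathymetry+roughness: {q_bn.var() / q.var():.2f}")

    In this sketch nearly all of the discharge variance comes from the (depth, n) priors rather than from the width and slope errors, echoing the dominance reported above.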

  3. Analytical calculation of electrolyte water content of a Proton Exchange Membrane Fuel Cell for on-board modelling applications

    NASA Astrophysics Data System (ADS)

    Ferrara, Alessandro; Polverino, Pierpaolo; Pianese, Cesare

    2018-06-01

    This paper proposes an analytical model of the water content of the electrolyte of a Proton Exchange Membrane Fuel Cell. The model is designed by accounting for several simplifying assumptions, which make the model suitable for on-board/online water management applications, while ensuring a good accuracy of the considered phenomena with respect to advanced numerical solutions. The achieved analytical solution, expressing electrolyte water content, is compared with that obtained by means of a complex numerical approach used to solve the same mathematical problem. The achieved results show that the mean error is below 5% for electrode water content values ranging from 2 to 15 (given as boundary conditions), and it does not exceed 0.26% for electrode water content above 5. These results prove the capability of the solution to correctly model electrolyte water content at any operating condition, aiming at integration into more complex frameworks (e.g., cell or stack models) related to fuel cell simulation, monitoring, control, diagnosis and prognosis.

  4. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
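
    The qualitative difference between the two error types deserves a concrete illustration. A minimal sketch (simulated data, not the study's mixed-model machinery): classical error attenuates a regression slope by the reliability ratio, while pure Berkson error leaves it unbiased.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 50_000, 2.0

        # Classical error: observed = truth + noise -> slope is attenuated
        # by the reliability ratio var(x) / (var(x) + var(u)).
        x = rng.normal(0.0, 1.0, n)                    # true exposure
        y = beta * x + rng.normal(0.0, 1.0, n)         # outcome
        w = x + rng.normal(0.0, 1.0, n)                # noisy measurement
        slope_classical = np.cov(w, y)[0, 1] / w.var()

        # Berkson error: truth = observed + noise -> slope stays unbiased.
        w_b = rng.normal(0.0, 1.0, n)                  # e.g. fixed-site value
        x_b = w_b + rng.normal(0.0, 1.0, n)            # individual truth
        y_b = beta * x_b + rng.normal(0.0, 1.0, n)
        slope_berkson = np.cov(w_b, y_b)[0, 1] / w_b.var()

        print(f"classical: {slope_classical:.2f} (true slope {beta}), "
              f"berkson: {slope_berkson:.2f}")

    With equal truth and error variances the classical slope halves while the Berkson slope is unchanged; the mixture case studied in the record combines both behaviors.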

  5. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions, including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS
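
    Localization of this kind boils down to damping the spurious long-range sample covariances that a small ensemble cannot resolve. A toy sketch (a Gaussian taper is used below for brevity where Gaspari-Cohn is the standard compactly supported choice; all sizes are illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        n_grid, n_ens = 100, 20                        # many cells, few members

        # Toy ensemble: a smooth large-scale signal plus member noise.
        x = np.linspace(0.0, 1.0, n_grid)
        signal = np.sin(2.0 * np.pi * x)
        ens = signal * rng.normal(1.0, 0.3, (n_ens, 1)) \
              + rng.normal(0.0, 0.1, (n_ens, n_grid))

        # Sample covariance: rank <= n_ens - 1, noisy long-range terms.
        p_sample = np.cov(ens, rowvar=False)

        # Local analysis: damp covariances with distance via a taper.
        dist = np.abs(x[:, None] - x[None, :])
        taper = np.exp(-(dist / 0.2) ** 2)
        p_local = p_sample * taper                     # Schur (element-wise) product

        print("rank before:", np.linalg.matrix_rank(p_sample),
              "rank after:", np.linalg.matrix_rank(p_local))

    The taper both suppresses the unphysical long-range terms and restores full rank to the localized covariance, which is what stabilizes the inversion inside the filter.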

  6. Chemical application strategies to protect water quality.

    PubMed

    Rice, Pamela J; Horgan, Brian P; Barber, Brian L; Koskinen, William C

    2018-07-30

    Management of turfgrass on golf courses and athletic fields often involves application of plant protection products to maintain or enhance turfgrass health and performance. However, the transport of fertilizer and pesticides with runoff to adjacent surface waters can enhance algal blooms, promote eutrophication and may have negative impacts on sensitive aquatic organisms and ecosystems. Thus, we evaluated the effectiveness of chemical application setbacks to reduce the off-site transport of chemicals with storm runoff. Experiments with water soluble tracer compounds confirmed an increase in application setback distance resulted in a significant increase in the volume of runoff measured before first off-site chemical detection, as well as a significant reduction in the total percentage of applied chemical transported with the storm runoff. For example, implementation of a 6.1 m application setback reduced the total percentage of an applied water soluble tracer by 43%, from 18.5% of applied to 10.5% of applied. Evaluation of chemographs revealed the efficacy of application setbacks could be observed with storms resulting in lesser (e.g. 100 L) and greater (e.g. > 300 L) quantities of runoff. Application setbacks offer turfgrass managers a mitigation approach that requires no additional resources or time inputs and may serve as an alternative practice when buffers are less appropriate for land management objectives or site conditions. Characterizing potential contamination of surface waters and developing strategies to safeguard water quality will help protect the environment and improve water resource security. This information is useful to grounds superintendents for designing chemical application strategies to maximize environmental stewardship. The data will also be useful to scientists and regulators working with chemical transport and risk models. Copyright © 2018. Published by Elsevier Inc.

  7. Measurement of tokamak error fields using plasma response and its applicability to ITER

    DOE PAGES

    Strait, Edward J.; Buttery, Richard J.; Casper, T. A.; ...

    2014-04-17

    The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed “error fields”. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER’s diagnostic systems. Furthermore, the use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible.

  8. 78 FR 20252 - Water Quality Standards; Withdrawal of Certain Federal Water Quality Criteria Applicable to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-04

    ... aquatic life water quality criteria applicable to waters of New Jersey, Puerto Rico, and California's San Francisco Bay. In 1992, EPA promulgated the National Toxics Rule or NTR to establish numeric water quality... Water Quality Standards; Withdrawal of Certain Federal Water Quality Criteria Applicable to California...

  9. The importance of robust error control in data compression applications

    NASA Technical Reports Server (NTRS)

    Woolley, S. I.

    1993-01-01

    Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviors. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
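
    The fragility being described is easy to demonstrate with any off-the-shelf compressor. A small sketch using Python's zlib (DEFLATE, i.e., LZ77 plus Huffman coding): a single flipped bit in the channel typically makes the rest of the stream undecodable or corrupt.

        import zlib

        message = b"efficiently compressed data carries almost no redundancy " * 20
        packed = bytearray(zlib.compress(message))
        packed[40] ^= 0x01                 # one flipped bit in the channel

        try:
            out = zlib.decompress(bytes(packed))
            print("stream decoded, data intact:", out == message)
        except zlib.error as exc:
            print("stream is undecodable:", exc)

    Whether the corruption is caught by the checksum or surfaces as an exception partway through, the single flipped bit has destroyed far more than one bit of information, which is exactly the intolerance the record describes.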

  10. Water resources by orbital remote sensing: Examples of applications

    NASA Technical Reports Server (NTRS)

    Martini, P. R. (Principal Investigator)

    1984-01-01

    Selected applications of orbital remote sensing to water resources undertaken by INPE are described. General specifications of Earth application satellites and technical characteristics of LANDSAT 1, 2, 3, and 4 subsystems are described. Spatial, temporal and spectral image attributes of water as well as methods of image analysis for applications to water resources are discussed. Selected examples refer to flood monitoring, analysis of suspended sediments in water, spatial distribution of pollutants, inventory of surface water bodies and mapping of alluvial aquifers.

  11. Crop modeling applications in agricultural water management

    USGS Publications Warehouse

    Kisekka, Isaya; DeJonge, Kendall C.; Ma, Liwang; Paz, Joel; Douglas-Mankin, Kyle R.

    2017-01-01

    This article introduces the fourteen articles that comprise the “Crop Modeling and Decision Support for Optimizing Use of Limited Water” collection. This collection was developed from a special session on crop modeling applications in agricultural water management held at the 2016 ASABE Annual International Meeting (AIM) in Orlando, Florida. In addition, other authors who were not able to attend the 2016 ASABE AIM were also invited to submit papers. The articles summarized in this introductory article demonstrate a wide array of applications in which crop models can be used to optimize agricultural water management. The following section titles indicate the topics covered in this collection: (1) evapotranspiration modeling (one article), (2) model development and parameterization (two articles), (3) application of crop models for irrigation scheduling (five articles), (4) coordinated water and nutrient management (one article), (5) soil water management (two articles), (6) risk assessment of water-limited irrigation management (one article), and (7) regional assessments of climate impact (two articles). Changing weather and climate, increasing population, and groundwater depletion will continue to stimulate innovations in agricultural water management, and crop models will play an important role in helping to optimize water use in agriculture.

  12. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
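
    The paper's direct syndrome decoder is hardware-oriented, but the end-to-end behavior (correcting any two byte errors in a codeword) can be shown in software. A sketch assuming the third-party reedsolo package (pip install reedsolo); recent versions of RSCodec.decode return a (message, codeword, errata positions) tuple:

        from reedsolo import RSCodec

        rsc = RSCodec(4)                     # 4 parity bytes: corrects 2 byte errors
        codeword = rsc.encode(b"memory word")

        corrupted = bytearray(codeword)
        corrupted[0] ^= 0xFF                 # first byte error
        corrupted[5] ^= 0x0F                 # second byte error

        # Recent reedsolo versions return (message, codeword, errata positions).
        decoded = rsc.decode(bytes(corrupted))[0]
        assert bytes(decoded) == b"memory word"
        print("corrected:", bytes(decoded))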

  13. NASA Data for Water Resources Applications

    NASA Technical Reports Server (NTRS)

    Toll, David; Houser, Paul; Arsenault, Kristi; Entin, Jared

    2004-01-01

    Water Management Applications is one of twelve elements in the Earth Science Enterprise National Applications Program. NASA Goddard Space Flight Center is supporting the Applications Program through partnering with other organizations to use NASA project results, such as from satellite instruments and Earth system models, to enhance the organizations' critical needs. The focus thus far has been: 1) estimating water storage, including snowpack and soil moisture, 2) modeling and predicting water fluxes such as evapotranspiration (ET), precipitation and river runoff, and 3) remote sensing of water quality, including both point source (e.g., turbidity and productivity) and non-point source (e.g., land cover conversion such as forest to agriculture yielding higher nutrient runoff). The objectives of the partnering cover three steps: 1) Evaluation, 2) Verification and Validation, and 3) Benchmark Report. We are working with U.S. federal agencies including the Environmental Protection Agency (EPA), the Bureau of Reclamation (USBR) and the Department of Agriculture (USDA). We are using several of their Decision Support Systems (DSS) tools. This includes the DSS support tools BASINS used by EPA, Riverware and AWARDS ET ToolBox by USBR, and SWAT by USDA and EPA. Regional application sites using NASA data across the U.S. are currently being evaluated for the DSS tools. The NASA data emphasized thus far are from the Land Data Assimilation Systems (LDAS) and MODIS satellite products. We are currently in the first two steps of evaluation and verification and validation.

  14. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  15. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  16. Quasi-eccentricity error modeling and compensation in vision metrology

    NASA Astrophysics Data System (ADS)

    Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin

    2018-04-01

    Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error which needs to be compensated in high-accuracy measurement. In this study, the impact of the lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error turns into a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.

  17. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  18. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  19. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    NASA Astrophysics Data System (ADS)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevents the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (~5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of comparable sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.

  20. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

    Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order 10^-4 in all hospitals. Three major organizational factors were found to underlie medical
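
    The bookkeeping behind the error rates is simple enough to state in a few lines (the counts below are invented for illustration; only the order of magnitude matches the record):

        # error rate = annual incident reports for a module practice
        #              / annual count of that module practice
        annual_counts = {"medication": 1_200_000, "injection": 300_000}
        incident_reports = {"medication": 150, "injection": 45}

        error_rates = {m: incident_reports[m] / annual_counts[m]
                       for m in annual_counts}
        for module, rate in error_rates.items():
            print(f"{module}: {rate:.1e}")      # both on the order of 1e-4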

  1. The Greenwich Photo-heliographic Results (1874 - 1976): Summary of the Observations, Applications, Datasets, Definitions and Errors

    NASA Astrophysics Data System (ADS)

    Willis, D. M.; Coffey, H. E.; Henwood, R.; Erwin, E. H.; Hoyt, D. V.; Wild, M. N.; Denig, W. F.

    2013-11-01

    The measurements of sunspot positions and areas that were published initially by the Royal Observatory, Greenwich, and subsequently by the Royal Greenwich Observatory (RGO), as the Greenwich Photo-heliographic Results (GPR), 1874-1976, exist in both printed and digital forms. These printed and digital sunspot datasets have been archived in various libraries and data centres. Unfortunately, however, typographic, systematic and isolated errors can be found in the various datasets. The purpose of the present paper is to begin the task of identifying and correcting these errors. In particular, the intention is to provide in one foundational paper all the necessary background information on the original solar observations, their various applications in scientific research, the format of the different digital datasets, the necessary definitions of the quantities measured, and the initial identification of errors in both the printed publications and the digital datasets. Two companion papers address the question of specific identifiable errors; namely, typographic errors in the printed publications, and both isolated and systematic errors in the digital datasets. The existence of two independently prepared digital datasets, which both contain information on sunspot positions and areas, makes it possible to outline a preliminary strategy for the development of an even more accurate digital dataset. Further work is in progress to generate an extremely reliable sunspot digital dataset, based on the programme of solar observations supported for more than a century by the Royal Observatory, Greenwich, and the Royal Greenwich Observatory. This improved dataset should be of value in many future scientific investigations.

  2. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  3. The Error Reporting in the ATLAS TDAQ System

    NASA Astrophysics Data System (ADS)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of a given language to simplify end-user program writing. For example, as C++ lacks language support for concisely declaring rich exception hierarchies, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one

  4. Velocity- and pointing-error measurements of a 300 000-r/min self-bearing permanent-magnet motor for optical applications

    NASA Astrophysics Data System (ADS)

    Breitkopf, Sven; Lilienfein, Nikolai; Achtnich, Timon; Zwyssig, Christof; Tünnermann, Andreas; Pupeza, Ioachim; Limpert, Jens

    2018-06-01

    Compact, ultra-high-speed self-bearing permanent-magnet motors enable a wide scope of applications, including an increasing number of optical ones. For implementation in an optical setup, the rotors have to satisfy high demands regarding their velocity and pointing errors. Only a restricted number of measurements of these parameters exist, and only at relatively low velocities. This manuscript presents the measurement of the velocity and pointing errors at rotation frequencies up to 5 kHz. The acquired data allow us to identify the rotor drive as the main source of velocity variations, with fast fluctuations of up to 3.4 ns (RMS) and slow drifts of 23 ns (RMS) over ~120 revolutions at 5 kHz in vacuum. At the same rotation frequency, the pointing fluctuated by 12 μrad (RMS) and 33 μrad (peak-to-peak) over ~10 000 round trips. To the best of our knowledge, this constitutes the first measurement of velocity and pointing errors at multi-kHz rotation frequencies and will allow potential adopters to evaluate the feasibility of such rotor drives for their application.

  5. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions was made. These weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms that redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Nino episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere. When directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.

  6. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  7. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
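
    The claimed equivalence is easy to verify numerically. A sketch assuming the simple additive linear error model data = a + b*truth + eps with Gaussian eps (the symbols are illustrative, not the paper's notation):

        import numpy as np

        rng = np.random.default_rng(3)
        truth = rng.gamma(2.0, 2.0, 100_000)

        a, b, sigma = 0.5, 0.9, 1.0                     # error-model parameters
        data = a + b * truth + rng.normal(0.0, sigma, truth.size)

        # The common metrics follow from (a, b, sigma) and the truth moments.
        mu, var = truth.mean(), truth.var()
        bias_model = a + (b - 1.0) * mu
        mse_model = bias_model**2 + (b - 1.0)**2 * var + sigma**2
        corr_model = b * np.sqrt(var) / np.sqrt(b**2 * var + sigma**2)

        print(f"bias {np.mean(data - truth):+.3f} vs model {bias_model:+.3f}")
        print(f"mse  {np.mean((data - truth)**2):.3f} vs model {mse_model:.3f}")
        print(f"corr {np.corrcoef(data, truth)[0, 1]:.3f} vs model {corr_model:.3f}")

    The three metrics fall out of (a, b, sigma) and the first two moments of the truth, which is the sense in which the error model carries strictly more information than the metrics.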

  8. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  9. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  10. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
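
    The criterion at the heart of the method is compact: replace an MSE objective with a distance between the modeling-error PDF and a narrow Gaussian target. A minimal numerical sketch of such a loss (illustrative only; the paper shapes a 2D PDF and trains a WNN, neither of which is attempted here):

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        def pdf_shaping_loss(residuals, target_sigma=0.1):
            """Quadratic deviation between the residual KDE and a target PDF."""
            grid = np.linspace(-1.0, 1.0, 201)
            kde = gaussian_kde(residuals)(grid)
            target = norm.pdf(grid, loc=0.0, scale=target_sigma)
            return float(((kde - target) ** 2).sum() * (grid[1] - grid[0]))

        rng = np.random.default_rng(4)
        narrow = rng.normal(0.0, 0.1, 2000)            # high, narrow error PDF
        skewed = rng.gamma(2.0, 0.1, 2000) - 0.2       # similar spread, skewed

        # The skewed residuals score worse even though their MSEs are of
        # the same order; MSE alone cannot see the difference in shape.
        print(pdf_shaping_loss(narrow), pdf_shaping_loss(skewed))
        print((narrow**2).mean(), (skewed**2).mean())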

  11. Application of borehole geophysics to water-resources investigations

    USGS Publications Warehouse

    Keys, W.S.; MacCary, L.M.

    1971-01-01

    This manual is intended to be a guide for hydrologists using borehole geophysics in ground-water studies. The emphasis is on the application and interpretation of geophysical well logs, and not on the operation of a logger. It describes in detail those logging techniques that have been utilized within the Water Resources Division of the U.S. Geological Survey, and those used in petroleum investigations that have potential application to hydrologic problems. Most of the logs described can be made by commercial logging service companies, and many can be made with small water-well loggers. The general principles of each technique and the rules of log interpretation are the same, regardless of differences in instrumentation. Geophysical well logs can be interpreted to determine the lithology, geometry, resistivity, formation factor, bulk density, porosity, permeability, moisture content, and specific yield of water-bearing rocks, and to define the source, movement, and chemical and physical characteristics of ground water. Numerous examples of logs are used to illustrate applications and interpretation in various ground-water environments. The interrelations between various types of logs are emphasized, and the following aspects are described for each of the important logging techniques: Principles and applications, instrumentation, calibration and standardization, radius of investigation, and extraneous effects.

  12. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is indispensable for SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because the SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to the NAND flash in the hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.

  13. Application of remote sensing to water resources problems

    NASA Technical Reports Server (NTRS)

    Clapp, J. L.

    1972-01-01

    The following conclusions were reached concerning the applications of remote sensing to water resources problems: (1) Remote sensing methods provide the most practical method of obtaining data for many water resources problems; (2) the multi-disciplinary approach is essential to the effective application of remote sensing to water resource problems; (3) there is a correlation between the amount of suspended solids in an effluent discharged into a water body and reflected energy; (4) remote sensing provides for more effective and accurate monitoring, discovery and characterization of the mixing zone of effluent discharged into a receiving water body; and (5) it is possible to differentiate between blue and blue-green algae.

  14. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  15. Autonomous Quantum Error Correction with Application to Quantum Metrology

    NASA Astrophysics Data System (ADS)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  16. 40 CFR 227.31 - Applicable marine water quality criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Applicable marine water quality criteria. 227.31 Section 227.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) OCEAN DUMPING CRITERIA FOR THE EVALUATION OF PERMIT APPLICATIONS FOR OCEAN DUMPING OF MATERIALS Definitions § 227.31 Applicable marine water quality...

  17. 40 CFR 227.31 - Applicable marine water quality criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Applicable marine water quality criteria. 227.31 Section 227.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) OCEAN DUMPING CRITERIA FOR THE EVALUATION OF PERMIT APPLICATIONS FOR OCEAN DUMPING OF MATERIALS Definitions § 227.31 Applicable marine water quality...

  18. 7 CFR 612.6 - Application for water supply forecast service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Application for water supply forecast service. 612.6... CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE CONSERVATION OPERATIONS SNOW SURVEYS AND WATER SUPPLY FORECASTS § 612.6 Application for water supply forecast service. Requests for obtaining water supply forecasts or...

  19. Error Mitigation for Short-Depth Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant as possible in current experiments. The first method, extrapolation to the zero-noise limit, cancels successive powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
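
    The first scheme is simple enough to sketch numerically. Assume (hypothetically) that the hardware lets the noise rate be rescaled by a factor c, e.g. by stretching gate pulses, so the measured expectation value behaves like E(c) = E0 + a1*c + a2*c^2 + ...; fitting in c and reading off c = 0 recovers an estimate of the noise-free value:

        import numpy as np

        rng = np.random.default_rng(5)

        def measured_expectation(c):
            """Hypothetical noisy device response at noise scaling c."""
            e0, a1, a2 = 0.8, -0.12, 0.015             # assumed noise expansion
            return e0 + a1 * c + a2 * c**2 + rng.normal(0.0, 0.002)

        cs = np.array([1.0, 2.0, 3.0])                  # stretched-noise settings
        es = np.array([measured_expectation(c) for c in cs])

        coef = np.polyfit(cs, es, 2)                    # fit the noise expansion
        e_zero = np.polyval(coef, 0.0)                  # Richardson-style c -> 0

        print(f"raw (c=1): {es[0]:.3f}, extrapolated: {e_zero:.3f}, "
              f"noise-free: 0.800")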

  20. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding, and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
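
    For the special case of independent errors, the Taylor-series variance formula collapses to a weighted sum of squared relative errors, with weights given by the exponents in the discharge formula. A minimal sketch (all input values and error magnitudes assumed; the paper's full analysis also carries the covariance terms and velocity-head corrections):

        import numpy as np

        # Simplified slope-area discharge, Q = (1/n) A R^(2/3) S^(1/2), SI units.
        n_man, area, radius, slope = 0.035, 400.0, 3.0, 5.0e-4
        q = area * radius**(2.0 / 3.0) * slope**0.5 / n_man

        # Assumed relative standard errors of n, A, R, and S.
        rel_se = np.array([0.20, 0.05, 0.07, 0.25])
        exponents = np.array([-1.0, 1.0, 2.0 / 3.0, 0.5])

        # For a product of powers with independent errors, squared relative
        # errors add with squared exponents as weights.
        rel_var_q = np.sum((exponents * rel_se) ** 2)
        print(f"Q = {q:.0f} m^3/s, relative standard error = {np.sqrt(rel_var_q):.0%}")

    Under these assumed error levels, Manning's n and the water-surface fall dominate the resulting relative error, roughly 25% here.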

  1. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
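
    A hedged sketch of the FAST idea as described above: treat the states inside a moving window along one saved model trajectory as an ensemble and use their sample covariance as a flow-dependent background error covariance. The toy trajectory below is invented; the operational implementation differs in detail.

        import numpy as np

        def fast_background_covariance(trajectory, window):
            # Use the last `window` saved states as a surrogate ensemble.
            ens = trajectory[-window:]                 # shape (window, n)
            anomalies = ens - ens.mean(axis=0)
            return anomalies.T @ anomalies / (window - 1)

        # Illustrative trajectory: 200 saved states of a 3-variable model.
        rng = np.random.default_rng(0)
        traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
        B = fast_background_covariance(traj, window=30)
        print(B.shape)   # (3, 3) flow-dependent covariance estimate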

  2. Error Correction: A Cognitive-Affective Stance

    ERIC Educational Resources Information Center

    Saeed, Aziz Thabit

    2007-01-01

    This paper investigates the application of some of the most frequently used writing error correction techniques to see the extent to which this application takes learners' cognitive and affective characteristics into account. After showing how unlearned application of these styles could be discouraging and/or damaging to students, the paper…

  3. Correction of phase errors in quantitative water-fat imaging using a monopolar time-interleaved multi-echo gradient echo sequence.

    PubMed

    Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C

    2017-09-01

    To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address a) a phase term induced by echo misalignments that can be measured with a reference scan using reversed readout polarity, b) a phase term induced by the concomitant gradient field that can be predicted from the gradient waveforms, and c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps on PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with the theory, an echo time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  5. The application of Aronson's taxonomy to medication errors in nursing.

    PubMed

    Johnson, Maree; Young, Helen

    2011-01-01

    Medication administration is a frequent nursing activity that is prone to error. In this study of 318 self-reported medication incidents (including near misses), very few resulted in patient harm: 7% required intervention or prolonged hospitalization or caused temporary harm. Aronson's classification system provided an excellent framework for analysis of the incidents, with a close connection between the type of error and the change strategy to minimize medication incidents. Taking a behavioral approach to medication error classification has provided helpful strategies for nurses, such as nurse-call cards on patient lockers when patients are absent and checking of medication sign-off by outgoing and incoming staff at handover.

  6. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d-1)} error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  8. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  9. ATM QoS Experiments Using TCP Applications: Performance of TCP/IP Over ATM in a Variety of Errored Links

    NASA Technical Reports Server (NTRS)

    Frantz, Brian D.; Ivancic, William D.

    2001-01-01

    Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a Satellite Link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advanced TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.

  10. Impact of artificial monolayer application on stored water quality at the air-water interface.

    PubMed

    Pittaway, P; Martínez-Alvarez, V; Hancock, N; Gallego-Elvira, B

    2015-01-01

    Evaporation mitigation has the potential to significantly improve water use efficiency, with repeat applications of artificial monolayer formulations the most cost-effective strategy for large water storages. Field investigations of the impact of artificial monolayers on water quality have been limited by wind and wave turbulence, and beaching. Two suspended covers differing in permeability to wind and light were used to attenuate wind turbulence, to favour the maintenance of a condensed monolayer at the air/water interface of a 10 m diameter tank. An octadecanol formulation was applied twice-weekly to one of two covered tanks, while a third clean water tank remained uncovered for the 14-week duration of the trial. Microlayer and subsurface water samples were extracted once a week to distinguish impacts associated with the installation of covers, from the impact of prolonged monolayer application. The monolayer was selectively toxic to some phytoplankton, but the toxicity of hydrocarbons leaching from a replacement liner had a greater impact. Monolayer application did not increase water temperature, humified dissolved organic matter, or the biochemical oxygen demand, and did not reduce dissolved oxygen. The impact of an octadecanol monolayer on water quality and the microlayer may not be as detrimental as previously considered.

  11. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
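
    The complex-step trick that makes the exact linearization cheap can be shown in isolation; this is the generic technique, not the authors' CFD implementation, and the test function is arbitrary.

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            # Evaluating f at x + ih and taking the imaginary part gives a
            # derivative exact to machine precision, with no subtractive
            # cancellation, because Im f(x + ih) = h f'(x) + O(h^3).
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: x**3 + np.sin(x)
        print(complex_step_derivative(f, 2.0))   # matches 3*x**2 + cos(x)
        print(3 * 2.0**2 + np.cos(2.0))          # analytic check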

  12. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Using surface water application to reduce 1,3-dichloropropene emission from soil fumigation.

    PubMed

    Gao, Suduan; Trout, Thomas J

    2006-01-01

    High emissions from soil fumigants increase the risk of detrimental impact on workers, bystanders, and the environment, and jeopardize future availability of fumigants. Efficient and cost-effective approaches to minimize emissions are needed. This study evaluated the potential of surface water application (or water seal) to reduce 1,3-dichloropropene (1,3-D) emissions from soil (Hanford sandy loam) columns. Treatments included dry soil (control), initial water application (8 mm of water just before fumigant application), initial plus a second water application (2.6 mm) at 12 h, initial plus two water applications (2.6 mm each time) at 12 and 24 h, standard high density polyethylene (HDPE) tarp, initial water application plus HDPE tarp, and virtually impermeable film (VIF) tarp. Emissions from the soil surface and distribution of 1,3-D in the soil-gas phase were monitored for 2 wk. Each water application abruptly reduced 1,3-D emission flux, which rebounded over a few hours. Peak emission rates were substantially reduced, but total emission reduction was small. Total fumigant emission was 51% of applied for the control, 46% for initial water application only, and 41% for the three intermittent water applications, with the remaining water treatment intermediate. The HDPE tarp alone resulted in 45% emission, while initial water application plus HDPE tarp resulted in 38% emission. The most effective soil surface treatment was VIF tarp (10% emission). Surface water application can be as effective as, and less expensive than, a standard HDPE tarp. Frequent water application is required to substantially reduce emissions.

  14. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  15. Deep sequencing of evolving pathogen populations: applications, errors, and bioinformatic solutions

    PubMed Central

    2014-01-01

    Deep sequencing harnesses the high throughput nature of next generation sequencing technologies to generate population samples, treating information contained in individual reads as meaningful. Here, we review applications of deep sequencing to pathogen evolution. Pioneering deep sequencing studies from the virology literature are discussed, such as whole genome Roche-454 sequencing analyses of the dynamics of the rapidly mutating pathogens hepatitis C virus and HIV. Extension of the deep sequencing approach to bacterial populations is then discussed, including the impacts of emerging sequencing technologies. While it is clear that deep sequencing has unprecedented potential for assessing the genetic structure and evolutionary history of pathogen populations, bioinformatic challenges remain. We summarise current approaches to overcoming these challenges, in particular methods for detecting low frequency variants in the context of sequencing error and reconstructing individual haplotypes from short reads. PMID:24428920

  16. Benefit transfer and spatial heterogeneity of preferences for water quality improvements.

    PubMed

    Martin-Ortega, J; Brouwer, R; Ojea, E; Berbel, J

    2012-09-15

    The improvement in the water quality resulting from the implementation of the EU Water Framework Directive is expected to generate substantial non-market benefits. A widespread estimation of these benefits across Europe will require the application of benefit transfer. We use a spatially explicit valuation design to account for the spatial heterogeneity of preferences to help generate lower transfer errors. A map-based choice experiment is applied in the Guadalquivir River Basin (Spain), accounting simultaneously for the spatial distribution of water quality improvements and beneficiaries. Our results show that accounting for the spatial heterogeneity of preferences generally produces lower transfer errors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Assessing stand water use in four coastal wetland forests using sapflow techniques: annual estimates, errors and associated uncertainties

    USGS Publications Warehouse

    Krauss, Ken W.; Duberstein, Jamie A.; Conner, William H.

    2015-01-01

    Forests comprise approximately 37% of the terrestrial land surface and influence global water cycling. However, very little attention has been directed towards understanding environmental impacts on stand water use (S) or identifying rates of S from specific forested wetlands. Here, we use sapflow techniques to address two separate but linked objectives: (1) determine S in four hydrologically distinctive South Carolina (USA) wetland forests from 2009–2010 and (2) describe potential error, uncertainty and stand-level variation associated with these assessments. Sapflow measurements were made from a number of tree species for approximately 2–8 months over 2 years to initiate the model, which was applied to canopy trees (DBH > 10–20 cm). We determined that S in three healthy forested wetlands varied from 1.97–3.97 mm day−1 or 355–687 mm year−1 when scaled. In contrast, saltwater intrusion impacted individual tree physiology and size class distributions on a fourth site, which decreased S to 0.61–1.13 mm day−1 or 110–196 mm year−1. The primary sources of error in sapflow-based estimates relate to calibration of the probes, standardization relative to no-flow periods, and accurate accounting of sapflow attenuation with radial depth into the sapwood by species and site. Such inherent variation in water use among wetland forest stands makes small differences in S (<200 mm year−1) difficult to detect statistically through modelling, even though small differences may be important to local water cycling. These data also represent some of the first assessments of S from temperate, coastal forested wetlands along the Atlantic coast of the USA.

  18. Application of parameter estimation to aircraft stability and control: The output-error approach

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.; Iliff, Kenneth W.

    1986-01-01

    The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
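
    As a hedged illustration of the output-error principle (simulate the model with candidate parameters, then minimize the mismatch between measured and predicted outputs), here is a toy sketch in which an invented first-order response model stands in for the aircraft equations of motion; a and b play the role of stability and control derivatives.

        import numpy as np
        from scipy.optimize import least_squares

        def simulate(params, u, dt):
            # Forward-Euler integration of y' = a*y + b*u (toy model).
            a, b = params
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                y[k] = y[k-1] + dt * (a * y[k-1] + b * u[k-1])
            return y

        def output_error(params, u, dt, y_meas):
            return simulate(params, u, dt) - y_meas   # residuals to minimize

        dt, n = 0.05, 400
        u = np.sin(0.5 * np.arange(n) * dt)           # control input history
        y_meas = simulate((-0.8, 2.0), u, dt) \
                 + 0.01 * np.random.default_rng(1).normal(size=n)
        fit = least_squares(output_error, x0=(-0.1, 1.0), args=(u, dt, y_meas))
        print(fit.x)   # recovered (a, b), close to (-0.8, 2.0)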

  19. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
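
    A small simulation, under assumed parameter values, reproduces the attenuation the authors describe: the lag-1 autocorrelation estimated from noisy measurements underestimates the true autoregressive parameter.

        import numpy as np

        rng = np.random.default_rng(42)
        n, phi = 10_000, 0.6

        # Latent AR(1) process and a noisy measurement of it (AR+WN).
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t-1] + rng.normal()
        y = x + rng.normal(size=n)        # measurement error added

        def lag1_autocorr(s):
            s = s - s.mean()
            return (s[:-1] @ s[1:]) / (s @ s)

        print(lag1_autocorr(x))   # close to phi = 0.6
        print(lag1_autocorr(y))   # biased toward zero by measurement error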

  1. Nanofiltration technology in water treatment and reuse: applications and costs.

    PubMed

    Shahmansouri, Arash; Bellona, Christopher

    2015-01-01

    Nanofiltration (NF) is a relatively recent development in membrane technology with characteristics that fall between ultrafiltration and reverse osmosis (RO). While RO membranes dominate the seawater desalination industry, NF is employed in a variety of water and wastewater treatment and industrial applications for the selective removal of ions and organic substances, as well as certain niche seawater desalination applications. The purpose of this study was to review the application of NF membranes in the water and wastewater industry including water softening and color removal, industrial wastewater treatment, water reuse, and desalination. Basic economic analyses were also performed to compare the profitability of using NF membranes over alternative processes. Although any detailed cost estimation is hampered by some uncertainty (e.g. applicability of estimation methods to large-scale systems, labor costs in different areas of the world), NF was found to be a cost-effective technology for certain investigated applications. The selection of NF over other treatment technologies, however, is dependent on several factors including pretreatment requirements, influent water quality, treatment facility capacity, and treatment goals.

  2. Understanding diagnostic errors in medicine: a lesson from aviation

    PubMed Central

    Singh, H; Petersen, L A; Thomas, E J

    2006-01-01

    The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463

  3. Water Management Applications of Advanced Precipitation Products

    NASA Astrophysics Data System (ADS)

    Johnson, L. E.; Braswell, G.; Delaney, C.

    2012-12-01

    Advanced precipitation sensors and numerical models track storms as they occur and forecast the likelihood of heavy rain for time frames ranging from 1 to 8 hours, 1 day, and extended outlooks out to 3 to 7 days. Forecast skill decreases at the extended time frames but the outlooks have been shown to provide "situational awareness" which aids in preparation for flood mitigation and water supply operations. In California the California-Nevada River Forecast Centers and local Weather Forecast Offices provide precipitation products that are widely used to support water management and flood response activities of various kinds. The Hydrometeorology Testbed (HMT) program is being conducted to help advance the science of precipitation tracking and forecasting in support of the NWS. HMT high-resolution products have found applications for other non-federal water management activities as well. This presentation will describe water management applications of HMT advanced precipitation products, and characterization of benefits expected to accrue. Two case examples will be highlighted: 1) reservoir operations for flood control and water supply, and 2) urban stormwater management. Application of advanced precipitation products in support of reservoir operations is a focus of the Sonoma County Water Agency. Examples include: a) interfacing the high-resolution QPE products with a distributed hydrologic model for the Russian-Napa watersheds, b) providing early warning of in-coming storms for flood preparedness and water supply storage operations. For the stormwater case, San Francisco wastewater engineers are developing a plan to deploy high resolution gap-filling radars looking off shore to obtain longer lead times on approaching storms. A 4 to 8 hour lead time would provide opportunity to optimize stormwater capture and treatment operations, and minimize combined sewer overflows into the Bay.

  4. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge

  5. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently

  6. Monograph for using paleoflood data in Water Resources Applications

    USGS Publications Warehouse

    Swain, R.E.; Jarrett, R.D.

    2004-01-01

    The Environmental and Water Resources Institute (EWRI) Technical Committee on Surface Water Hydrology is sponsoring a Task Committee on Paleoflood Hydrology to prepare a monograph entitled, "Use of Paleoflood and Historical Data in Water Resources Applications." This paper introduces the subject of paleoflood hydrology and discusses the topics, which are expected to be included in the monograph. The procedure for preparing and reviewing the monograph will also be discussed. The paleoflood hydrology monograph will include a discussion of types of hydrologic and paleoflood data, paleostage indicators, flood chronology, modeling methods, interpretation issues, water resources applications and case studies, and research needs. Paleoflood data collection and analysis techniques will be presented, and various applications in water-resources investigations will be provided. An overview of several flood frequency analysis approaches, which consider historical and paleoflood data along with systematic streamflow records, will be presented. The monograph is scheduled for completion and publication in 2001. Copyright ASCE 2004.

  7. Wind wave prediction in shallow water: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaleri, L.; Rizzoli, P.M.

    1981-11-20

    A wind wave forecasting model is described, based upon the ray technique, which is specifically designed for shallow water areas. The model explicitly includes wave generation, refraction, and shoaling, while nonlinear dissipative processes (breaking and bottom friction) are introduced through a suitable parametrization. The forecast is provided at a specified time and target position, in terms of a directional spectrum, from which the one-dimensional spectrum and the significant wave height are derived. The model has been used to hindcast storms both in shallow water (Northern Adriatic Sea) and in deep water conditions (Tyrrhenian Sea). The results have been compared with local measurements, and the rms error for the significant wave height is between 10 and 20%. A major problem has been found in the correct evaluation of the wind field.

  8. Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei

    2017-05-01

    The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with a good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications is important to quantify the relative importance of the uncertainty associated with radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and at 1 h accumulation on an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping bucket rain gauges from the Environment Agency. 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges.
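
    One building block of such a generator can be sketched under simplifying assumptions: drawing a spatially correlated Gaussian error field from an exponential covariance model via Cholesky factorization. The grid size, correlation range, and error standard deviation below are invented, and the paper's generator additionally handles time-variant error statistics.

        import numpy as np

        def correlated_error_field(nx, ny, cell_km, range_km, sigma, rng):
            # Build the exponential covariance between all grid cells and
            # draw one correlated field: L z with C = L L^T, z ~ N(0, I).
            xs, ys = np.meshgrid(np.arange(nx) * cell_km, np.arange(ny) * cell_km)
            pts = np.column_stack([xs.ravel(), ys.ravel()])
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            cov = sigma**2 * np.exp(-d / range_km)
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))  # jitter
            return (L @ rng.standard_normal(len(pts))).reshape(ny, nx)

        field = correlated_error_field(20, 20, cell_km=1.0, range_km=5.0,
                                       sigma=0.8, rng=np.random.default_rng(3))
        print(field.shape)   # (20, 20) field to perturb one radar QPE image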

  9. Domestic applications for aerospace waste and water management technologies

    NASA Technical Reports Server (NTRS)

    Disanto, F.; Murray, R. W.

    1972-01-01

    Some of the aerospace developments in solid waste disposal and water purification, which are applicable to specific domestic problems are explored. Also provided is an overview of the management techniques used in defining the need, in utilizing the available tools, and in synthesizing a solution. Specifically, several water recovery processes will be compared for domestic applicability. Examples are filtration, distillation, catalytic oxidation, reverse osmosis, and electrodialysis. Solid disposal methods will be discussed, including chemical treatment, drying, incineration, and wet oxidation. The latest developments in reducing household water requirements and some concepts for reusing water will be outlined.

  10. Activity Tracking for Pilot Error Detection from Flight Data

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Ashford, Rose (Technical Monitor)

    2002-01-01

    This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.

  11. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing and the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of the predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
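
    The combination step can be sketched as simple arithmetic, assuming the usual decomposition into a device upset rate (from beam testing) and an application sensitivity (from fault injection); every number below is invented for illustration.

        # Predicted application error rate = (SEU rate in orbit) x (fraction
        # of injected SEUs that corrupted the application's results).
        sigma_seu = 1e-9          # per-bit SEU cross-section (cm^2), beam test
        n_bits = 32 * 1024 * 8    # sensitive bits in the targeted memory
        flux = 10.0               # particle flux (particles / cm^2 / s)
        p_app_error = 0.02        # error fraction from fault injection

        seu_rate = sigma_seu * n_bits * flux      # upsets per second
        app_error_rate = seu_rate * p_app_error   # observable errors per second
        print(app_error_rate * 86400, "errors/day (illustrative)")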

  13. Application of graphene oxide in water treatment

    NASA Astrophysics Data System (ADS)

    Liu, Yongchen

    2017-11-01

    Graphene oxide has good hydrophilicity, and in recent years attempts have been made to use it in thin films for water treatment. In this paper, the preparation methods of graphene oxide membranes are reviewed, including vacuum suction filtration, spray coating, spin coating, dip coating, and the layer-by-layer method. Secondly, the mass transfer mechanism of graphene membranes is introduced in detail. The applications of graphene oxide membranes, modified graphene oxide membranes, and graphene hybrid membranes in reverse osmosis (RO), vaporization, nanofiltration, and other areas are discussed. Finally, the development and application of graphene membranes in water treatment are considered.

  14. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency (Sun's) prediction equations at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
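
    For reference, the Mean Absolute Percentage Error used for that comparison is straightforward to compute; the TBW values below are invented, not the study's data.

        import numpy as np

        def mape(predicted, reference):
            # Mean absolute deviation of predictions from the reference,
            # expressed as a percentage of the reference value.
            predicted, reference = np.asarray(predicted), np.asarray(reference)
            return 100.0 * np.mean(np.abs((predicted - reference) / reference))

        reference_tbw = np.array([34.1, 41.0, 38.5, 29.7])   # litres
        predicted_tbw = np.array([33.0, 42.2, 37.9, 30.8])
        print(mape(predicted_tbw, reference_tbw))   # target: below 4-5 %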

  17. Predictive accuracy of a ground-water model--Lessons from a postaudit

    USGS Publications Warehouse

    Konikow, Leonard F.

    1986-01-01

    Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.

  18. SEU induced errors observed in microprocessor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asenek, V.; Underwood, C.; Oldfield, M.

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  19. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    NASA Astrophysics Data System (ADS)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a

  20. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  1. Evaluation of super-water reducers for highway applications

    NASA Astrophysics Data System (ADS)

    Whiting, D.

    1981-03-01

    Super-water reducers were characterized and evaluated as potential candidates for production of low water to cement ratio, high strength concretes for highway construction applications. Admixtures were composed of either naphthalene or melamine sulfonated formaldehyde condensates. A mini-slump procedure was used to assess dosage requirements and the workability behavior of cement pastes over time. Required dosage was found to be a function of tricalcium aluminate content, alkali content, and fineness of the cement. Concretes exhibited high rates of slump loss when super-water reducers were used. The most promising area of application of these products appears to be in production of dense, high cement content concrete using mobile concrete mixer/transporters.

  2. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = c D^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity), which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of the number of particles sampled in each particle size interval, with the associated variances summing in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
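
    The FSD in question can be checked numerically: counts in each size interval fluctuate as Poisson variables, and the integrated property inherits a weighted sum of those variances. A minimal Monte Carlo sketch under an assumed exponential spectrum (all constants hypothetical):

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed exponential size spectrum N(D) = N0 * exp(-Lam * D)
      N0, Lam = 8000.0, 2.1          # intercept (m^-3 mm^-1), slope (mm^-1)
      V = 1.0                        # sampled volume (m^3), assumed
      c, n = 1.0, 3.0                # X(D) = c * D**n, e.g. n=3 ~ water content

      D = np.arange(0.05, 8.0, 0.1)  # drop diameter bins (mm)
      dD = 0.1
      mean_counts = N0 * np.exp(-Lam * D) * dD * V   # expected drops per bin

      trials = 20000
      counts = rng.poisson(mean_counts, size=(trials, D.size))  # Poisson sampling
      X = (counts * c * D**n).sum(axis=1)                       # integrated property

      fsd = X.std() / X.mean()       # fractional standard deviation
      print(f"FSD of X for n={n}: {fsd:.3f}")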

  3. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    PubMed

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

    The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed.
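
    The record does not reproduce the paper's calibration equations, so the following is only a generic sketch of a temperature-dependent calibration: a gain relating the meter's raw heat-pulse velocity to the true Darcy velocity is fitted against bath temperature and then applied to field readings (all names and numbers hypothetical):

      import numpy as np

      # Hypothetical lab calibration: at each bath temperature T (deg C) the
      # meter's raw velocity was regressed against a known imposed Darcy
      # velocity, giving a gain k(T) such that v_true = k(T) * v_raw.
      T_cal = np.array([7.5, 10.0, 14.0, 18.0, 21.2])
      k_cal = np.array([1.32, 1.24, 1.13, 1.05, 1.00])   # hypothetical gains

      # Smooth temperature dependence; a quadratic fit suffices over this range.
      coef = np.polyfit(T_cal, k_cal, deg=2)

      def correct(v_raw_cm_day: float, T_field: float) -> float:
          """Apply the temperature-dependent calibration to a field reading."""
          return float(np.polyval(coef, T_field)) * v_raw_cm_day

      print(correct(12.0, 9.3))   # corrected vertical Darcy velocity, cm/day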

  4. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a

  5. Toward the Application of the Implicit Particle Filter to Real Data in a Shallow Water Model of the Nearshore Ocean

    NASA Astrophysics Data System (ADS)

    Miller, R.

    2015-12-01

    Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining and preliminary results.

  6. [Watershed water environment pollution models and their applications: a review].

    PubMed

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are important tools for studying watershed environmental problems. Through the quantitative description of the complicated pollution processes of the whole watershed system and its parts, such models can identify the main sources and migration pathways of pollutants, estimate pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied in China and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), water quality of receiving water bodies (QUAL2E and WASP), and watershed models integrating pollutant loadings and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced the structures, principles, and main characteristics as well as the limitations in practical applications of these models. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analyses of the applications of single and integrated models, the development trend and application prospects of watershed water environment pollution models were discussed.

  7. EPOC WATER INC. MICROFILTRATION TECHNOLOGY - APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    This document is an evaluation of the performance of the EPOC Water, Inc. Microfiltration Technology and its applicability as a treatment technique for water contaminated with metals. Both the technical aspects and the economics of this technology were examined. Operational data ...

  8. The Ground-Based GPS Water Vapor Inversion and its Application Research

    NASA Astrophysics Data System (ADS)

    Liu, R.; Lee, T.; Lv, H.; Fan, C.; Liu, Q.

    2018-04-01

    Using GPS technology to retrieve atmospheric water vapor is a new water vapor detection method, which can effectively compensate for the shortcomings of conventional water vapor detection methods and provide high-precision, large-capacity, near real-time water vapor information. In-depth study of ground-based GPS detection of atmospheric water vapor aims to further improve the accuracy and practicability of GPS inversion of water vapor and to explore its ability to detect atmospheric water vapor information to better serve meteorological applications. The influence of the settings for initial station coordinates, satellite ephemeris, and solution observations on the accuracy of the tropospheric zenith total delay is discussed based on the observed data. Observations from a network consisting of 8 IGS stations in China in June 2013 are used to invert the water vapor at the 8 stations. The data of Wuhan station are further selected and compared with the data of the Nanhu sounding station in Wuhan. The error between the two series was between -6 mm and +6 mm, their trends were almost the same, and the correlation reached 95.8%. The experimental results also verify the reliability of ground-based GPS water vapor inversion.
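
    The reported agreement is the standard pairwise comparison between the GPS-derived and radiosonde precipitable water vapor series; a minimal sketch with made-up numbers:

      import numpy as np

      # Hypothetical collocated series of precipitable water vapor (mm):
      pwv_gps   = np.array([31.2, 28.4, 35.9, 40.1, 38.7, 33.0])
      pwv_sonde = np.array([30.1, 29.5, 34.8, 41.6, 37.9, 32.2])

      diff = pwv_gps - pwv_sonde
      r = np.corrcoef(pwv_gps, pwv_sonde)[0, 1]     # Pearson correlation

      print(f"error range: {diff.min():+.1f} to {diff.max():+.1f} mm")
      print(f"correlation: {r:.3f}")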

  9. Application of reverse osmosis in purifying drinking water

    NASA Astrophysics Data System (ADS)

    Jiang, Lei; Tu, Yue; Li, Xiangmin; Li, Haixiang

    2018-06-01

    In view of the shortage of water resources and the pollution of the water environment, the development of efficient water purification technology is one of the effective ways to ensure the safety of drinking water. The technology of reverse osmosis has attracted much attention in the application of drinking water security for its advantages of no phase change and simple equipment. This paper briefly introduces the research progress in China and abroad, basic principles, process flow diagram, advantages and disadvantages, and development direction of the technology.

  10. A portable non-contact displacement sensor and its application of lens centration error measurement

    NASA Astrophysics Data System (ADS)

    Yu, Zong-Ru; Peng, Wei-Jei; Wang, Jung-Hsing; Chen, Po-Jui; Chen, Hua-Lin; Lin, Yi-Hao; Chen, Chun-Cheng; Hsu, Wei-Yao; Chen, Fong-Zhi

    2018-02-01

    We present a portable non-contact displacement sensor (NCDS) based on the astigmatic method for micron displacement measurement. The NCDS is composed of a collimated laser, a polarized beam splitter, a 1/4 wave plate, an aspheric objective lens, an astigmatic lens and a four-quadrant photodiode. A visible laser source is adopted for easier alignment and usage. The dimensions of the sensor are limited to 115 mm x 36 mm x 56 mm, and a control box is used for handling signal and power control between the sensor and computer. The NCDS achieves micron accuracy over a +/-30 μm working range, and the working distance is constrained to a few millimeters. We also demonstrate the application of the NCDS to lens centration error measurement, which is similar to measuring the total indicator runout (TIR) or edge thickness difference (ETD) of a lens using a contact dial indicator. This application is advantageous for measuring lenses made of soft materials that would be scratched by a contact dial indicator.

  11. Mathematical models of water application for a variable rate irrigating hill-seeder

    USDA-ARS?s Scientific Manuscript database

    A variable rate irrigating hill-seeder can adjust water application automatically according to the difference in soil moisture content in the field to alleviate drought and save water. Two key problems to realize variable rate water application are how to determine the right amount of water for the ...

  13. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  14. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as to other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  15. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
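
    Without committing to the paper's exact seasonal ARIMA formulation, the double-seasonal idea, hourly demand carrying both a daily (24 h) and a weekly (168 h) cycle, can be sketched as a harmonic regression on log demand; Fourier terms for both periods stand in for the study's seasonal autoregressive structure (synthetic data, hypothetical orders):

      import numpy as np

      def fourier(t, period, K):
          """Sin/cos pairs of order 1..K for the given seasonal period."""
          return np.column_stack(
              [f(2 * np.pi * k * t / period) for k in range(1, K + 1)
               for f in (np.sin, np.cos)])

      # Hypothetical hourly demand record (two weeks of synthetic data).
      t = np.arange(24 * 14)
      demand = (50 + 10 * np.sin(2 * np.pi * t / 24)     # daily cycle
                + 5 * np.sin(2 * np.pi * t / 168)        # weekly cycle
                + np.random.default_rng(1).normal(0, 1, t.size))

      X = np.column_stack([np.ones(t.size), fourier(t, 24, 3), fourier(t, 168, 3)])
      beta, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)

      # Forecast the next 24 hours by extending the harmonic design matrix.
      t_new = np.arange(t[-1] + 1, t[-1] + 25)
      X_new = np.column_stack([np.ones(24), fourier(t_new, 24, 3),
                               fourier(t_new, 168, 3)])
      forecast = np.exp(X_new @ beta)
      print(forecast[:6])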

  16. Near term application of water cooling

    NASA Astrophysics Data System (ADS)

    Horner, M. W.; Caruvana, A.; Cohn, A.; Smith, D. P.

    1980-03-01

    The paper presents studies of combined gas and steam-turbine cycles related to the near term application of water cooling technology to the commercial gas turbine operating on heavy residual oil or coal derived liquid fuels. Water cooling promises significant reduction of hot corrosion and ash deposition at the turbine first-stage nozzle. It was found that: (1) corrosion of some alloys in the presence of alkali contaminant was less as metal temperatures were lowered to the 800-1000 F range, (2) the rate of ash deposition is increased for air-cooled and water-cooled nozzles at the 2060 F turbine firing temperature compared to 1850 F, (3) the ash deposit for the water cooled nozzle was lighter and more easily removed at both 1850 and 2050 F, (4) on-line nutshelling was effective on the water-cooled nozzles even at 2050 F, and (5) the data indicates that the rate of ash deposition may be sensitive to surface wall temperatures.

  17. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    NASA Astrophysics Data System (ADS)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85% correctness in water resource management from the perspectives of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the perspective of photogrammetry, feature interpretation techniques from the perspective of expert knowledge, and rationality analysis techniques from the perspective of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang water resource management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  18. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  19. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  1. Errors in imaging patients in the emergency setting.

    PubMed

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.

  2. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
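
    The equal-cost bound referred to above is the classical random-forest result due to Breiman: writing $s$ for the average strength of the base classifiers and $\bar{\rho}$ for their mean pairwise correlation, the generalization error satisfies

      $PE^{*} \le \dfrac{\bar{\rho}\,(1 - s^{2})}{s^{2}}$,

    and the class-specific bound developed here plays the analogous role for the entire ROC curve, with per-class strengths and correlations in place of $s$ and $\bar{\rho}$.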

  3. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present in this paper our
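
    The 3D metric described is simply the Euclidean distance between each stereo-reconstructed target and its CT counterpart after the registration chain is applied; a minimal sketch with hypothetical fiducial coordinates:

      import numpy as np

      # Hypothetical positions of internal fiducial targets (mm).
      targets_ct = np.array([[12.1, 40.3, 7.9], [25.6, 38.2, 11.4],
                             [30.0, 52.7, 9.8]])
      # 3D reconstructions from stereo views, already mapped into CT space
      # through the EM-CT registration and optical-EM calibration chain.
      targets_endo = np.array([[13.0, 41.1, 8.4], [24.9, 37.5, 12.2],
                               [31.2, 53.5, 10.1]])

      tre = np.linalg.norm(targets_endo - targets_ct, axis=1)  # per-target TRE
      print(f"TRE per target (mm): {np.round(tre, 2)}, mean: {tre.mean():.2f}")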

  4. User manuals for the Delaware River Basin Water Availability Tool for Environmental Resources (DRB–WATER) and associated WATER application utilities

    USGS Publications Warehouse

    Williamson, Tanja N.; Lant, Jeremiah G.

    2015-11-18

    The Water Availability Tool for Environmental Resources (WATER) is a decision support system (DSS) for the nontidal part of the Delaware River Basin (DRB) that provides a consistent and objective method of simulating streamflow under historical, forecasted, and managed conditions. WATER integrates geospatial sampling of landscape characteristics, including topographic and soil properties, with a regionally calibrated hillslope-hydrology model, an impervious-surface model, and hydroclimatic models that have been parameterized using three hydrologic response units—forested, agricultural, and developed land cover. It is this integration that enables the regional hydrologic-modeling approach used in WATER without requiring site-specific optimization or those stationary conditions inferred when using a statistical model. The DSS provides a “historical” database, ideal for simulating streamflow for 2001–11, in addition to land-cover forecasts that focus on 2030 and 2060. The WATER Application Utilities are provided with the DSS and apply change factors for precipitation, temperature, and potential evapotranspiration to a 1981–2011 climatic record provided with the DSS. These change factors were derived from a suite of general circulation models (GCMs) and representative concentration pathway (RCP) emission scenarios. These change factors are based on 25-year monthly averages (normals) that are centered on 2030 and 2060. The WATER Application Utilities also can be used to apply a 2010 snapshot of water use for the DRB; a factorial approach enables scenario testing of increased or decreased water use for each simulation. Finally, the WATER Application Utilities can be used to reformat streamflow time series for input to statistical or reservoir management software.

  5. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same can not be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  6. Storm Water Management Model Applications Manual

    EPA Science Inventory

    The EPA Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model that computes runoff quantity and quality from primarily urban areas. This manual is a practical application guide for new SWMM users who have already had some previous training in hydrolog...

  7. Exploring Situational Awareness in Diagnostic Errors in Primary Care

    PubMed Central

    Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.

    2013-01-01

    Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757

  8. The Watchdog Task: Concurrent error detection using assertions

    NASA Technical Reports Server (NTRS)

    Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.

    1985-01-01

    The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
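
    A minimal sketch of the idea in Python (the paper's implementation is in Ada): a concurrent watchdog task periodically re-checks an assertion about the main computation's state and reports violations, corresponding to the single-processor error-detection case mentioned above. All names are illustrative:

      import threading, time

      state = {"progress": 0, "done": False}

      def worker():
          # The monitored computation: advances a progress counter.
          for i in range(1, 101):
              state["progress"] = i
              time.sleep(0.01)
          state["done"] = True

      def watchdog(period=0.05):
          # Concurrent error detector: asserts the counter keeps advancing.
          last = 0
          while not state["done"]:
              time.sleep(period)
              if not state["done"] and state["progress"] <= last:
                  print("watchdog: progress assertion violated -> recover")
              last = state["progress"]

      threading.Thread(target=watchdog, daemon=True).start()
      worker()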

  9. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm^2 exposed to both solar particle event and galactic cosmic ray environments.

  11. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    PubMed

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale, with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  12. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Application of hydrodynamic cavitation in ballast water treatment.

    PubMed

    Cvetković, Martina; Kompare, Boris; Klemenčič, Aleksandra Krivograd

    2015-05-01

    Ballast water is, together with hull fouling and aquaculture, considered one of the most important vectors of the worldwide transfer of invasive non-indigenous organisms into aquatic ecosystems, and the most important vector in the European Union. With the aim of preventing and halting the spread of invasive organisms in aquatic ecosystems, and in accordance with IMO's International Convention for the Control and Management of Ships' Ballast Water and Sediments, systems for ballast water treatment are used whose operation includes, e.g., chemical treatment, ozonation and filtration. Although hydrodynamic cavitation (HC) is used in many different areas of science and engineering, including applied acoustics, biomedicine, botany, chemistry and hydraulics, the application of HC in the ballast water treatment area remains insufficiently researched. This paper presents the first literature review that covers lab- and large-scale setups for ballast water treatment together with the type-approved systems currently available on the market that use HC as a step in their operation. This paper deals with the possible advantages and disadvantages of such systems, as well as their influence on the crew and marine environment. It also analyses perspectives on the further development and application of HC in ballast water treatment.

  14. Application of BIM Technology in Building Water Supply and Drainage Design

    NASA Astrophysics Data System (ADS)

    Wei, Tianyun; Chen, Guiqing; Wang, Junde

    2017-12-01

    Through the application of BIM technology, the ideas of building water supply and drainage designers can be linked to the model, so that the various factors affecting water supply and drainage design can be considered more comprehensively. BIM (Building Information Model) technology assists in improving the design process of building water supply and drainage, promoting building water supply and drainage planning, enriching building water supply and drainage design methods, and improving the design level of water supply and drainage systems and building quality. A fuzzy comprehensive evaluation method is combined with this approach to analyze the advantages of BIM technology in building water supply and drainage design. The application prospects of BIM technology are therefore well worth promoting.

  15. Stochastic estimation of plant-available soil water under fluctuating water table depths

    NASA Astrophysics Data System (ADS)

    Or, Dani; Groeneveld, David P.

    1994-12-01

    Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
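
    The scheme pairs a one-state soil water balance with the standard Kalman predict/update recursion; a minimal scalar sketch (all fluxes and variances hypothetical):

      import numpy as np

      # State: plant-available soil water storage S (mm). Model: S' = S + P - ET,
      # with process noise capturing recharge/uptake uncertainty.
      S, P_var = 120.0, 25.0            # initial estimate and its variance (mm^2)
      Q = 16.0                          # process-noise variance per step (mm^2)
      R = 36.0                          # soil-water measurement variance (mm^2)

      forcing = [(2.0, 4.5), (0.0, 5.0), (12.0, 3.8)]   # (precip, ET) per step, mm
      obs = [114.0, None, 121.0]                        # soil water readings (mm)

      for (P, ET), z in zip(forcing, obs):
          S, P_var = S + P - ET, P_var + Q      # prediction step
          if z is not None:                     # update step when a reading exists
              K = P_var / (P_var + R)           # Kalman gain
              S, P_var = S + K * (z - S), (1 - K) * P_var
          print(f"S = {S:6.1f} mm, +/- {np.sqrt(P_var):.1f} mm")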

  16. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    PubMed

    Wright, Kerri

    2010-01-01

    This review aims to examine the literature available to ascertain whether medication errors in clinical practice are the result of nurses' miscalculating drug dosages. The research studies highlighting poor calculation skills of nurses and student nurses have been tested using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389 -395; Hutton, M., 1998. Nursing Mathematics: the importance of application Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29]; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal Nursing 13(21) 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically look to see whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of American Medical Association (JAMA) and Archives and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the causes of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor

  17. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
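
    The key error-mitigation step, cross-channel and along-reach averaging before slope estimation, is easy to sketch: noisy node heights are averaged over reach-length windows and a surface slope is fitted to the reach means (synthetic heights, hypothetical noise level):

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.arange(0, 260_000.0, 200.0)          # along-stream chainage (m)
      h_true = 30.0 - 2e-5 * x                    # true surface, slope 2 cm/km
      h_obs = h_true + rng.normal(0, 0.5, x.size) # SWOT-like height noise (m)

      reach = 20_000.0                            # 20 km reach length
      edges = np.arange(0, x.max() + reach, reach)
      idx = np.digitize(x, edges)
      xm = np.array([x[idx == i].mean() for i in np.unique(idx)])
      hm = np.array([h_obs[idx == i].mean() for i in np.unique(idx)])

      slope = -np.polyfit(xm, hm, 1)[0]           # reach-averaged surface slope
      print(f"estimated slope: {slope * 1e5:.2f} cm/km")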

  18. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  19. Sample presentation, sources of error and future perspectives on the application of vibrational spectroscopy in the wine industry.

    PubMed

    Cozzolino, Daniel

    2015-03-30

    Vibrational spectroscopy encompasses a number of techniques and methods including ultra-violet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated and to interpret the spectra in a meaningful way for a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.
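
    A typical first multivariate step is to compress the hundreds of collinear absorbance variables into a few principal components; a minimal SVD-based sketch on a (samples x wavelengths) matrix of synthetic spectra:

      import numpy as np

      rng = np.random.default_rng(3)
      n_samples, n_wavelengths = 40, 600
      spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)

      X = spectra - spectra.mean(axis=0)          # mean-center each wavelength
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      scores = U * s                              # sample coordinates (PCs)
      explained = s**2 / (s**2).sum()

      print(f"variance explained by first 3 PCs: {explained[:3].sum():.1%}")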

  20. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  1. A Unified Approach to Measurement Error and Missing Data: Overview and Applications

    ERIC Educational Resources Information Center

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…

  2. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3
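
    As a flavour of the stochastic expansion methods named above, here is a minimal non-intrusive stochastic collocation sketch for a single Gaussian uncertain input; the model function is a cheap stand-in for an expensive fluid/structure simulation.

    ```python
    # Sketch: output mean/variance of a toy model under one Gaussian input,
    # via Gauss-Hermite quadrature (non-intrusive stochastic collocation).
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    def model(x):
        return np.sin(x) + 0.1 * x**2   # stand-in for an expensive simulation

    nodes, weights = hermegauss(20)     # quadrature for the weight exp(-x^2 / 2)
    weights = weights / weights.sum()   # normalize to a standard normal measure
    mean = np.sum(weights * model(nodes))
    variance = np.sum(weights * (model(nodes) - mean) ** 2)
    print(mean, variance)
    ```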

  3. PRN 93-3: Labeling Statement Prohibiting Application to Water

    EPA Pesticide Factsheets

    This notice explaining the policy on label statement prohibiting pesticide application to water pertains only to the labeling statement on pesticide products. It does not address the term wetlands as defined with respect to the Clean Water Act.

  4. A robust Multi-Band Water Index (MBWI) for automated extraction of surface water from Landsat 8 OLI imagery

    NASA Astrophysics Data System (ADS)

    Wang, Xiaobiao; Xie, Shunping; Zhang, Xueliang; Chen, Cheng; Guo, Hao; Du, Jinkang; Duan, Zheng

    2018-06-01

    Surface water is a vital resource for terrestrial life, while rapid urbanization results in diverse changes in the sizes, amounts, and quality of surface water. Accurately extracting surface water from remote sensing imagery is therefore very important for water environment conservation and water resource management. In this study, a new Multi-Band Water Index (MBWI) for Landsat 8 Operational Land Imager (OLI) images is proposed by maximizing the spectral difference between water and non-water surfaces using pure pixels. Based on the MBWI map, the K-means cluster method is applied to automatically extract surface water. The performance of MBWI is validated and compared with six widely used water indices at 29 sites in China. Results show that the proposed MBWI performs best, with the highest accuracy in 26 out of the 29 test sites. Compared with the other water indices, the MBWI yields lower mean water total errors by 9.31%-25.99%, and higher mean overall accuracies and kappa coefficients by 0.87%-3.73% and 0.06-0.18, respectively. The MBWI is also shown to robustly discriminate surface water from confusing backgrounds that are common sources of surface water extraction errors, e.g., mountainous shadows and dark built-up areas. In addition, the new index is shown to mitigate the seasonal and daily influences resulting from variations in solar conditions. MBWI thus holds potential as a useful surface water extraction technique for water resource studies and applications.
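
    A minimal sketch of the index-then-cluster workflow described above; NDWI stands in for the MBWI band combination (which is not reproduced here), and the reflectance grids are synthetic.

    ```python
    # Sketch: threshold-free surface-water extraction by clustering a water-index map.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    green = rng.uniform(0.02, 0.4, size=(100, 100))   # synthetic green-band reflectance
    nir = rng.uniform(0.01, 0.5, size=(100, 100))     # synthetic NIR-band reflectance

    index = (green - nir) / (green + nir)             # NDWI; water tends toward high values
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(index.reshape(-1, 1))
    means = [index.ravel()[labels == k].mean() for k in (0, 1)]
    water_mask = labels.reshape(index.shape) == int(np.argmax(means))  # wetter cluster
    ```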

  5. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate, the correct method being generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions for the estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure, the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform ordinary least squares on real imaging data. PMID:24634573
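
    A minimal sketch of the weighted (generalized) least squares fit for the special case where each control point's covariance is a scalar multiple of the identity; names, shapes, and the toy data are illustrative.

    ```python
    # Sketch: 2-D affine registration by generalized least squares with per-point weights.
    import numpy as np

    def fit_affine_gls(src, dst, variances):
        """src, dst: (n, 2) control points; variances: (n,) per-point noise scales."""
        n = src.shape[0]
        X = np.hstack([src, np.ones((n, 1))])    # design matrix for [A | t]
        W = np.diag(1.0 / variances)             # inverse-covariance weights
        B = np.linalg.solve(X.T @ W @ X, X.T @ W @ dst)
        return B.T                               # 2x3 affine matrix [A | t]

    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    dst = src @ np.array([[1.1, 0.1], [-0.1, 0.9]]) + 0.05
    print(fit_affine_gls(src, dst, np.array([1.0, 1.0, 2.0, 2.0])))
    ```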

  6. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
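
    A minimal sketch of the estimation idea, with synthetic observations standing in for error rates extracted from past error correction data (the paper derives these from the error correction record itself):

    ```python
    # Sketch: tracking a slowly drifting error rate with Gaussian process regression.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    t = np.linspace(0, 10, 50)[:, None]              # measurement times
    rate = 0.01 + 0.005 * np.sin(0.5 * t.ravel())    # drifting "true" error rate
    obs = rate + np.random.default_rng(2).normal(0, 5e-4, size=50)

    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-6))
    gp.fit(t, obs)
    pred, std = gp.predict(np.array([[10.5]]), return_std=True)  # predict the next round
    ```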

  7. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  8. Mapping and correction of the CMM workspace error with the use of an electronic gyroscope and neural networks--practical application.

    PubMed

    Swornowski, Pawel J

    2013-01-01

    The article presents the application of neural networks in determining and correction of the deformation of a coordinate measuring machine (CMM) workspace. The information about the CMM errors is acquired using an ADXRS401 electronic gyroscope. A test device (PS-20 module) was built and integrated with a commercial measurement system based on the SP25M passive scanning probe and with a PH10M module (Renishaw). The proposed solution was tested on a Kemco 600 CMM and on a DEA Global Clima CMM. In the former case, correction of the CMM errors was performed using the source code of WinIOS software owned by The Institute of Advanced Manufacturing Technology, Cracow, Poland and in the latter on an external PC. Optimum parameters of full and simplified mapping of a given layer of the CMM workspace were determined for practical applications. The proposed method can be employed for the interim check (ISO 10360-2 procedure) or to detect local CMM deformations, occurring when the CMM works at high scanning speeds (>20 mm/s). © Wiley Periodicals, Inc.

  9. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties to error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures, representing mesoscale features; and the sea level pressure field, representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may be more important in limiting predictability than errors in the unresolved, small-scale initial conditions.

  10. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters

    PubMed Central

    Moore, Timothy S.; Dowell, Mark D.; Bradt, Shane; Verdu, Antonio Ruiz

    2014-01-01

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie. PMID:24839311
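
    A minimal sketch of weight-based blending, assuming per-pixel memberships in clear-water optical types have already been computed; all numbers are synthetic.

    ```python
    # Sketch: blending blue/green and red/NIR chlorophyll retrievals by class weights.
    import numpy as np

    chl_bluegreen = np.array([0.5, 1.2, 8.0])   # blue/green algorithm output (mg m^-3)
    chl_rednir = np.array([0.9, 1.5, 6.5])      # red/NIR algorithm output (mg m^-3)
    w_clear = np.array([0.9, 0.5, 0.1])         # membership in clear-water optical types

    chl_blend = w_clear * chl_bluegreen + (1 - w_clear) * chl_rednir
    ```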

  11. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters.

    PubMed

    Moore, Timothy S; Dowell, Mark D; Bradt, Shane; Verdu, Antonio Ruiz

    2014-03-05

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie.

  12. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...
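
    A minimal sketch of the assimilation idea with a scalar Kalman update; the manuscript's actual scheme is not specified in this excerpt, and the numbers are illustrative volumetric water contents.

    ```python
    # Sketch: blending a soil-moisture forecast with a sensor observation,
    # weighting each by its error variance (scalar Kalman update).
    def kalman_update(theta_model, var_model, theta_obs, var_obs):
        gain = var_model / (var_model + var_obs)
        theta_analysis = theta_model + gain * (theta_obs - theta_model)
        var_analysis = (1 - gain) * var_model
        return theta_analysis, var_analysis

    theta, var = kalman_update(0.24, 0.002, 0.28, 0.001)
    ```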

  13. Applicability of Kinematic and Diffusive models for mud-flows: a steady state analysis

    NASA Astrophysics Data System (ADS)

    Di Cristo, Cristiana; Iervolino, Michele; Vacca, Andrea

    2018-04-01

    The paper investigates the applicability of Kinematic and Diffusive Wave models for mud-flows with a power-law shear-thinning rheology. In analogy with a well-known approach for turbulent clear-water flows, the study compares the steady flow depth profiles predicted by the approximated models with those of the Full Dynamic Wave model. For all the models, and assuming an infinitely wide channel, the analytical solution of the flow depth profiles, in terms of hypergeometric functions, is derived. The accuracy of the approximated models is assessed by computing the average, along the channel length, of the errors, for several values of the Froude and kinematic wave numbers. Assuming a threshold error value of 5%, the applicability conditions of the two approximations have been identified for several values of the power-law exponent, showing a crucial role of the rheology. The comparison with the clear-water results indicates that applicability criteria for clear-water flows do not apply to shear-thinning fluids, potentially leading to an incorrect use of approximated models if the rheology is not properly accounted for.

  14. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
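
    A minimal sketch of the simplest mitigation described above, masking bases whose quality scores fall below a threshold; the sequence and scores are illustrative.

    ```python
    # Sketch: masking low-quality base calls, since most errors come from low-score bases.
    def mask_low_quality(seq, quals, threshold=20):
        """quals: per-base Phred scores; low-quality calls are replaced with 'N'."""
        return "".join(b if q >= threshold else "N" for b, q in zip(seq, quals))

    print(mask_low_quality("ACGTACGT", [38, 40, 12, 35, 9, 30, 33, 37]))  # -> "ACNTNCGT"
    ```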

  15. An ABC estimate of pedigree error rate: application in dog, sheep and cattle breeds.

    PubMed

    Leroy, G; Danchin-Burge, C; Palhiere, I; Baumung, R; Fritz, S; Mériaux, J C; Gautier, M

    2012-06-01

    On the basis of correlations between pairwise individual genealogical kinship coefficients and allele sharing distances computed from genotyping data, we propose an approximate Bayesian computation (ABC) approach to assess pedigree file reliability through gene-dropping simulations. We explore the features of the method using simulated data sets and show that precision increases with the number of markers. An application is further made with five dog breeds, four sheep breeds and one cattle breed raised in France, displaying various characteristics and population sizes, using microsatellite or SNP markers. Depending on the breed, pedigree error estimates range between 1% and 9% in dog breeds, 1% and 10% in sheep breeds, and 4% in the cattle breed. © 2011 The Authors, Animal Genetics © 2011 Stichting International Foundation for Animal Genetics.
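
    A minimal ABC rejection sketch in the spirit of the approach; the stand-in simulator, prior, summary statistic, and tolerance are all illustrative (the paper uses gene-dropping on the actual pedigree).

    ```python
    # Sketch: ABC rejection sampling for a pedigree error rate.
    import numpy as np

    def simulate_summary(error_rate, rng):
        # Stand-in for gene-dropping: a correlation-like summary that decays with errors
        return 0.8 * (1 - error_rate) + rng.normal(0.0, 0.01)

    observed = 0.74
    rng = np.random.default_rng(3)
    accepted = []
    for _ in range(100_000):
        rate = rng.uniform(0.0, 0.2)                  # prior on the pedigree error rate
        if abs(simulate_summary(rate, rng) - observed) < 0.005:
            accepted.append(rate)                     # keep draws matching the data

    print(np.mean(accepted), len(accepted))           # posterior mean and sample size
    ```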

  16. Radiation processing applications in the Czechoslovak water treatment technologies

    NASA Astrophysics Data System (ADS)

    Vacek, K.; Pastuszek, F.; Sedláček, M.

    The regeneration of biologically clogged water wells by radiation proved to be a successful and economically beneficial process, among other promising applications of ionizing radiation in water supply technology. The application conditions and experience are mentioned. Potentially pathogenic Mycobacteria occurring in warm washing and bathing water are resistant to the usual chlorine and ozone concentrations. The radiation sensitivity of Mycobacteria suggested a device for destroying them by radiation. Some toxic substances in underground water can be efficiently degraded by gamma radiation directly in wells drilled as a hydraulic barrier surrounding the contaminated land area. A substantial decrease of the CN- concentration and the C.O.D. value was observed in water pumped from such a well equipped with cobalt sources and charcoal. Removing pathogenic contamination remains the main goal of radiation processing in water purification technologies. The decrease of liquid sludge specific filter resistance and the acceleration of sedimentation by irradiation are of minor technological importance. The hygienization of sludge cake from the mechanical belt filter press by electron beam appears to be the optimum application under Czechoslovak conditions. Potato and barley crop yields from experimental plots treated with sludge were higher than those obtained using manure. Biological sludge from municipal and food industry water purification plants contains nutritive components. Proper hygienization is a necessary condition for using it as a livestock feed supplement. Feeding experiments with broilers and pigs confirmed the possibility of partially (e.g., 50%) replacing soya, bone or fish flour in feed mixtures with dried sludge hygienized either by heat or by irradiation.

  17. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  18. Evaluation of circularity error in drilling of syntactic foam composites

    NASA Astrophysics Data System (ADS)

    Ashrith H., S.; Doddamani, Mrityunjay; Gaitonde, Vinayak

    2018-04-01

    Syntactic foams are widely used in structural applications in automobiles, aircraft and underwater vehicles due to their light weight combined with high compressive strength and low moisture absorption. Structural applications require the drilling of holes for assembly purposes. In this investigation, mathematical models based on response surface methodology are used to analyze the effects of cutting speed, feed, drill diameter and filler content on circularity error, at both the entry and exit level, in drilling of glass microballoon reinforced epoxy syntactic foam. Experiments are conducted based on a full factorial design using solid coated tungsten carbide twist drills. The parametric analysis reveals that circularity error is most strongly influenced by drill diameter, followed by spindle speed, at both the entry and exit level. It also reveals that increasing filler content decreases circularity error by 13.65% and 11.96% at the entry and exit levels, respectively. The average circularity error at the entry level is found to be 23.73% higher than at the exit level.

  19. Theoretical Calculation and Validation of the Water Vapor Continuum Absorption

    NASA Technical Reports Server (NTRS)

    Ma, Qiancheng; Tipping, Richard H.

    1998-01-01

    The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first principles, basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multispectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals, and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance which produces errors with a seasonal and latitude dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing is certainly climatologically significant and unacceptably large. While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, preclude

  20. Theoretical Calculation and Validation of the Water Vapor Continuum Absorption

    NASA Technical Reports Server (NTRS)

    Ma, Qiancheng; Tipping, Richard H.

    1998-01-01

    The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first principles, basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multi-spectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals, and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance which produces errors with a seasonal and latitude dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing is certainly climatologically significant and unacceptably large. While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, preclude tuning

  1. Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.

    A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on the NCEP ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with a current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is a degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to the decorrelation length will be examined. The prior error covariances are modelled using the compactly-supported, space-limited correlations developed at NASA DAO.
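
    A minimal sketch of a weak-constraint 4DVAR cost function for a toy linear model, where model-error increments are penalized alongside the background and observation misfits; all matrices, names, and the demo values are illustrative.

    ```python
    # Sketch: weak-constraint 4DVAR cost for a linear model x_{k+1} = M x_k + eta_k.
    import numpy as np

    def cost(x0, background, B_inv, obs, H, R_inv, M, Q_inv, eta):
        J = 0.5 * (x0 - background) @ B_inv @ (x0 - background)   # background term
        x = x0
        for k, y in enumerate(obs):
            x = M @ x + eta[k]                   # imperfect-model propagation
            innov = y - H @ x
            J += 0.5 * innov @ R_inv @ innov     # observation misfit
            J += 0.5 * eta[k] @ Q_inv @ eta[k]   # penalty on model error (weak constraint)
        return J

    n = 2
    I = np.eye(n)
    print(cost(np.zeros(n), np.zeros(n), I, [np.ones(n)], I, I, 0.9 * I, I, np.zeros((1, n))))
    ```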

  2. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. In this paper, the application of GPS signals in meteorology is discussed; this is a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
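
    A minimal sketch of the three-way comparison on synthetic predictors; real inputs would be GPS-derived quantities and surface meteorology, and all settings here are illustrative.

    ```python
    # Sketch: comparing MLR, SVM and ANN regressors by RMSE and MAE on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 3))                               # synthetic predictors
    y = 2.0 * X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=300)    # synthetic PWV target

    for name, model in [("MLR", LinearRegression()),
                        ("SVM", SVR()),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                             random_state=0))]:
        pred = model.fit(X, y).predict(X)
        print(name, mean_squared_error(y, pred) ** 0.5, mean_absolute_error(y, pred))
    ```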

  3. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  4. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
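
    A minimal sketch of first-order error propagation through a transformation chain, assuming independent zero-mean position errors per link so that rotated covariances simply add; names and values are illustrative.

    ```python
    # Sketch: accumulating position-error covariance along a registration chain.
    import numpy as np

    def propagate(covariances, rotations):
        """covariances: 3x3 error covariances per link; rotations: links -> common frame."""
        total = np.zeros((3, 3))
        for C, R in zip(covariances, rotations):
            total += R @ C @ R.T        # independent errors: covariances add after rotation
        return total                    # covariance of the end-to-end registration error

    total = propagate([np.eye(3) * 1e-2] * 3, [np.eye(3)] * 3)
    ```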

  5. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough; to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  6. A Study of Assessing Errors and Completeness of Research Application Forms Submitted to Institutional Ethics Committee (IEC) of a Tertiary Care Hospital.

    PubMed

    Shah, Pruthak C; Panchasara, Ashwin K; Barvaliya, Manish J; Tripathi, C B

    2016-09-01

    An application form is an essential requirement that must be submitted along with the research proposal to the Ethics Committee (EC). The aim was to check the completeness of, and find the errors in, application forms submitted to the EC of a tertiary care hospital. The application forms of research projects submitted to the Institutional Review Board (IRB), Government Medical College, Bhavnagar, Gujarat, India from January 2014 to June 2015 were analysed for completeness and errors with respect to the following: type of study, information about study investigators, sample size, study participants, title of the study, signatures of all investigators, regulatory approval, recruitment procedure, compensation to study participants, informed consent process, information about the sponsor, declaration of conflict of interest, plans for storage and maintenance of data, patient information sheet, informed consent forms and study-related documents. A total of 100 application forms were analysed. Among them, 98 were academic and 2 were industrial studies. The majority of academic studies were of the basic science type. In 63.26% of studies, the type of study was not mentioned in the title. The age group of subjects was not mentioned in 8.16% of application forms. In 34.6% of informed consent forms, the benefits of the study were not mentioned. The signature of investigators/co-investigators/Head of the Department was missing in 3.06% of cases. Our study suggests that the efficiency and speed of review will increase if investigators are more vigilant when filling in application forms. Regular meetings will be helpful for solving problems related to the content of application forms. Uniformity in the functioning of ECs can be achieved with a common application form for all ECs.

  7. Digital Mirror Device Application in Reduction of Wave-front Phase Errors

    PubMed Central

    Zhang, Yaping; Liu, Yan; Wang, Shuxue

    2009-01-01

    In order to correct the image distortion created by the mixing/shear layer, creative and effective correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. By combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources. PMID:22574016

  8. [Applicability of agricultural production systems simulator (APSIM) in simulating the production and water use of wheat-maize continuous cropping system in North China Plain].

    PubMed

    Wang, Lin; Zheng, You-fei; Yu, Qiang; Wang, En-li

    2007-11-01

    The Agricultural Production Systems Simulator (APSIM) was applied to simulate the 1999-2001 field experimental data and the 2002-2003 water use data at the Yucheng Experiment Station of the Chinese Ecosystem Research Network, with the aim of verifying the applicability of the model to the wheat-summer maize continuous cropping system in the North China Plain. The results showed that the average errors of the simulations of leaf area index (LAI), biomass, and soil moisture content in the 1999-2000 and 2000-2001 field experiments were 27.61%, 24.59% and 7.68%, and 32.65%, 35.95% and 10.26%, respectively, and those of LAI and biomass on the soils with high and low moisture content in 2002-2003 were 26.65% and 14.52%, and 23.91% and 27.93%, respectively. The simulations of LAI and biomass accorded well with the measured values, with coefficients of determination > 0.85 in 1999-2000 and 2002-2003, and 0.78 in 2000-2001, indicating that APSIM has good applicability for modeling crop biomass and soil moisture content in this continuous cropping system, although the simulation error for LAI was somewhat larger.

  9. Applications of nanotechnology in water and wastewater treatment.

    PubMed

    Qu, Xiaolei; Alvarez, Pedro J J; Li, Qilin

    2013-08-01

    Providing clean and affordable water to meet human needs is a grand challenge of the 21st century. Worldwide, water supply struggles to keep up with the fast growing demand, which is exacerbated by population growth, global climate change, and water quality deterioration. The need for technological innovation to enable integrated water management cannot be overstated. Nanotechnology holds great potential in advancing water and wastewater treatment to improve treatment efficiency as well as to augment water supply through safe use of unconventional water sources. Here we review recent development in nanotechnology for water and wastewater treatment. The discussion covers candidate nanomaterials, properties and mechanisms that enable the applications, advantages and limitations as compared to existing processes, and barriers and research needs for commercialization. By tracing these technological advances to the physicochemical properties of nanomaterials, the present review outlines the opportunities and limitations to further capitalize on these unique properties for sustainable water management. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.

  11. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for measurement error heterogeneity: by transformation of the raw data, by weighted filtering before modelling, and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  12. Engineering water repellency in granular materials for ground applications

    NASA Astrophysics Data System (ADS)

    Lourenco, Sergio; Saulick, Yunesh; Zheng, Shuang; Kang, Hengyi; Liu, Deyun; Lin, Hongjie

    2017-04-01

    Synthetic water repellent granular materials are a novel technology for constructing water-tight barriers and fills that is both inexpensive and reliant on an abundant local resource - soils. Our research is verifying its stability, so that perceived risks to practical implementation are identified and alleviated. Current ground stabilization measures are intrusive and use concrete, steel, and glass fibres as reinforcement elements (e.g. soil nails), so more sustainable approaches that require fewer raw materials are strongly recommended. Synthetic water repellent granular materials, with persistent water repellency, have been tested for water harvesting and proposed as landfill and slope covers. By chemically, physically and biologically adjusting the magnitude of water repellency, they offer the unique advantage of controlling water infiltration and can be deployed as semi-permeable or impermeable materials. Other advantages include (1) volumetric stability, (2) high air permeability and low water permeability, (3) suitability for flexible applications (permanent and temporary usage), and (4) improved aggregate-bitumen adhesion in pavements. Application areas include hydraulic barriers (e.g. for engineered slopes and waste containment), pavements and other waterproofing systems. Chemical treatments to achieve water repellency include the use of waxes, oils and silicone polymers, which affect the soil particles at sub-millimetric scales. To date, our research has been aimed at demonstrating their use as slope covers and establishing the chemical compounds that develop high and stable water repellency. Future work will determine the durability of the water repellent coatings and the mechanics and modelling of processes in such soils.

  13. Water quality at a biosolids-application area near Deer Trail, Colorado, 1993-1999

    USGS Publications Warehouse

    Yager, Tracy J.B.

    2014-01-01

    The Metro Wastewater Reclamation District (Metro District) in Denver, Colo., applied biosolids resulting from municipal sewage treatment to farmland in eastern Colorado beginning in December 1993. In mid-1993, the U.S. Geological Survey in cooperation with the Metro District began monitoring water quality at the biosolids-application area about 10 miles east of Deer Trail, Colo., to evaluate baseline water quality and the combined effects of natural processes, land uses, and biosolids applications on water quality of the biosolids application area. Water quality was characterized by baseline and post-biosolids-application sampling for selected inorganic and bacteriological constituents during 1993 through 1998, with some additional specialized sampling in 1999. The study included limited sampling of surface water and the unsaturated zone, but primarily focused on groundwater. See report for complete abstract.

  14. Inorganic Membranes: Preparation and Application for Water Treatment and Desalination

    PubMed Central

    McKay, Gordon; Buekenhoudt, Anita; Motmans, Filip; Khraisheh, Marwan; Atieh, Muataz

    2018-01-01

    Inorganic membrane science and technology is an attractive field within membrane separation technology, which has long been dominated by polymer membranes. Recently, inorganic membranes have been undergoing rapid development and innovation. Inorganic membranes offer resistance to harsh chemical cleaning, high temperature and wear, high chemical stability, long lifetimes, and autoclavability. All of these outstanding properties make inorganic membranes good candidates for water treatment and desalination applications. This paper is a state-of-the-art review of the synthesis, development, and application of different inorganic membranes for water and wastewater treatment. The inorganic membranes reviewed in this paper include liquid membranes, dynamic membranes, various ceramic membranes, carbon based membranes, silica membranes, and zeolite membranes. A brief description of the different synthesis routes for the development of inorganic membranes for application in the water industry is given, and each synthesis route is critically reviewed and compared. Thereafter, recent studies in the literature on different applications of inorganic membranes and their properties for water treatment and desalination are critically summarized. It was reported that inorganic membranes, despite their high synthesis cost, showed very promising results, with high flux, full salt rejection, and very low or no fouling. PMID:29304024

  15. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect the causes of model errors, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
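
    A minimal sketch of apportioning error into its constituent parts, using the standard decomposition of mean-square error into bias, variance, and covariance terms:

    ```python
    # Sketch: MSE = (mean bias)^2 + (std difference)^2 + covariance term.
    import numpy as np

    def mse_components(model, obs):
        bias2 = (model.mean() - obs.mean()) ** 2
        var = (model.std() - obs.std()) ** 2
        r = np.corrcoef(model, obs)[0, 1]
        covar = 2 * model.std() * obs.std() * (1 - r)
        return bias2, var, covar        # the three parts sum to the MSE

    m = np.array([1.0, 2.1, 2.9, 4.2])
    o = np.array([1.2, 1.9, 3.1, 4.0])
    assert np.isclose(sum(mse_components(m, o)), np.mean((m - o) ** 2))
    ```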

  16. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  17. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have been mostly developed for, and tested at, large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2], and these have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  18. Application of an optimization algorithm to satellite ocean color imagery: A case study in Southwest Florida coastal waters

    NASA Astrophysics Data System (ADS)

    Hu, Chuanmin; Lee, Zhongping; Muller-Karger, Frank E.; Carder, Kendall L.

    2003-05-01

    A spectra-matching optimization algorithm, designed for hyperspectral sensors, has been implemented to process SeaWiFS-derived multi-spectral water-leaving radiance data. The algorithm has been tested over Southwest Florida coastal waters. The total spectral absorption and backscattering coefficients can be well partitioned with the inversion algorithm, resulting in RMS errors generally less than 5% in the modeled spectra. For extremely turbid waters that come from either river runoff or sediment resuspension, the RMS error is in the range of 5-15%. The bio-optical parameters derived in this optically complex environment agree well with those obtained in situ. Further, the ability to separate backscattering (a proxy for turbidity) from the satellite signal makes it possible to trace water movement patterns, as indicated by the total absorption imagery. The derived patterns agree with those from concurrent surface drifters. For waters where CDOM overwhelmingly dominates the optical signal, however, the procedure tends to regard CDOM as the sole source of absorption, implying the need for better atmospheric correction and for adjustment of some model coefficients for this particular region.
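
    A minimal sketch of spectra-matching inversion, fitting two amplitudes so that a toy reflectance model matches an observed spectrum; the model form, band set, and constants are illustrative, not the paper's.

    ```python
    # Sketch: fit absorption/backscattering amplitudes by minimizing spectral residuals.
    import numpy as np
    from scipy.optimize import least_squares

    wl = np.linspace(412, 670, 8)                   # SeaWiFS-like band centres (nm)
    a_shape = 0.01 * np.exp((wl - 412) / 150)       # stand-in absorption spectral shape
    bb_shape = 0.003 * (412 / wl)                   # stand-in backscattering shape

    def rrs_model(p):
        a, bb = p[0] * a_shape, p[1] * bb_shape
        return 0.089 * bb / (a + bb)                # simple reflectance approximation

    rng = np.random.default_rng(5)
    observed = rrs_model([1.3, 0.8]) * (1 + 0.01 * rng.normal(size=wl.size))
    fit = least_squares(lambda p: rrs_model(p) - observed,
                        x0=[1.0, 1.0], bounds=(1e-6, np.inf))
    rms = np.sqrt(np.mean((rrs_model(fit.x) - observed) ** 2))
    ```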

  19. [Validation of a method for notifying and monitoring medication errors in pediatrics].

    PubMed

    Guerrero-Aznar, M D; Jiménez-Mesa, E; Cotrina-Luque, J; Villalba-Moreno, A; Cumplido-Corbacho, R; Fernández-Fernández, L

    2014-12-01

    To analyze the impact of a multidisciplinary and decentralized safety committee in the pediatric management unit, and the joint implementation of a computing network application for reporting medication errors, monitoring the follow-up of the errors, and analyzing the improvements introduced. An observational, descriptive, cross-sectional, pre-post intervention study was performed. An analysis was made of medication errors reported to the central safety committee in the twelve months prior to introduction, and those reported to the decentralized safety committee in the management unit in the nine months after implementation using the computer application, together with the strategies generated by the analysis of reported errors. Outcome measures were the number of reported errors per 10,000 days of stay, the number of reported errors with harm per 10,000 days of stay, types of error, severity categories, stage of the process, and the groups involved in the notification of medication errors. Reported medication errors increased 4.6-fold, from 7.6 notifications of medication errors per 10,000 days of stay in the pre-intervention period to 36 in the post-intervention period, rate ratio 0.21 (95% CI; 0.11-0.39) (P<.001). The rate of medication errors with harm or requiring monitoring per 10,000 days of stay was virtually unchanged from one period to the other, rate ratio 0.77 (95% CI; 0.31-1.91) (P>.05). The notification of potential errors or errors without harm per 10,000 days of stay increased 17.4-fold (rate ratio 0.005; 95% CI; 0.001-0.026, P<.001). The increase in medication errors notified in the post-intervention period reflects an increase in the motivation of health professionals to report errors through this new method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  20. Application of ion-sensitive sensors in water quality monitoring.

    PubMed

    Winkler, S; Rieger, L; Saracevic, E; Pressl, A; Gruber, G

    2004-01-01

    In recent years a trend towards in-situ monitoring has been observed, i.e. most new sensors for water quality monitoring are designed for direct installation in the medium, are compact in size and use measurement principles which minimise maintenance demand. Ion-sensitive sensors (ion-sensitive electrodes, ISE) are based on a well-known measurement principle, and recently some manufacturers have released probe types which are specially adapted for application in water quality monitoring. The functioning principle of ISE sensors, their advantages and limitations, and the different methods for sensor calibration are described. Experiences with ISE sensors from applications in sewer networks, at different sampling points within wastewater treatment plants, and in surface water monitoring are reported. An estimate of investment and operation costs in comparison with other sensor types is given.
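
    ISE calibration rests on the Nernst relation between electrode potential and ion activity. A minimal sketch of a two-point calibration, assuming concentrations stand in for activities (reasonable only at low ionic strength) and illustrative potentials in mV:

    ```python
    # Two-point ISE calibration via the Nernst relation E = E0 + S*log10(c).
    import math

    def calibrate(c1, e1, c2, e2):
        """Return (E0, slope S in mV/decade) from two standard solutions."""
        S = (e2 - e1) / (math.log10(c2) - math.log10(c1))
        E0 = e1 - S * math.log10(c1)
        return E0, S

    def concentration(e, E0, S):
        """Invert the calibrated electrode response to a concentration."""
        return 10 ** ((e - E0) / S)

    E0, S = calibrate(1e-3, 180.0, 1e-2, 238.5)  # ~58.5 mV/decade, near-Nernstian
    print(concentration(210.0, E0, S))           # sample potential -> concentration
    ```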

  1. A posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.
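
    The abstract does not give the OREO construction itself; purely as an illustration of the same idea (flagging where discretization error originates rather than where it has been advected), here is a generic Richardson-type indicator that compares a discrete operator applied at two resolutions, assuming a 1-D periodic grid of even length:

    ```python
    # Generic truncation-error-origin indicator (not the OREO detector itself).
    import numpy as np

    def laplacian(u, h):
        return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2

    def error_indicator(u, h, tol=1e-3):
        Lh = laplacian(u, h)               # operator on the fine grid
        L2h = laplacian(u[::2], 2 * h)     # operator on every second point ~ a 2h grid
        diff = np.abs(Lh[::2] - L2h)       # estimate of local truncation error
        return diff > tol                  # boolean refinement flags
    ```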

  2. Chemical application strategies to protect water quality

    USDA-ARS?s Scientific Manuscript database

    Management of turfgrass on golf courses and athletic fields often involves application of plant protection products to maintain or enhance turfgrass health and performance. However, the transport of fertilizer and pesticides with runoff to adjacent surface waters can enhance algal blooms, promote e...

  3. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with the systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. A one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.

  4. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

  5. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.

  6. Latent error detection: A golden two hours for detection.

    PubMed

    Saward, Justin R E; Stanton, Neville A

    2017-03-01

    Undetected errors in safety-critical contexts generate latent conditions that can contribute to a future safety failure. The detection of latent errors post-task completion was observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore the sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought that safety-critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  7. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB spectra from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
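
    The phase retrieval at the heart of this workflow can be sketched compactly: via the Kramers-Kronig relation, the phase is (up to sign convention) the Hilbert transform of half the log of the CARS-to-NRB intensity ratio. A minimal sketch, assuming 1-D spectra on a uniform frequency axis and reducing the paper's phase detrending and scaling to a simple linear detrend for illustration:

    ```python
    # Kramers-Kronig phase retrieval sketch for a CARS spectrum.
    import numpy as np
    from scipy.signal import hilbert

    def kk_retrieve(i_cars, i_nrb):
        """Return a Raman-like (imaginary) component from intensity spectra."""
        ratio = i_cars / i_nrb
        # phase = Hilbert transform of half the log of the ratio
        phase = np.imag(hilbert(0.5 * np.log(ratio)))
        # crude stand-in for phase detrending: subtract a fitted straight line
        x = np.arange(phase.size)
        phase -= np.polyval(np.polyfit(x, phase, 1), x)
        return np.sqrt(ratio) * np.sin(phase)
    ```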

  8. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    NASA Astrophysics Data System (ADS)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing—such as model selection or attribute selection—play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.
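
    The secure protocol itself is not reproduced here; the quantity it evaluates is simply the test error written as a normalized Hamming distance between predicted and true label vectors, as in this plain (non-private) sketch:

    ```python
    # Generalization error as a normalized Hamming distance.
    def generalization_error(predicted, actual):
        """Fraction of disagreeing positions = Hamming distance / length."""
        assert len(predicted) == len(actual)
        return sum(p != a for p, a in zip(predicted, actual)) / len(predicted)

    print(generalization_error([1, 0, 1, 1], [1, 1, 1, 0]))  # -> 0.5
    ```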

  9. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
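
    The computable estimate described here is conventionally written as an adjoint-weighted residual. The generic form below follows standard adjoint error estimation practice; the paper's exact discrete operators are not given in the abstract:

    ```latex
    % Output J evaluated on the current discretization, corrected by the
    % adjoint-weighted flow residual; \psi_h is the discrete adjoint for J.
    \[
      J(Q) \;\approx\; J_h(Q_h) \;-\; \psi_h^{T}\, R_h(Q_h)
    \]
    ```

    Adaptation then targets cells where the local contribution to the correction term, or the uncertainty in it, is largest.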

  10. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Selective demineralization of water by nanofiltration application to the defluorination of brackish water.

    PubMed

    Lhassani, A; Rumeau, M; Benjelloun, D; Pontie, M

    2001-09-01

    Nanofiltration is generally used to separate monovalent ions from divalent ions, but it is also possible to separate ions of the same valency by careful application of the transfer mechanisms involved. Analysis of the retention of halide salts reveals that small ions like fluoride are the best retained, and that this is even more marked under reduced pressure, when selectivity is greatest. The selective desalination of fluorinated brackish water is hence feasible, and drinking water can be produced directly at much lower cost than with reverse osmosis by optimizing the pressure for the type of water treated.

  12. Structure/property relationships in polymer membranes for water purification and energy applications

    NASA Astrophysics Data System (ADS)

    Geise, Geoffrey

    Providing sustainable supplies of purified water and energy is a critical global challenge for the future, and polymer membranes will play a key role in addressing these clear and pressing global needs for water and energy. Polymer membrane-based processes dominate the desalination market, and polymer membranes are crucial components in several rapidly developing power generation and storage applications that rely on membranes to control rates of water and/or ion transport. Much remains unknown about the influence of polymer structure on intrinsic water and ion transport properties, and these relationships must be developed to design next generation polymer membrane materials. For desalination applications, polymers with simultaneously high water permeability and low salt permeability are desirable in order to prepare selective membranes that can efficiently desalinate water, and a tradeoff relationship between water/salt selectivity and water permeability suggests that attempts to prepare such materials should rely on approaches that do more than simply vary polymer free volume. One strategy is to functionalize hydrocarbon polymers with fixed charge groups that can ionize upon exposure to water, and the presence of charged groups in the polymer influences transport properties. Additionally, in many emerging energy applications, charged polymers are exposed to ions that are very different from sodium and chloride. Specific ion effects have been observed in charged polymers, and these effects must be understood to prepare charged polymers that will enable emerging energy technologies. This presentation discusses research aimed at further understanding fundamental structure/property relationships that govern water and ion transport in charged polymer films considered for desalination and electric potential field-driven applications that can help address global needs for clean water and energy.

  13. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed
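
    The two statistics are simple to compute. A minimal sketch using the standard formulas (SEM from the baseline SD and a reliability index such as an ICC; MDC at 95% confidence), with hypothetical numbers:

    ```python
    # SEM and MDC95 from baseline SD and test-retest reliability.
    import math

    def sem(baseline_sd: float, reliability: float) -> float:
        """Standard error of measurement: SD * sqrt(1 - reliability)."""
        return baseline_sd * math.sqrt(1.0 - reliability)

    def mdc95(sem_value: float) -> float:
        """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
        return 1.96 * math.sqrt(2.0) * sem_value

    s = sem(baseline_sd=12.0, reliability=0.90)  # hypothetical task scores
    print(s, mdc95(s))  # a change smaller than MDC95 may be mostly measurement error
    ```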

  14. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
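
    The error-mitigation step can be sketched independently of the variational loop: run the same circuit at several artificially boosted error rates and extrapolate the measured expectation value back to zero error. A minimal sketch, assuming a linear extrapolation model (the appropriate model order is itself a design choice):

    ```python
    # Zero-error extrapolation of a measured expectation value.
    import numpy as np

    def extrapolate_to_zero(boost_factors, measured_values, order=1):
        """Fit <O>(r) vs. error-rate boost r and evaluate the fit at r = 0."""
        coeffs = np.polyfit(boost_factors, measured_values, order)
        return np.polyval(coeffs, 0.0)

    # e.g., the same circuit run at the native, 2x and 3x boosted error rates
    print(extrapolate_to_zero([1.0, 2.0, 3.0], [0.91, 0.84, 0.77]))  # -> ~0.98
    ```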

  15. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
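
    A concrete instance of such a code is the ISBN-10 check digit, a weighted sum modulo 11 that detects every single-digit error and every transposition of adjacent digits; a minimal sketch:

    ```python
    # ISBN-10 validation: weighted digit sum modulo 11.
    def isbn10_is_valid(isbn: str) -> bool:
        digits = [10 if ch in "Xx" else int(ch) for ch in isbn if ch not in "- "]
        if len(digits) != 10:
            return False
        # weights 10..1; a valid number sums to 0 mod 11
        return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

    print(isbn10_is_valid("0-306-40615-2"))  # True: a classic example ISBN
    ```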

  16. Hyperspectral imaging of water quality - past applications and future directions.

    NASA Astrophysics Data System (ADS)

    Ross, M. R. V.; Pavelsky, T.

    2017-12-01

    Inland waters control the delivery of sediment, carbon, and nutrients from land to ocean by transforming, depositing, and transporting constituents downstream. However, the dominant in situ conditions that control these processes are poorly constrained, especially at larger spatial scales. Hyperspectral imaging, a remote sensing technique that uses reflectance in hundreds of narrow spectral bands, can be used to estimate water quality parameters like sediment and carbon concentration over larger water bodies. Here, we review methods and applications for using hyperspectral imagery to generate near-surface two-dimensional models of water quality in lakes and rivers. Further, we show applications using newly available data from the National Ecological Observatory Network aerial observation platform in the Black Warrior and Tombigbee Rivers, Alabama. We demonstrate large spatial variation in chlorophyll, colored dissolved organic matter, and turbidity in each river and uneven mixing of water quality constituents for several kilometers. Finally, we demonstrate some novel techniques using hyperspectral imagery to deconvolve dissolved organic matter spectral signatures into specific organic matter components.

  17. An error taxonomy system for analysis of haemodialysis incidents.

    PubMed

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting a general error taxonomy system, one that assumed no specific specialty, to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able to investigate incidents and adverse events occurring in the dialysis setting, and is also able to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  18. Verifying and Postprocessing the Ensemble Spread-Error Relationship

    NASA Astrophysics Data System (ADS)

    Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

    2013-04-01

    With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to a variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present
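
    The basic diagnostic underlying this discussion is the correlation between ensemble spread and the magnitude of the ensemble-mean error. A minimal sketch on synthetic data (the paper's two new metrics are not reproduced here); note that the correlation stays well below 1 even for a perfectly calibrated ensemble, which is the theoretical-upper-limit effect described above:

    ```python
    # Spread-error correlation over a set of forecasts.
    import numpy as np

    def spread_error_correlation(ens, obs):
        """ens: (n_forecasts, n_members); obs: (n_forecasts,)."""
        spread = ens.std(axis=1, ddof=1)           # dispersion of each forecast
        abs_err = np.abs(ens.mean(axis=1) - obs)   # ensemble-mean error magnitude
        return np.corrcoef(spread, abs_err)[0, 1]  # Pearson correlation

    rng = np.random.default_rng(0)                 # synthetic demonstration
    sig = rng.uniform(0.5, 2.0, 1000)              # truly varying dispersion
    ens = rng.normal(0.0, sig[:, None], (1000, 30))
    obs = rng.normal(0.0, sig)
    print(spread_error_correlation(ens, obs))      # well below 1 despite perfect calibration
    ```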

  19. Error control for reliable digital data transmission and storage systems

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, R. H.

    1985-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
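
    The direct-from-syndrome idea can be sketched for single-byte correction over GF(256): if the check symbols are chosen so that a single error e in byte j gives syndromes S0 = e and S1 = e·α^j, then the location is log_α(S1/S0) and the value is S0, with no iterative locator-polynomial step. A minimal sketch with illustrative code parameters (not the specific d_min = 4 and d_min = 6 codes of the paper):

    ```python
    # Direct syndrome decoding of a single byte error over GF(256).
    GF_EXP, GF_LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):                  # build GF(256) tables, primitive poly 0x11d
        GF_EXP[i] = GF_EXP[i + 255] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d

    def gf_div(a, b):
        return 0 if a == 0 else GF_EXP[(GF_LOG[a] - GF_LOG[b]) % 255]

    def correct_single_byte(codeword):
        """Correct one byte error in place from syndromes S0 = sum(c_j), S1 = sum(c_j * a^j)."""
        s0 = s1 = 0
        for j, c in enumerate(codeword):
            s0 ^= c
            s1 ^= 0 if c == 0 else GF_EXP[(GF_LOG[c] + j) % 255]
        if s0 == 0 and s1 == 0:
            return codeword                # no error
        if s0 == 0 or s1 == 0:
            raise ValueError("uncorrectable (multi-byte) error detected")
        j = GF_LOG[gf_div(s1, s0)]         # error location = log_alpha(S1/S0)
        if j >= len(codeword):
            raise ValueError("inconsistent syndromes: uncorrectable error")
        codeword[j] ^= s0                  # error value is S0
        return codeword

    cw = [0] * 16                          # all-zero word is a valid codeword
    cw[5] ^= 0x37                          # inject a single-byte error
    print(correct_single_byte(cw) == [0] * 16)  # True
    ```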

  20. Error detection and correction unit with built-in self-test capability for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin

    1990-01-01

    The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
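
    The abstract does not give the chip's check matrix; as an illustration of the SEC-DED principle on a small scale, here is a sketch for an (8,4) extended Hamming code, where a nonzero syndrome with odd overall parity locates a single error and a nonzero syndrome with even overall parity flags a double error:

    ```python
    # SEC-DED decoding sketch on an (8,4) extended Hamming code.
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1, 0],   # parity-check rows: columns 1..7
                  [0, 1, 1, 0, 0, 1, 1, 0],   # are the binary encodings of the
                  [0, 0, 0, 1, 1, 1, 1, 0],   # Hamming positions ...
                  [1, 1, 1, 1, 1, 1, 1, 1]])  # ... plus an overall parity row

    def decode(word):
        """word: 8 bits. Correct single errors; flag double errors."""
        syndrome = (H @ word) % 2
        pos, overall = syndrome[:3], syndrome[3]
        if not pos.any() and overall == 0:
            return word, "ok"
        if overall == 1:                        # odd weight -> single error
            idx = pos[0] + 2 * pos[1] + 4 * pos[2]  # 1-based position 1..7
            word = word.copy()
            word[idx - 1 if idx > 0 else 7] ^= 1    # flip erroneous bit
            return word, "corrected"
        return word, "double error detected"    # even weight, nonzero syndrome

    w = np.zeros(8, dtype=int)                  # all-zero is a valid codeword
    w[2] = 1                                    # inject a single-bit error
    print(decode(w))                            # -> corrected back to zeros
    ```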

  1. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
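
    For the linear case, the parameter standard errors follow from the covariance matrix C = σ̂²(XᵀX)⁻¹, and the standard error of the fitted function at a point x₀ is √(x₀ᵀCx₀). A minimal sketch for an ordinary polynomial fit:

    ```python
    # Standard errors of fitted parameters and of the fitted function.
    import numpy as np

    def polyfit_with_errors(x, y, deg):
        X = np.vander(x, deg + 1)                  # design matrix
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = len(x) - (deg + 1)
        sigma2 = float(res[0]) / dof               # residual variance estimate
        cov = sigma2 * np.linalg.inv(X.T @ X)      # parameter covariance

        def fit_se(x0):                            # std. error of the fitted value
            v = np.vander(np.atleast_1d(x0), deg + 1)
            return np.sqrt(np.einsum('ij,jk,ik->i', v, cov, v))

        return beta, np.sqrt(np.diag(cov)), fit_se

    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0, 0.5, x.size)
    beta, beta_se, fit_se = polyfit_with_errors(x, y, 1)
    print(beta, beta_se, fit_se(5.0))              # errors grow away from the mean of x
    ```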

  2. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  3. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
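
    A common way to quantify the trade-offs discussed here is the approximation of Weatherhead et al. (1998) for the number of years n* needed to detect a linear trend ω at 95% confidence, given the noise standard deviation σ_N and lag-1 autocorrelation φ of the monthly anomalies (the paper's exact analysis may differ):

    ```latex
    \[
      n^{*} \;\approx\; \left[ \frac{3.3\,\sigma_{N}}{|\omega|}
            \sqrt{\frac{1+\phi}{1-\phi}} \,\right]^{2/3}
    \]
    ```

    Instrument noise enters only by inflating σ_N, which for upper-tropospheric water vapor is dominated by natural variability, consistent with the conclusion that measurement frequency matters more than random instrument error.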

  4. The relative importance of random error and observation frequency in detecting trends in upper tropospheric water vapor

    NASA Astrophysics Data System (ADS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-11-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.

  5. Remote Sensing Applications to Water Quality Management in Florida

    NASA Astrophysics Data System (ADS)

    Lehrter, J. C.; Schaeffer, B. A.; Hagy, J.; Spiering, B.; Barnes, B.; Hu, C.; Le, C.; McEachron, L.; Underwood, L. W.; Ellis, C.; Fisher, B.

    2013-12-01

    Optical datasets from estuarine and coastal systems are increasingly available for remote sensing algorithm development, validation, and application. With validated algorithms, the data streams from satellite sensors can provide unprecedented spatial and temporal data for local and regional coastal water quality management. Our presentation will highlight two recent applications of optical data and remote sensing to water quality decision-making in coastal regions of the state of Florida: (1) informing the development of estuarine and coastal nutrient criteria for the state of Florida, and (2) informing the rezoning of the Florida Keys National Marine Sanctuary. These efforts involved building up the underlying science to demonstrate the applicability of satellite data as well as an outreach component to educate decision-makers about the use, utility, and uncertainties of remote sensing data products. Scientific developments included testing existing algorithms and generating new algorithms for water clarity and chlorophyll-a in case II (CDOM- or turbidity-dominated) estuarine and coastal waters, and demonstrating the accuracy of remote sensing data products in comparison to traditional field-based measurements. Including members from decision-making organizations on the research team and interacting with decision-makers early and often in the process were key factors for the success of the outreach efforts and the eventual adoption of satellite data into the data records and analyses used in decision-making. [Figure captions: Florida coastal water bodies for which remote sensing imagery was applied to derive numeric nutrient criteria, with in situ observations used to validate imagery; Florida ocean color applied to the development of numeric nutrient criteria.]

  6. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  7. The role of NASA's Water Resources applications area in improving access to water quality-related information and water resources management

    NASA Astrophysics Data System (ADS)

    Lee, C. M.

    2016-02-01

    The NASA Applied Sciences Program plays a unique role in facilitating access to remote sensing-based water information derived from US federal assets towards the goal of improving science and evidence-based decision-making in water resources management. The Water Resources Application Area within NASA Applied Sciences works specifically to develop and improve water data products to support improved management of water resources, with partners who are faced with real-world constraints and conditions including cost and regulatory standards. This poster will highlight the efforts and collaborations enabled by this program that have resulted in integration of remote sensing-based information for water quality modeling and monitoring within an operational context.

  8. The role of NASA's Water Resources applications area in improving access to water quality-related information and water resources management

    NASA Astrophysics Data System (ADS)

    Lee, C. M.

    2016-12-01

    The NASA Applied Sciences Program plays a unique role in facilitating access to remote sensing-based water information derived from US federal assets towards the goal of improving science and evidence-based decision-making in water resources management. The Water Resources Application Area within NASA Applied Sciences works specifically to develop and improve water data products to support improved management of water resources, with partners who are faced with real-world constraints and conditions including cost and regulatory standards. This poster will highlight the efforts and collaborations enabled by this program that have resulted in integration of remote sensing-based information for water quality modeling and monitoring within an operational context.

  9. Eccentricity error identification and compensation for high-accuracy 3D optical measurement.

    PubMed

    He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z

    2013-07-01

    The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement systems. The identification and compensation of the circular target's systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require knowledge of the geometric parameters of the measurement system regarding the target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case where only one image with a single target is available. Experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation.

  10. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
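
    The correction stage can be sketched with a standard random-forest regressor: learn the per-pixel depth error from features, then subtract the prediction. The feature set below (measured depth, amplitude, pixel coordinates) and the synthetic error model are assumptions for illustration; the paper uses its own tailored feature vector:

    ```python
    # Random-forest regression of per-pixel time-of-flight depth error.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)                # synthetic training data
    n = 5000
    depth = rng.uniform(0.5, 5.0, n)              # measured distance (m)
    amp = rng.uniform(0.1, 1.0, n)                # return amplitude
    uv = rng.uniform(-1, 1, (n, 2))               # normalized pixel coordinates
    true_err = 0.02 * np.sin(4 * depth) / amp     # stand-in systematic error model
    X = np.column_stack([depth, amp, uv])

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, true_err)
    corrected = depth - forest.predict(X)         # apply per-pixel correction
    # residual error after correction (in practice, evaluate on held-out data)
    print(np.abs(true_err).mean(), np.abs(depth - corrected - true_err).mean())
    ```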

  11. Mechanism of ion adsorption to aqueous interfaces: Graphene/water vs. air/water.

    PubMed

    McCaffrey, Debra L; Nguyen, Son C; Cox, Stephen J; Weller, Horst; Alivisatos, A Paul; Geissler, Phillip L; Saykally, Richard J

    2017-12-19

    The adsorption of ions to aqueous interfaces is a phenomenon that profoundly influences vital processes in many areas of science, including biology, atmospheric chemistry, electrical energy storage, and water process engineering. Although classical electrostatics theory predicts that ions are repelled from water/hydrophobe (e.g., air/water) interfaces, both computer simulations and experiments have shown that chaotropic ions actually exhibit enhanced concentrations at the air/water interface. Although mechanistic pictures have been developed to explain this counterintuitive observation, their general applicability, particularly in the presence of material substrates, remains unclear. Here we investigate ion adsorption to the model interface formed by water and graphene. Deep UV second harmonic generation measurements of the SCN⁻ ion, a prototypical chaotrope, determined a free energy of adsorption within error of that for air/water. Unlike for the air/water interface, wherein repartitioning of the solvent energy drives ion adsorption, our computer simulations reveal that direct ion/graphene interactions dominate the favorable enthalpy change. Moreover, the graphene sheets dampen capillary waves such that rotational anisotropy of the solute, if present, is the dominant entropy contribution, in contrast to the air/water interface.

  12. Design of a water-powered DTH hammer for deep drilling application

    NASA Astrophysics Data System (ADS)

    Cho, Min Jae; Kim, Donguk; Oh, Joo Young; Yook, Se-Jin; Kim, Young Won

    2017-11-01

    A DTH (down-the-hole) hammer powered by highly pressurized fluid is a drilling tool that uses the percussive motion of a drill bit. DTH hammers using compressed air as the power source have long been widely used in drilling industries such as mining and geothermal applications. Another type of DTH hammer that uses pressurized water, called a water hammer, has recently seen deep drilling applications, yet it has rarely been investigated. In this study, we designed a water-powered DTH hammer, consisting mainly of a piston, a poppet valve, a cap and a bit, for deep drilling applications. We optimized the components of the hammer on the basis of 1D analysis results obtained with the commercial software AMESIM. An experimental study was also conducted to investigate the performance of the designed water hammer. We measured the pressure distribution inside the hammer system as a function of time, from which the impact frequency of the bit was estimated and analyzed in the frequency domain. In addition, some important parameters are discussed in conjunction with the limitation of impact frequency as a function of input pressure. We believe that this study provides design rules for a water-based DTH hammer for deep drilling applications. This work is supported by KITECH of the Korean government.

  13. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
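
    The LDSF strategy can be sketched directly: compute each match's mass error in ppm, estimate the empirical error distribution from high-confidence identifications, and re-filter all matches with the resulting small statistical MET. A minimal sketch; the k·SD window and the notion of a confident subset are illustrative assumptions:

    ```python
    # Large-MET search followed by small statistical MET filtration.
    import numpy as np

    def ppm_error(observed_mz, theoretical_mz):
        return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

    def ldsf_filter(obs, theo, confident_mask, k=3.0):
        """Keep matches within mean +/- k*SD of the confident subset's error."""
        err = ppm_error(np.asarray(obs), np.asarray(theo))
        mu = err[confident_mask].mean()             # systematic offset estimate
        sd = err[confident_mask].std(ddof=1)        # spread of the calibrated error
        return np.abs(err - mu) <= k * sd           # small-MET filtration mask
    ```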

  14. Rapid Response Flood Water Mapping

    NASA Technical Reports Server (NTRS)

    Policelli, Fritz; Brakenridge, G. R.; Coplin, A.; Bunnell, M.; Wu, L.; Habib, Shahid; Farah, H.

    2010-01-01

    Since the beginning of operation of the MODIS instrument on the NASA Terra satellite at the end of 1999, an exceptionally useful sensor and public data stream have been available for many applications including the rapid and precise characterization of terrestrial surface water changes. One practical application of such capability is the near-real time mapping of river flood inundation. We have developed a surface water mapping methodology based on using only bands 1 (620-672 nm) and 2 (841-890 nm). These are the two bands at 250 m, and the use of only these bands maximizes the resulting map detail. In this regard, most water bodies are strong absorbers of incoming solar radiation at the band 2 wavelength: it could be used alone, via a thresholding procedure, to separate water (dark, low radiance or reflectance pixels) from land (much brighter pixels) (1, 2). Some previous water mapping procedures have in fact used such single band data from this and other sensors that include similar wavelength channels. Adding the second channel of data (band 1), however, allows a band ratio approach which permits sediment-laden water, often relatively light at band 2 wavelengths, to still be discriminated, and, as well, provides some removal of error by reducing the number of cloud shadow pixels that would otherwise be misclassified as water.
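
    A minimal sketch of the two-band discrimination described: band 2 alone thresholds dark, clear water, and the band ratio recovers sediment-laden water that remains bright at band 2 wavelengths. The threshold values are illustrative assumptions, not the operational ones:

    ```python
    # Two-band MODIS water mask: NIR threshold plus band ratio.
    import numpy as np

    def water_mask(b1, b2, nir_thresh=0.05, ratio_thresh=0.7):
        """b1, b2: reflectance arrays for MODIS bands 1 and 2 (250 m)."""
        dark_water = b2 < nir_thresh                          # clear water: low NIR
        turbid_water = (b2 / np.clip(b1, 1e-6, None)) < ratio_thresh
        return dark_water | turbid_water                      # boolean water mask
    ```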

  15. ASCERTAINMENT OF ON-ROAD SAFETY ERRORS BASED ON VIDEO REVIEW

    PubMed Central

    Dawson, Jeffrey D.; Uc, Ergun Y.; Anderson, Steven W.; Dastrup, Elizabeth; Johnson, Amy M.; Rizzo, Matthew

    2011-01-01

    Summary Using an instrumented vehicle, we have studied several aspects of the on-road performance of healthy and diseased elderly drivers. One goal of such studies is to ascertain the type and frequency of driving safety errors. Because the judgment of such errors is somewhat subjective, we applied a taxonomy system of 15 general safety error categories and 76 specific safety error types. We also employed and trained professional driving instructors to review the video data of the on-road drives. In this report, we illustrate our rating system on a group of 111 drivers, ages 65 to 89. These drivers made errors in 13 of the 15 error categories, comprising 42 of the 76 error types. A mean (SD) of 35.8 (12.8) safety errors per drive was noted, with 2.1 (1.7) of them being judged as serious. Our methodology may be useful in applications such as intervention studies, and in longitudinal studies of changes in driving abilities in patients with declining cognitive ability. PMID:24273753

  16. (abstract) Experimental Results From Internetworking Data Applications Over Various Wireless Networks Using a Single Flexible Error Control Protocol

    NASA Technical Reports Server (NTRS)

    Kanai, T.; Kramer, M.; McAuley, A. J.; Nowack, S.; Pinck, D. S.; Ramirez, G.; Stewart, I.; Tohme, H.; Tong, L.

    1995-01-01

    This paper describes results from several wireless field trials in New Jersey, California, and Colorado, conducted jointly by researchers at Bellcore, JPL, and US West over the course of 1993 and 1994. During these trials, applications communicated over multiple wireless networks including satellite, low power PCS, high power cellular, packet data, and the wireline Public Switched Telecommunications Network (PSTN). Key goals included 1) designing data applications and an API suited to mobile users, 2) investigating internetworking issues, 3) characterizing wireless networks under various field conditions, and 4) comparing the performance of different protocol mechanisms over the diverse networks and applications. We describe experimental results for different protocol mechanisms and parameters, such as acknowledgment schemes and packet sizes. We show the need for powerful error control mechanisms such as selective acknowledgements and combining data from multiple transmissions. We highlight the possibility of a common protocol for all wireless networks, from micro-cellular PCS to satellite networks.

  17. Validation and Error Characterization for the Global Precipitation Measurement

    NASA Technical Reports Server (NTRS)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  18. Stakeholder Application of NOAA/NWS River Forecasts: Oil and Water?

    NASA Astrophysics Data System (ADS)

    Werner, K.; Averyt, K.; Bardlsey, T.; Owen, G.

    2011-12-01

    The literature strongly suggests that water management seldom uses forecasts for decision making despite the proven skill of the prediction system and the obvious application of these forecasts to mitigate risk. The literature also suggests that forecast usage is motivated most strongly by risk of failure of the water management objectives. In the semi-arid western United States where water demand has grown such that it roughly equals the long term supply, risk of failure has become pervasive. In the Colorado Basin, the US National Weather Service's Colorado Basin River Forecast Center (CBRFC) has partnered with the Western Water Assessment (WWA) and the Climate Assessment for the Southwest (CLIMAS) to develop a toolkit for stakeholder engagement and application of seasonal streamflow predictions. This toolkit has been used to facilitate several meetings both in the Colorado Basin and elsewhere to assess the factors that motivate, deter, and improve the application of forecasts in this region. The toolkit includes idealized (1) scenario exercises where participants are asked to apply forecasts to real world water management problems, (2) web based exercises where participants gain experience with forecasts and other online forecast tools, and (3) surveys that assess respondents' experience with and perceptions of forecasts and climate science. This talk will present preliminary results from this effort as well as how the CBRFC has adopted the results into its stakeholder engagement strategies.

  19. Monitoring robot actions for error detection and recovery

    NASA Technical Reports Server (NTRS)

    Gini, M.; Smith, R.

    1987-01-01

    Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe preliminary experiment of a system that they designed and constructed.

  20. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and faecal coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at

  1. 76 FR 52994 - Application for a License To Export Heavy Water

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-24

    ... NUCLEAR REGULATORY COMMISSION Application for a License To Export Heavy Water Pursuant to 10 CFR... producing an active pharmaceutical ingredient known as CTP-499, which incorporates heavy water as the source of deuterium to achieve the hydrogen-deuterium exchange. November 30, 2010 December...

  2. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address faults and erroneous codes in the practical application of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the perspective of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, a high-speed serial A/D converter acquisition card is used, and a temperature-measuring circuit based on the AD7420 is designed. Discrete data of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of the error code through the human-machine interface.

  3. StreamStats: A water resources web application

    USGS Publications Warehouse

    Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.

    2008-01-01

    Streamflow measurements are collected systematically over a period of years at partial-record stations to estimate peak-flow or low-flow statistics. Streamflow measurements usually are collected at miscellaneous-measurement stations for specific hydrologic studies with various objectives. StreamStats is a Web-based Geographic Information System (GIS) application that was created by the USGS, in cooperation with Environmental Systems Research Institute, Inc. (ESRI), to provide users with access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats functionality is based on ESRI’s ArcHydro Data Model and Tools, described on the Web at http://resources.arcgis.com/en/communities/hydro/01vn0000000s000000.htm. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection stations and user-selected ungaged sites. It also allows users to identify stream reaches that are upstream and downstream from user-selected sites, and to identify and obtain information for locations along the streams where activities that may affect streamflow conditions are occurring. This functionality can be accessed through a map-based user interface that appears in the user’s Web browser, or individual functions can be requested remotely as Web services by other Web or desktop computer applications. StreamStats can perform these analyses much faster than historically used manual techniques. StreamStats was designed so that each state would be implemented as a separate application, with a reliance on local partnerships to fund the individual applications, and a goal of eventual full national implementation. Idaho became the first state to implement StreamStats, in 2003. By mid-2008, 14 states had applications available to the public, and 18 other states were in various stages of implementation.

  4. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in the case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g. the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in the case of short measurement disruptions. A special form of burst errors is the so-called salt and pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.

  5. Identifying subassemblies by ultrasound to prevent fuel handling error in sodium fast reactors: First test performed in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paumel, Kevin; Lhuillier, Christian

    2015-07-01

    Identifying subassemblies by ultrasound is a method that is being considered to prevent handling errors in sodium fast reactors. It is based on the reading of a code (aligned notches) engraved on the subassembly head by an emitting/receiving ultrasonic sensor. This reading is carried out in sodium with high temperature transducers. The resulting one-dimensional C-scan can be likened to a binary code expressing the subassembly type and number. The first test performed in water investigated two parameters: width and depth of the notches. The code remained legible for notches as thin as 1.6 mm wide. The impact of the depth seems minor in the range under investigation. (authors)

  6. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
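
    To make the source-term idea concrete, the following minimal sketch (not the authors' implementation) fits a 1D numerical solution with a smoothing spline and evaluates the residual of an assumed steady advection-diffusion operator; the grid, the coefficients, and the fabricated solution `c_h` are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Model problem (assumed for illustration): steady 1D advection-diffusion
#   u * dc/dx - nu * d2c/dx2 = f(x),  with known u, nu, f.
u, nu = 1.0, 0.05
f = lambda x: np.sin(np.pi * x)

# 'c_h' stands in for a numerical solution on a single grid
# (here fabricated by perturbing a smooth profile).
x = np.linspace(0.0, 1.0, 41)
c_h = np.sin(np.pi * x) / (u * np.pi) + 1e-3 * np.cos(8 * np.pi * x)

# Fit the discrete solution with a smoothing spline: a continuously
# differentiable reconstruction, in the spirit of the blended local curves.
s = UnivariateSpline(x, c_h, k=5, s=1e-7)

# Error source term: residual of the exact operator applied to the
# reconstruction. This is what feeds the error transport equation (ETE).
tau = u * s.derivative(1)(x) - nu * s.derivative(2)(x) - f(x)
print("max |source term|:", np.abs(tau).max())
```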

  7. Fault-Tolerant Signal Processing Architectures with Distributed Error Control.

    DTIC Science & Technology

    1985-01-01

    Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications. ... However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to... significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and

  8. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  9. Economic impact of medication error: a systematic review.

    PubMed

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Eccentricity error identification and compensation for high-accuracy 3D optical measurement

    PubMed Central

    He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z

    2016-01-01

    The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement systems. The identification and compensation of the circular target's systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require knowledge of the geometric parameters of the measurement system regarding the target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case where only one image with a single target is available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265

  11. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  12. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  13. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
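
    A minimal numpy sketch of the GLS framing and per-flux t-tests, assuming a known measurement-error covariance and a toy 5x2 design matrix (both invented here); the paper's actual stoichiometry and simulation strategy are richer.

```python
import numpy as np
from scipy import stats

def gls_fit(X, y, Sigma):
    """Generalized least squares: y = X b + e, with Cov(e) = Sigma."""
    Si = np.linalg.inv(Sigma)
    cov_b = np.linalg.inv(X.T @ Si @ X)      # covariance of the estimates
    b = cov_b @ X.T @ Si @ y
    dof = len(y) - X.shape[1]
    t = b / np.sqrt(np.diag(cov_b))          # t statistic per flux
    p = 2 * stats.t.sf(np.abs(t), dof)       # two-sided p-values
    return b, t, p

# toy overdetermined system: 5 measurements, 2 unknown fluxes
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
Sigma = np.diag([0.05, 0.05, 0.1, 0.1, 0.2])   # assumed error model
y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(np.zeros(5), Sigma)
print(gls_fit(X, y, Sigma))
```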

  14. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  15. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.

  16. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  17. 77 FR 12830 - Pershing County Water Conservation District; Notice of Intent To File License Application, Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... Water Conservation District; Notice of Intent To File License Application, Filing of Pre-Application.... Submitted by: Pershing County Water Conservation District (Pershing County). e. Name of Project: Humboldt... the Commission's regulations. h. Potential Applicant Contact: Bennie Hodges, Pershing County Water...

  18. Quantum error correction for continuously detected errors with any number of error channels per qubit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt

    2004-08-01

    It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.

  19. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).
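
    As an illustration of minimum-zone evaluation by evolutionary search, the sketch below uses SciPy's differential evolution as a stand-in for the paper's GA, applied to a synthetic Archimedean spiral profile r = a + b*theta; the data, bounds, and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# synthetic polar samples of a nominally Archimedean profile r = a + b*theta
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 4.0 * np.pi, 200)
r = 2.0 + 0.5 * theta + rng.normal(0.0, 0.01, theta.size)

def zone_width(params):
    """Minimum-zone objective: width of the band enclosing all residuals."""
    a, b = params
    resid = r - (a + b * theta)
    return resid.max() - resid.min()

result = differential_evolution(zone_width,
                                bounds=[(1.0, 3.0), (0.0, 1.0)], seed=1)
print("profile error (minimum zone): %.4f" % result.fun)
```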

  20. Minimizing Experimental Error in Thinning Research

    Treesearch

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been proposed for prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others, nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...

  1. FMEA: a model for reducing medical errors.

    PubMed

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced in the aerospace industry in the 1960s. Early applications in the health care industry, dating back to the 1990s, included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) published a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).

  2. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) correct the relative errors produced in step 2, obtaining the relative errors of the second-order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
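
    As a concrete illustration of steps 1-2 and the relative-error series that feeds the Markov stage (the state-transition correction of steps 3-4 is omitted here), a minimal GM(1,1) sketch in Python, with invented water-level numbers:

```python
import numpy as np

def gm11(x0, n_forecast=1):
    """Fit a GM(1,1) grey model to series x0 and forecast ahead.

    Returns fitted values over the observed span plus n_forecast steps.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.r_[x0[0], np.diff(x1_hat)]         # de-accumulate

# annual maxima (hypothetical numbers, illustrative only)
levels = [5.21, 5.38, 5.05, 5.46, 5.60, 5.52]
fit = gm11(levels, n_forecast=2)
rel_err = (fit[:len(levels)] - levels) / levels  # series the Markov
print(fit, rel_err)                              # correction is built on
```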

  3. Application of Difference-in-Difference Techniques to the Evaluation of Drought-Tainted Water Conservation Programs.

    ERIC Educational Resources Information Center

    Bamezai, Anil

    1995-01-01

    Some of the threats to internal validity that arise when evaluating the impact of water conservation programs during a drought are illustrated. These include differential response to the drought, self-selection bias, and measurement error. How to deal with these problems when high-quality disaggregate data are available is discussed. (SLD)

  4. 40 CFR 410.30 - Applicability; description of the low water use processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Applicability; description of the low... PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing Subcategory § 410.30 Applicability; description of the low water use processing...

  5. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
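
    A rough sketch of the FAST idea as described (sampling an "ensemble" from a moving window along a single trajectory); the window length, lag, and toy 3-variable model are assumptions, and the actual GMAO implementation surely differs.

```python
import numpy as np

def fast_ensemble(trajectory, window, lag=1):
    """Build a FAST-style ensemble from a moving window of model states.

    trajectory : (n_steps, n_state) array of states from ONE model run.
    Returns anomalies usable as a background-error covariance sample.
    """
    members = trajectory[-window::lag]           # states in the window
    return members - members.mean(axis=0)

def covariance(anoms):
    m = anoms.shape[0]
    return anoms.T @ anoms / (m - 1)

# toy single trajectory of a 3-variable model
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
B = covariance(fast_ensemble(traj, window=30))
print(B.shape)   # (3, 3) flow-dependent background covariance estimate
```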

  6. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  7. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used for various applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are applied to video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error frames are processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM, than the Block Matching algorithm.
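
    For reference, PSNR and a rudimentary block-matching concealment step can be sketched as below; the block size, search range, and 2-row matching border are arbitrary choices, the lost block is assumed to be interior to the frame, and this is not the paper's exact algorithm.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def conceal_block(prev, cur, top, left, bs=16, search=8):
    """Replace a lost block in `cur` with the best-matching block from
    `prev`, chosen by SAD over the intact 2-row border above the block."""
    border = cur[top - 2:top, left:left + bs].astype(float)
    best, best_sad = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 2 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue
            sad = np.abs(border - prev[y - 2:y, x:x + bs]).sum()
            if sad < best_sad:
                best_sad, best = sad, prev[y:y + bs, x:x + bs]
    out = cur.copy()
    out[top:top + bs, left:left + bs] = best
    return out

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64)).astype(float)
clean = np.clip(prev + rng.normal(0, 3, prev.shape), 0, 255)
lost = clean.copy()
lost[20:36, 20:36] = 0                        # simulated transmission loss
print("PSNR after concealment: %.1f dB"
      % psnr(clean, conceal_block(prev, lost, 20, 20)))
```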

  8. 76 FR 9770 - Utah Board of Water Resources Notice of Successive Preliminary Permit Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Water Resources Notice of Successive Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On February 1, 2011, the Utah Board of Water... reservoir. Applicant Contact: Mr. Eric Millis, Utah Board of Water Resources, 1594 W. North Temple, Salt...

  9. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
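
    The simulation logic is easy to reproduce in outline. The sketch below contrasts the three methods' estimates against the programmed event density; durations and counts are arbitrary, the true fraction ignores event overlap, and whole-interval coverage is approximated by summed overlap.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(obs_len=600.0, interval=10.0, n_events=20, event_dur=3.0):
    """One observation period with randomly placed target events; returns
    the programmed engagement fraction plus three interval estimates."""
    starts = rng.uniform(0.0, obs_len - event_dur, n_events)
    edges = np.arange(0.0, obs_len + interval, interval)
    est = {"MTS": 0, "PIR": 0, "WIR": 0}
    for lo, hi in zip(edges[:-1], edges[1:]):
        # total overlap of all events with this interval
        overlap = np.clip(np.minimum(starts + event_dur, hi)
                          - np.maximum(starts, lo), 0, None).sum()
        est["PIR"] += overlap > 0                    # partial-interval
        est["WIR"] += overlap >= (hi - lo) - 1e-9    # whole-interval (approx.)
        # momentary time sampling looks only at the interval's end
        est["MTS"] += np.any((starts <= hi) & (hi <= starts + event_dur))
    n_int = len(edges) - 1
    true_frac = n_events * event_dur / obs_len       # ignores overlaps
    return true_frac, {k: v / n_int for k, v in est.items()}

print(simulate())   # PIR overestimates, WIR underestimates, MTS in between
```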

  10. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
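
    A Monte Carlo sketch of the phenomenon, using an invented two-stage design (a one-sample z-test where a second sample is collected only when the first stage looks promising, then pooled naively): the realized type I error probability departs from the nominal 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, reps, alpha = 30, 60, 20_000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n1)               # H0 is true (mean = 0)
    z1 = x.mean() * np.sqrt(n1)
    if 2 * stats.norm.sf(abs(z1)) < 0.20:      # data-dependent continuation
        x = np.r_[x, rng.normal(0.0, 1.0, n2)]
    z = x.mean() * np.sqrt(len(x))             # naive pooled final test
    rejections += 2 * stats.norm.sf(abs(z)) < alpha
print("realized type I error:", rejections / reps)   # != nominal 0.05
```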

  11. Analysis of Medication Error Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Young, Jonathan; Santell, John

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  12. Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis

    NASA Astrophysics Data System (ADS)

    Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.

    2017-12-01

    One of the essential climate variables for monitoring as well as detecting and attributing climate change is Sea Surface Temperature (SST). A long-term record of global SSTs is available, with observations ranging from ships in the early days to modern in-situ and space-based sensors (satellite/aircraft). There are inaccuracies associated with satellite-derived SSTs which can be attributed to errors in spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakages due to residual clouds. It is therefore important to estimate accurate errors in satellite-derived SST products to obtain the desired results in their applications. Generally, for validation purposes, satellite-derived SST products are compared against in-situ SSTs, which have inaccuracies due to spatio-temporal inhomogeneity between in-situ and satellite measurements. The standard deviation of their difference fields usually has contributions from both the satellite and the in-situ measurements. A real validation of any geophysical variable requires knowledge of the "true" value of that variable. Therefore a one-to-one comparison of satellite-based SST with in-situ data does not truly provide the real error in the satellite SST, and there will be ambiguity due to errors in the in-situ measurements and their collocation differences. Triple collocation (TC), or three-way error analysis using three mutually independent error-prone measurements, can be used to estimate the root-mean-square error (RMSE) associated with each of the measurements with a high level of accuracy, without treating any one system as a perfectly observed "truth". In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST product Climate Data Record. Along with the in-situ SST data, the third source of data used for this analysis is the AATSR reprocessing of climate (ARC) dataset for the corresponding
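
    The covariance-based form of triple collocation is compact enough to sketch. The version below assumes zero-mean, mutually independent errors and no calibration scaling between the three systems; the operational analysis is more involved.

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Classical covariance-based TC estimates of random-error std devs.

    Assumes a common truth and independent zero-mean errors; x, y, z are
    collocated series (e.g., SSTs) from three independent systems.
    """
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt(np.clip([ex2, ey2, ez2], 0, None))

# synthetic check: known error levels 0.1, 0.3, 0.5
rng = np.random.default_rng(3)
truth = rng.normal(20.0, 1.0, 100_000)
x, y, z = (truth + rng.normal(0, s, truth.size) for s in (0.1, 0.3, 0.5))
print(triple_collocation_rmse(x, y, z))   # ~ [0.1, 0.3, 0.5]
```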

  13. Error correcting circuit design with carbon nanotube field effect transistors

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% decrease in layout area and a 56.9% decrease in power consumption.
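
    Functionally, the (7, 4) code the circuit implements is the familiar syndrome decoder. A software model using the standard systematic generator and parity-check matrices (no claim about the paper's circuit topology) looks like this:

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

encode = lambda d: d @ G % 2

def decode(r):
    """Correct any single-bit error via the syndrome, then strip parity."""
    s = H @ r % 2
    if s.any():                                   # nonzero syndrome
        err = np.where((H.T == s).all(axis=1))[0][0]  # matching H column
        r = r.copy()
        r[err] ^= 1
    return r[:4]

d = np.array([1, 0, 1, 1])
c = encode(d)
c[5] ^= 1                                         # inject a single bit flip
assert (decode(c) == d).all()                     # corrected and recovered
```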

  14. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  15. Scale-up of water-based spider silk film casting using a film applicator.

    PubMed

    Agostini, Elisa; Winter, Gerhard; Engert, Julia

    2017-10-30

    Spider silk proteins for applications in drug delivery have attracted an increased interest during the past years. Some possible future medical applications for this biocompatible and biodegradable material are scaffolds for tissue engineering, implantable drug delivery systems and coatings for implants. Recently, we reported on the preparation of water-based spider silk films for drug delivery applications. In the current study, we describe the development of a manufacturing technique for casting larger spider silk films from aqueous solution employing a film applicator. Films were characterized in terms of morphology, water solubility, protein secondary structure, thermal stability, and mechanical properties. Different post-treatments were evaluated (phosphate ions, ethanol, steam sterilization and water vapor) to increase the content of β-sheets thereby achieving water insolubility of the films. Finally, the mechanical properties of the spider silk films were improved by incorporating 2-pyrrolidone as plasticizer. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems

    NASA Astrophysics Data System (ADS)

    Stewart, Michael K.; Morgenstern, Uwe; Gusyev, Maksym A.; Małoszewski, Piotr

    2017-09-01

    Kirchner (2016a) demonstrated that aggregation errors due to spatial heterogeneity, represented by two homogeneous subcatchments, could cause severe underestimation of the mean transit times (MTTs) of water travelling through catchments when simple lumped parameter models were applied to interpret seasonal tracer cycle data. Here we examine the effects of such errors on the MTTs and young water fractions estimated using tritium concentrations in two-part hydrological systems. We find that MTTs derived from tritium concentrations in streamflow are just as susceptible to aggregation bias as those from seasonal tracer cycles. Likewise, groundwater wells or springs fed by two or more water sources with different MTTs will also have aggregation bias. However, the transit times over which the biases are manifested are different because the two methods are applicable over different time ranges, up to 5 years for seasonal tracer cycles and up to 200 years for tritium concentrations. Our virtual experiments with two water components show that the aggregation errors are larger when the MTT differences between the components are larger and the amounts of the components are each close to 50 % of the mixture. We also find that young water fractions derived from tritium (based on a young water threshold of 18 years) are almost immune to aggregation errors as were those derived from seasonal tracer cycles with a threshold of about 2 months.

  17. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
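
    A minimal sketch of the ANOVA-based computation for balanced data, with invented sizes (50 patients x 30 fractions): the within-patient mean square estimates the random component, and the excess of the between-patient mean square estimates the systematic one.

```python
import numpy as np

def variance_components(setup):
    """Balanced one-factor random-effects ANOVA estimates.

    setup : (m_patients, n_fractions) array of setup errors (e.g., mm).
    Returns (population mean M, systematic Sigma, random sigma).
    """
    m, n = setup.shape
    patient_means = setup.mean(axis=1)
    msb = n * patient_means.var(ddof=1)          # between-patient MS
    msw = setup.var(axis=1, ddof=1).mean()       # within-patient MS
    sigma2 = msw                                 # random component
    Sigma2 = max((msb - msw) / n, 0.0)           # systematic component
    return setup.mean(), np.sqrt(Sigma2), np.sqrt(sigma2)

# synthetic check: Sigma = 2 mm systematic, sigma = 1 mm random
rng = np.random.default_rng(7)
mu = rng.normal(0.0, 2.0, size=(50, 1))         # per-patient systematic shifts
data = mu + rng.normal(0.0, 1.0, size=(50, 30))
print(variance_components(data))                # ~ (0, 2, 1)
```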

  18. Environmental application of nanotechnology: air, soil, and water.

    PubMed

    Ibrahim, Rusul Khaleel; Hayyan, Maan; AlSaadi, Mohammed Abdulhakim; Hayyan, Adeeb; Ibrahim, Shaliza

    2016-07-01

    Global deterioration of water, soil, and atmosphere by the release of toxic chemicals from ongoing anthropogenic activities is becoming a serious problem throughout the world. This poses numerous issues relevant to ecosystem and human health that intensify the application challenges of conventional treatment technologies. Therefore, this review sheds light on recent progress in nanotechnology and its vital role in meeting the imperative demand to monitor and treat emerging hazardous wastes with lower cost, less energy, and higher efficiency. Essentially, the key aspects of this account are to briefly outline the advantages of nanotechnology over conventional treatment technologies and to highlight the treatment applications of some nanomaterials (e.g., carbon-based nanoparticles, antibacterial nanoparticles, and metal oxide nanoparticles) in the following environments: (1) air (treatment of greenhouse gases, volatile organic compounds, and bioaerosols via adsorption, photocatalytic degradation, thermal decomposition, and air filtration processes), (2) soil (application of nanomaterials as amendment agents for phytoremediation processes and utilization of stabilizers to enhance their performance), and (3) water (removal of organic pollutants, heavy metals, and pathogens through adsorption, membrane processes, photocatalysis, and disinfection processes).

  19. Analytical three-point Dixon method: With applications for spiral water-fat imaging.

    PubMed

    Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G

    2016-02-01

    The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breathhold abdominal imaging. Spiral, high-resolution T1 -weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.

  20. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

    Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, the following advantages of near-infrared spectroscopy can be exploited: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters like the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used for evaluating the applicability and robustness of the analytical methods developed. This study investigates a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. A combination of these influences was considered in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules have been analyzed for particle sizes
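
    A generic PLS calibration with RMSEC/RMSEP readout can be sketched as follows; the spectra here are random surrogates rather than NIR data, and the component count and split are arbitrary assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# synthetic stand-in for NIR spectra (500 wavelengths) vs water content (%)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))
w = rng.normal(size=500)
y = X @ w * 0.02 + 10 + rng.normal(0, 0.2, 120)   # "loss on drying" reference

X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=1)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

print("RMSEC:", rmse(y_cal, pls.predict(X_cal).ravel()))   # calibration
print("RMSEP:", rmse(y_val, pls.predict(X_val).ravel()))   # prediction
```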

  1. 77 FR 19011 - Spartanburg Water System; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-29

    ... Water System; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On February 6, 2012, Spartanburg Water System... Schneider, General Manager, Spartanburg Water, 200 Commerce Street, P.O. Box 251, Spartanburg, South...

  2. 77 FR 16220 - Spartanburg Water System; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ... Water System; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On December 20, 2011, Spartanburg Water System... Contact: Ms. Sue Schneider, General Manager, Spartanburg Water, 200 Commerce Street, P.O. Box 251...

  3. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC) which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real world plant and measurement noise and quantization noise (from analog to digital and digital to analog converters) in these control methods are acted on as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first order ILC, of higher order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering that is very important in creating practical designs. Also, time domain methods are developed for higher order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and
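
    The noise-amplification mechanism can be seen in a scalar stand-in for one frequency component: a first-order ILC law keeps acting on fresh measurement noise, so the error converges not to zero but to a floor, of variance gamma*sigma^2/(2 - gamma) in this toy model (the scalar plant and gain are assumptions, not the thesis's formulation).

```python
import numpy as np

# Scalar first-order ILC: u[k+1] = u[k] + gamma * measured_error, where the
# measured error is the true error plus fresh noise every repetition.
gamma, sigma, n_iter, n_trials = 0.5, 0.1, 200, 2000
rng = np.random.default_rng(0)
r = 1.0                                       # reference (unit plant)

e_final = np.empty(n_trials)
for t in range(n_trials):
    u = 0.0
    for _ in range(n_iter):
        e = r - u                             # true tracking error
        u += gamma * (e + rng.normal(0.0, sigma))
    e_final[t] = r - u

predicted_rms = sigma * np.sqrt(gamma / (2.0 - gamma))   # stationary floor
print(e_final.std(), predicted_rms)           # Monte Carlo vs prediction
```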

  4. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    USDA-ARS?s Scientific Manuscript database

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing its utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...

  5. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale error is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  6. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments on two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale error is within 5% when the total flowrate is 5–60 m³/d and the water-cut is higher than 60%, and within 7% when the total flowrate is 2–60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  7. 48 CFR 22.1015 - Discovery of errors by the Department of Labor.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Discovery of errors by the... REGULATION SOCIOECONOMIC PROGRAMS APPLICATION OF LABOR LAWS TO GOVERNMENT ACQUISITIONS Service Contract Act of 1965, as Amended 22.1015 Discovery of errors by the Department of Labor. If the Department of...

  8. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore, they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
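
    One simple detection-and-correction mechanism of the kind suggested for selected data structures can be sketched with a checksum plus a redundant copy; the use of zlib.crc32 and the array name here are illustrative assumptions, not the authors' implementation.

        import numpy as np
        import zlib

        def protect(arr):
            """Snapshot a data structure and record its checksum."""
            return arr.copy(), zlib.crc32(arr.tobytes())

        def check_and_repair(arr, backup, crc):
            """Detect silent corruption via the checksum and restore from the
            redundant copy; returns True if a repair was made."""
            if zlib.crc32(arr.tobytes()) != crc:
                arr[:] = backup
                return True
            return False

        fock = np.ones((8, 8))                 # hypothetical iteration matrix
        backup, crc = protect(fock)
        fock[3, 3] = 999.0                     # simulate a silent bit flip
        assert check_and_repair(fock, backup, crc)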

  9. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability, and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding, and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop Measurement and Positioning System is a representative system that can, in theory, realize dynamic measurement. In this paper we conduct a deep study of dynamic error sources and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this theory, a simulation of the dynamic error is carried out. The dynamic error is quantified, and a pattern of volatility and periodicity has been found. Dynamic error characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.

  10. Factors affecting nursing students' intention to report medication errors: An application of the theory of planned behavior.

    PubMed

    Ben Natan, Merav; Sharon, Ira; Mahajna, Marlen; Mahajna, Sara

    2017-11-01

    Medication errors are common among nursing students. Nonetheless, these errors are often underreported. The aims of this study were to examine factors related to nursing students' intention to report medication errors, using the Theory of Planned Behavior, and to examine whether the theory is useful in predicting students' intention to report errors. This study has a descriptive cross-sectional design. The study population was recruited at a university and a large nursing school in central and northern Israel. A convenience sample of 250 nursing students took part in the study. The students completed a self-report questionnaire based on the Theory of Planned Behavior. The findings indicate that students' intention to report medication errors was high. The Theory of Planned Behavior constructs explained 38% of the variance in students' intention to report medication errors. The constructs of behavioral beliefs, subjective norms, and perceived behavioral control were found to affect this intention, with behavioral beliefs the most significant factor. The findings also reveal that students' fear of the reaction of superiors and colleagues to disclosure of the error may deter them from reporting it. Understanding factors related to reporting medication errors is crucial to designing interventions that foster error reporting. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data become indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
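
    As an illustration of the CCSDS-recommended 16-bit CRC for error detection, here is a bitwise sketch; the CCITT polynomial 0x1021 and all-ones initial value follow the common convention and are assumptions, not details taken from the report.

        def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF):
            """Bitwise CRC-16 with the CCITT polynomial x^16+x^12+x^5+1."""
            crc = init
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    if crc & 0x8000:
                        crc = ((crc << 1) ^ poly) & 0xFFFF
                    else:
                        crc = (crc << 1) & 0xFFFF
            return crc

        frame = b"telemetry frame"
        print(hex(crc16_ccitt(frame)))   # appended by sender, re-checked by receiver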

  12. Automated quantification of surface water inundation in wetlands using optical satellite imagery

    USGS Publications Warehouse

    DeVries, Ben; Huang, Chengquan; Lang, Megan W.; Jones, John W.; Huang, Wenli; Creed, Irena F.; Carroll, Mark L.

    2017-01-01

    We present a fully automated and scalable algorithm for quantifying surface water inundation in wetlands. Requiring no external training data, our algorithm estimates sub-pixel water fraction (SWF) over large areas and long time periods using Landsat data. We tested our SWF algorithm over three wetland sites across North America, including the Prairie Pothole Region, the Delmarva Peninsula and the Everglades, representing a gradient of inundation and vegetation conditions. We estimated SWF at 30-m resolution with accuracies ranging from a normalized root-mean-square-error of 0.11 to 0.19 when compared with various high-resolution ground and airborne datasets. SWF estimates were more sensitive to subtle inundated features compared to previously published surface water datasets, accurately depicting water bodies, large heterogeneously inundated surfaces, narrow water courses and canopy-covered water features. Despite this enhanced sensitivity, several sources of errors affected SWF estimates, including emergent or floating vegetation and forest canopies, shadows from topographic features, urban structures and unmasked clouds. The automated algorithm described in this article allows for the production of high temporal resolution wetland inundation data products to support a broad range of applications.

  13. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings. On the other hand, inpatient errors are more severe than outpatient errors.

  14. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
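
    The two competing update rules can be captured in a few lines; the Rescorla-Wagner-style form stands in for TER, and the learning rate and cue layout are illustrative assumptions.

        def trial_update(V, present, outcome, lr=0.2, rule="TER"):
            """One associative-learning trial. TER: each cue learns from the
            outcome minus the summed prediction of all present cues
            (Rescorla-Wagner-like). LER: each cue uses its own error."""
            total = sum(V[c] for c in present)
            for c in present:
                error = outcome - (total if rule == "TER" else V[c])
                V[c] += lr * error
            return V

        V = {"A": 0.0, "B": 0.0}
        for _ in range(50):
            trial_update(V, ["A", "B"], 1.0, rule="TER")  # compound conditioning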

  15. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. 76 FR 67179 - Spartanburg Water System; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-31

    ... Water System; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On June 23, 2011, Spartanburg Water System (Spartanburg..., General Manager, Spartanburg Water, 200 Commerce Street, P.O. Box 251, Spartanburg, South Carolina 29304...

  17. Spatial distribution of errors associated with multistatic meteor radar

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.

    2018-06-01

    With the recent increase in numbers of small and versatile low-power meteor radars, the opportunity exists to benefit from the simultaneous application of multiple systems spaced by only a few hundred km or less. Transmissions from one site can be recorded at adjacent receiving sites using various degrees of forward scatter, potentially allowing atmospheric conditions in the mesopause regions between stations to be diagnosed. This can give a better spatial overview of the atmospheric conditions at any time. Such studies have been carried out using a small version of these so-called multistatic meteor radars (MMRs), e.g. Chau et al. (Radio Sci 52:811-828, 2017, https://doi.org/10.1002/2016rs006225 ). These authors were also able to make measurements of vorticity and divergence. However, measurement uncertainties arise which need to be considered in any application of such techniques. Some errors are so severe that they prohibit useful application of the technique in certain locations, particularly for zones at the midpoints between the radar sites. In this paper, software is developed to allow these errors to be determined, and examples of typical errors involved are discussed. The software should be of value to others who wish to optimize their own MMR systems.

  18. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  19. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 9 2012-10-01 2012-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  20. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 9 2013-10-01 2013-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  1. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  2. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  3. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  4. Characterizing Satellite Rainfall Errors based on Land Use and Land Cover and Tracing Error Source in Hydrologic Model Simulation

    NASA Astrophysics Data System (ADS)

    Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.

    2011-12-01

    Hydrologic modeling has benefited from the operational production of high-resolution satellite rainfall products. The global coverage, near-real-time availability, and spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models to a wide range of environmental decision-making processes. Despite these successes, uncertainties due to the indirect nature of satellite rainfall estimates, and due to the hydrologic models themselves, remain a challenge in making meaningful and more evocative predictions. This study comprises breaking down the total satellite rainfall error into three independent components (hit bias, missed precipitation and false alarm), characterizing them as functions of land use and land cover (LULC), and tracing back the source of simulated soil moisture and runoff errors in a physically based distributed hydrologic model. Here we asked: in what way do the three independent total bias components (hit bias, missed, and false precipitation) affect the estimation of soil moisture and runoff in physically based hydrologic models? To obtain a clear picture of the question outlined above, we implemented a systematic approach by characterizing and decomposing the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps us understand the major sources of soil moisture and runoff errors in hydrologic model simulation and trace the information back to algorithm development and sensor type, which ultimately helps improve algorithms and will improve applications and data assimilation for GPM in the future. For forest and woodland and human land-use systems, the soil moisture error was mainly dictated by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. On the other side, the runoff error was dominated by the hit bias rather than the total bias. This difference occurred due to the presence of missed precipitation which is a major
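
    The three-way decomposition described above can be sketched directly: the total bias of a satellite field against a reference field splits into hit bias, missed precipitation (negative) and false precipitation. The 0.1 mm/h rain/no-rain threshold is an assumed value, and below-threshold residue is ignored.

        import numpy as np

        def decompose(sat, ref, thresh=0.1):
            """Split total satellite rainfall bias into hit, missed, false parts."""
            hit = (sat >= thresh) & (ref >= thresh)
            miss = (sat < thresh) & (ref >= thresh)
            false = (sat >= thresh) & (ref < thresh)
            hit_bias = np.sum(sat[hit] - ref[hit])
            missed = -np.sum(ref[miss])          # negative by convention
            false_prec = np.sum(sat[false])
            return hit_bias, missed, false_prec  # total bias ~ sum of the three

        rng = np.random.default_rng(7)
        ref = rng.gamma(2.0, 1.0, 1000) * (rng.random(1000) < 0.3)  # gauge-like field
        sat = ref * 1.1 * (rng.random(1000) < 0.9)                  # imperfect satellite
        hit_bias, missed, false_prec = decompose(sat, ref)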

  5. 77 FR 63302 - San Gabriel Valley Water Company dba, Fontana Water Company; Notice of Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 14428-000] San Gabriel Valley Water Company dba, Fontana Water Company; Notice of Application Accepted for Filing and Soliciting Comments, Motions To Intervene, Protests, Recommendations, and Terms and Conditions October 10, 2012. Take notice that the following hydroelectric...

  6. Watershed Scale Analysis of Groundwater Surface Water Interactions and Its Application to Conjunctive Management under Climatic and Anthropogenic Stresses over the US Sunbelt

    NASA Astrophysics Data System (ADS)

    Seo, Seung Beom

    Although water is one of the most essential natural resources, human activities have been exerting pressure on water resources. In order to reduce these stresses, two key issues threatening water resources sustainability were investigated in this study: the interaction between surface water and groundwater resources, and the impacts of groundwater withdrawal on streamflow depletion. First, a systematic decomposition procedure was proposed for quantifying the errors arising from various sources in the model chain when projecting changes in hydrologic attributes using near-term climate change projections. Apart from the changes left unexplained by GCMs, the process of customizing GCM projections to the watershed scale through a model chain (spatial downscaling, temporal disaggregation and a hydrologic model) also introduces errors, thereby limiting the ability to explain the observed changes in hydrologic variability. Towards this, we first propose metrics for quantifying the errors arising from different steps in the model chain in explaining the observed changes in hydrologic variables (streamflow, groundwater). The proposed metrics are then evaluated using detailed retrospective analyses projecting the changes in streamflow and groundwater attributes in four target basins that span diverse hydroclimatic regimes across the US Sunbelt. Our analyses focused on quantifying the dominant sources of errors in projecting the changes in eight hydrologic variables (mean and variability of seasonal streamflow, mean and variability of 3-day peak seasonal streamflow, mean and variability of 7-day low seasonal streamflow, and mean and standard deviation of groundwater depth) over four target basins using the Penn State Integrated Hydrologic Model (PIHM) between the periods 1956-1980 and 1981-2005. Retrospective analyses show that small/humid (large/arid) basins show increased (reduced) uncertainty in projecting the changes in hydrologic attributes. Further

  7. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
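
    The flavor of decoding directly from the syndrome can be sketched for a single-error-correcting code over GF(2^8); the primitive polynomial 0x11D and code layout are conventional choices, not details from the paper. With two parity checks, a single error of value e at position j gives syndromes s0 = e and s1 = e*alpha^j, so location and value fall out without any locator polynomial.

        # GF(2^8) arithmetic with primitive polynomial 0x11D
        EXP, LOG = [0] * 512, [0] * 256
        x = 1
        for i in range(255):
            EXP[i], LOG[x] = x, i
            x <<= 1
            if x & 0x100:
                x ^= 0x11D
        for i in range(255, 512):
            EXP[i] = EXP[i - 255]

        def gf_mul(a, b):
            return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

        def correct_single(r):
            """Locate and fix one symbol error directly from two syndromes.
            (s0 == 0 with s1 != 0 would signal an uncorrectable pattern; omitted.)"""
            s0 = s1 = 0
            for i, ri in enumerate(r):
                s0 ^= ri
                s1 ^= gf_mul(ri, EXP[i])
            if s0:                              # s0 is the error value e
                j = (LOG[s1] - LOG[s0]) % 255   # s1 = e * alpha^j  =>  position j
                r[j] ^= s0
            return r

        cw = [0] * 16        # the all-zero word is a valid codeword
        cw[5] ^= 0x2A        # inject a single-symbol fault
        assert correct_single(cw) == [0] * 16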

  8. A Unified Approach to Measurement Error and Missing Data: Details and Extensions

    ERIC Educational Resources Information Center

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…

  9. 76 FR 56427 - California Department of Water Resources; Notice of Application Accepted for Filing, Soliciting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-13

    ... Department of Water Resources; Notice of Application Accepted for Filing, Soliciting Comments, Motions To....: 2100-175. c. Date Filed: August 17, 2011. d. Applicant: California Department of Water Resources (CDWR... Request: The California Department of Water Resources (CDWR), licensee for the Feather River Hydroelectric...

  10. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  11. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  12. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  13. Error analysis and experiments of attitude measurement using laser gyroscope

    NASA Astrophysics Data System (ADS)

    Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao

    2018-03-01

    The precision of photoelectric tracking and measuring equipment on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thus produces image blur. In order to improve the precision of photoelectric equipment, the attitude of photoelectric equipment fixed to the platform must be measured. Currently, the laser gyroscope is widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error, and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. After error compensation, the gyroscope's accumulated attitude error over one hour decreased from 7000 arc sec to 5 arc sec. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
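
    A deterministic error model of the kind described (zero bias, scale factor, installation/misalignment) can be inverted as below; all numeric values and names are illustrative assumptions, not calibration results from the paper.

        import numpy as np

        def compensate(omega_meas, bias, scale, misalign):
            """Invert the gyro error model
            omega_meas = (I + S + E) @ omega_true + b,
            with S diagonal scale-factor errors and E off-diagonal
            installation (misalignment) errors."""
            M = np.eye(3) + np.diag(scale) + misalign
            return np.linalg.solve(M, omega_meas - bias)

        bias = np.array([1e-4, -2e-4, 5e-5])       # rad/s, illustrative
        scale = np.array([1e-3, -5e-4, 2e-4])      # scale-factor errors
        misalign = np.array([[0.0, 1e-4, -2e-4],
                             [-1e-4, 0.0, 3e-4],
                             [2e-4, -3e-4, 0.0]])
        omega = compensate(np.array([0.01, 0.0, 0.02]), bias, scale, misalign)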

  14. Impact of errors in short wave radiation and its attenuation on modeled upper ocean heat content

    DTIC Science & Technology

    Photosynthetically available radiation (PAR) and its attenuation with depth represent a forcing (source) term in the governing equation for the...and vertical attenuation of PAR have on the upper ocean model heat content. In the Monterey Bay area, we show that with a decrease in water clarity...attenuation coefficient. For Jerlov's type IA water (attenuation coefficient 0.049 m^-1), the relative error in surface PAR introduces an error

  15. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
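
    A generic IRLS loop of the kind referenced can be sketched as follows; the Huber-style weight function is an illustrative choice, not the paper's measurement-error weighting.

        import numpy as np

        def irls(X, y, weights_from_resid, n_iter=25, tol=1e-9):
            """Generic Iteratively Re-weighted Least Squares loop (sketch)."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                w = weights_from_resid(y - X @ beta)
                Xw = X * w[:, None]                      # row-weighted design
                beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
                if np.max(np.abs(beta_new - beta)) < tol:
                    return beta_new
                beta = beta_new
            return beta

        # Huber-style weights: downweight large residuals (c is illustrative)
        huber_w = lambda r, c=1.345: np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(50), rng.normal(size=50)])
        y = X @ np.array([1.0, -2.0]) + rng.standard_t(df=3, size=50)
        beta = irls(X, y, huber_w)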

  16. 77 FR 70425 - Tarrant Regional Water District; Notice of Application for Amendment of Exemption and Soliciting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... Regional Water District; Notice of Application for Amendment of Exemption and Soliciting Comments, Motions.... b. Project No.: P-13946-001. c. Date Filed: October 26, 2012. d. Applicant: Tarrant Regional Water... Regional Water District's water distribution system located in Tarrant County, Texas. The project does not...

  17. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, are highlighted.
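
    The random-error part of the analysis, the measurement spread across the product ensemble, can be sketched as follows; the gamma-distributed placeholder fields merely stand in for the six co-gridded products.

        import numpy as np

        rng = np.random.default_rng(1)
        # placeholders for six co-gridded products (3B42, 3B42RT, CMORPH,
        # PERSIANN, NRL, GSMaP), shape (lat, lon)
        products = np.stack([rng.gamma(2.0, 1.5, (90, 180)) for _ in range(6)])

        ens_mean = products.mean(axis=0)
        spread = products.std(axis=0, ddof=1)        # measurement spread
        rel_uncertainty = np.where(ens_mean > 0, spread / ens_mean, np.nan)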

  18. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites from July to December 2007 in China. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, due to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error even with the algorithm's high sensitivity. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  19. Development of a web application for water resources based on open source software

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents the research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.

  20. Propagation of error from parameter constraints in quantitative MRI: Example application of multiple spin echo T2 mapping.

    PubMed

    Lankford, Christopher L; Does, Mark D

    2018-02-01

    Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. As an example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to the T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
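
    A generic first-order (delta-method) version of such an error propagation can be sketched as below; this illustrative form is not the paper's exact analytical expression, and all numbers in the usage line are made up.

        def propagated_mse(sens, nuis_var, nuis_bias, base_var):
            """Propagate a constrained nuisance parameter's variance and bias
            into a fitted parameter's mean-squared error (first-order sketch).
            sens: sensitivity d(parameter)/d(nuisance) at the operating point."""
            return sens**2 * (nuis_var + nuis_bias**2) + base_var

        # e.g. how much error a biased, noisy flip-angle estimate adds to T2-hat
        mse_t2 = propagated_mse(sens=1.8, nuis_var=0.02, nuis_bias=0.03, base_var=0.5)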

  1. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxies»), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy responses into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and the risk of overfitting, we follow a Leave-One-Out Cross Validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and we assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Math
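
    The Leave-One-Out Cross Validation step can be sketched generically; the linear least-squares map between response spaces is an assumed stand-in for the actual proxy-to-exact mapping.

        import numpy as np

        def loo_cv_error(Z_proxy, Z_exact, fit, predict):
            """Leave-one-out CV score for a proxy-to-exact error model (sketch)."""
            n = len(Z_proxy)
            errs = []
            for i in range(n):
                keep = np.arange(n) != i
                model = fit(Z_proxy[keep], Z_exact[keep])
                errs.append(np.abs(predict(model, Z_proxy[i]) - Z_exact[i]).mean())
            return float(np.mean(errs))

        rng = np.random.default_rng(3)
        Zp = rng.normal(size=(20, 4))                # proxy responses, reduced space
        Ze = Zp @ rng.normal(size=(4, 2)) + 0.05 * rng.normal(size=(20, 2))
        fit = lambda Xp, Xe: np.linalg.lstsq(Xp, Xe, rcond=None)[0]
        predict = lambda A, xp: xp @ A
        score = loo_cv_error(Zp, Ze, fit, predict)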

  2. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  3. Innovative reuse of drinking water sludge in geo-environmental applications.

    PubMed

    Caniani, D; Masi, S; Mancini, I M; Trulli, E

    2013-06-01

    In recent years, the replacement of natural raw materials with new alternative materials, which acquire an economic, energetic and environmental value, has gained increasing importance. The considerable consumption of water has favoured the increase in the number of drinking water treatment plants and, consequently, the production of drinking water sludge. This paper proposes a protocol of analyses capable of evaluating chemical characteristics of drinking water sludge from surface water treatment plants. Thereby we are able to assess their possible beneficial use for geo-environmental applications, such as the construction of barrier layers for landfill and for the formation of "bio-soils", when mixed with the stabilized organic fraction of municipal solid waste. This paper reports the results of a study aimed at evaluating the quality and environmental aspects of reconstructed soils ("bio-soil"), which are used in much greater quantities than the usual standard, for "massive" applications in environmental actions such as the final cover of landfills. The granulometric, chemical and physical analyses of the sludge and the leaching test on the stabilized organic fraction showed the suitability of the proposed materials for reuse. The study proved that the reuse of drinking water sludge for the construction of barrier layers and the formation of "bio-soils" reduces the consumption of natural materials, the demand for landfill volumes, and offers numerous technological advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
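
    A minimal sketch of the mixed magnitude/complex idea: the first echo enters the fit through its magnitude only, while later echoes keep their phase. The residual below could be handed to a standard least-squares optimizer; the names, toy values and structure are assumptions, not the authors' implementation.

        import numpy as np

        def mixed_residual(model_echoes, meas_echoes):
            """First echo: magnitude-only (its phase is often corrupted by eddy
            currents); remaining echoes: full complex residual."""
            r0 = np.abs(meas_echoes[0]) - np.abs(model_echoes[0])
            rest = meas_echoes[1:] - model_echoes[1:]
            return np.concatenate([[r0], rest.real, rest.imag])

        meas = np.array([1.0 + 0.9j, 0.7 + 0.1j, 0.5 + 0.05j])    # toy echo train
        model = np.array([1.3 + 0.0j, 0.72 + 0.08j, 0.48 + 0.06j])
        r = mixed_residual(model, meas)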

  5. Final Rule: Definition of “Waters of the United States” – Addition of Applicability Date to 2015 Clean Water Rule

    EPA Pesticide Factsheets

    Link to the final rule adding an applicability date to the 2015 Clean Water Rule. The 2015 Rule will not be applicable until two years following publication of the applicability date rule in the Federal Register.

  6. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

    Procedures used in characterizing geometric error sources for a spaceborne imaging system are described, using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% level.

  7. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
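
    The key property is that the detection value is commutative, so a reproducible program section yields the same value regardless of packet ordering. A tiny sketch with XOR checksums (names and values illustrative):

        from functools import reduce

        def node_checksum(packets):
            """XOR is commutative and associative, so the value is independent
            of the order in which packets were injected into the network."""
            return reduce(lambda a, b: a ^ b, packets, 0)

        def suspect_nodes(run1, run2):
            """Nodes whose per-section checksums differ between two runs."""
            return sorted(n for n in run1 if run1[n] != run2[n])

        run1 = {0: node_checksum([0x1A, 0x2B]), 1: node_checksum([0x3C])}
        run2 = {0: node_checksum([0x2B, 0x1A]), 1: node_checksum([0x3D])}  # node 1 faulty
        assert suspect_nodes(run1, run2) == [1]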

  8. Strategies to increase patient safety in Hemodialysis: Application of the failure mode and effects analysis system (FMEA system).

    PubMed

    Arenas Jiménez, María Dolores; Ferre, Gabriel; Álvarez-Ude, Fernando

    as AEs could improve safety by facilitating the implementation of preventive measures. The application of the FMEA system allows stratifying real and potential errors in dialysis units and acting with the appropriate degree of urgency, developing and implementing the necessary preventive and improvement measures. Copyright © 2017 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.

  9. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  10. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is tested via two
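
    The weight computation described above (weights from the measurement-error pdfs) is the heart of the particle filter update; below is a minimal sketch in which the 1-D state, Laplace error density and effective-sample-size threshold are illustrative assumptions.

        import numpy as np

        def pf_measurement_update(particles, weights, z, error_pdf, rng):
            """Weight update with an arbitrary (non-Gaussian) measurement-error
            pdf, plus multinomial resampling when the effective sample size
            collapses."""
            weights = weights * error_pdf(z - particles)   # innovation likelihood
            weights /= weights.sum()
            if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

        rng = np.random.default_rng(0)
        parts = rng.normal(0.0, 1.0, 500)                  # 1-D position hypotheses
        w = np.full(500, 1 / 500)
        laplace = lambda e, b=0.3: np.exp(-np.abs(e) / b) / (2 * b)  # heavy-tailed pdf
        parts, w = pf_measurement_update(parts, w, z=0.4, error_pdf=laplace, rng=rng)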

  11. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  12. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
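
    For contrast with the fuzzy rule-based model the paper proposes, the traditional crisp RPN it replaces is simply the product of severity, occurrence and detectability scores; the brachytherapy failure modes and scores below are hypothetical placeholders.

        def rpn(severity, occurrence, detectability):
            """Classic FMECA risk priority number on 1-10 scales."""
            return severity * occurrence * detectability

        modes = {
            "wrong dwell time entered": (9, 3, 4),
            "source transfer interrupted": (7, 2, 3),
            "patient setup error": (8, 4, 5),
        }
        for name, sod in sorted(modes.items(), key=lambda kv: -rpn(*kv[1])):
            print(f"{rpn(*sod):4d}  {name}")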

  13. Errors in otology.

    PubMed

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  14. Reed Solomon codes for error control in byte organized computer memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32Kx8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.

  15. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    ERIC Educational Resources Information Center

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…

  16. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
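
    For reference, the standard error diffusion baseline against which the optimized palette is compared can be sketched with the classic Floyd-Steinberg kernel; the demo palette is arbitrary, and the paper's SSQ palette design and opponent-color error metric are omitted here.

        import numpy as np

        def error_diffuse(img, palette):
            """Floyd-Steinberg error diffusion onto a fixed palette.
            img: float array (H, W, 3) in [0, 1]; palette: (K, 3) array."""
            work = img.astype(float).copy()
            out = np.empty_like(work)
            H, W, _ = work.shape
            for y in range(H):
                for x in range(W):
                    k = np.argmin(((palette - work[y, x])**2).sum(axis=1))
                    out[y, x] = palette[k]
                    err = work[y, x] - palette[k]      # diffuse quantization error
                    if x + 1 < W:
                        work[y, x + 1] += err * 7 / 16
                    if y + 1 < H:
                        if x > 0:
                            work[y + 1, x - 1] += err * 3 / 16
                        work[y + 1, x] += err * 5 / 16
                        if x + 1 < W:
                            work[y + 1, x + 1] += err * 1 / 16
            return out

        palette = np.array([[0, 0, 0], [1, 1, 1], [1, 0, 0],
                            [0, 1, 0], [0, 0, 1]], dtype=float)
        dithered = error_diffuse(np.random.rand(32, 32, 3), palette)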

  17. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
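
    The r-power concept is straightforward to approximate by simulation. The sketch below estimates, under toy assumptions (independent one-sided z-tests, a common standardized effect delta on the false nulls, single-step Bonferroni), the probability of rejecting at least r of the m1 false null hypotheses; the paper's analytical formulas and the rPowerSampleSize implementation are not reproduced here.

      import numpy as np
      from scipy import stats

      def r_power_mc(n, m, m1, delta, r, alpha=0.05, sims=20000, seed=1):
          rng = np.random.default_rng(seed)
          crit = stats.norm.ppf(1 - alpha / m)        # one-sided Bonferroni cutoff
          # z-statistics: false nulls have mean delta*sqrt(n), true nulls mean 0.
          means = np.concatenate([np.full(m1, delta * np.sqrt(n)), np.zeros(m - m1)])
          z = rng.standard_normal((sims, m)) + means
          rejected_false = (z[:, :m1] > crit).sum(axis=1)
          return (rejected_false >= r).mean()

      # e.g. 10 endpoints, 4 with standardized effect 0.3, want >= 2 rejections
      print(r_power_mc(n=100, m=10, m1=4, delta=0.3, r=2))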

  18. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly reduced.

  19. Land application of sugar beet by-products: effects on runoff and percolating water quality.

    PubMed

    Kumar, Kuldip; Rosen, Carl J; Gupta, Satish C; McNearney, Matthew

    2009-01-01

    Water quality concerns, including greater potential for nutrient transport to surface waters resulting in eutrophication and nutrient leaching to ground water, exist when agricultural or food processing industry wastes and by-products are land applied. Plot- and field-scale studies were conducted to evaluate the effects of sugar beet by-products on NO3-N and P losses and biochemical oxygen demand (BOD) in runoff and NO3-N concentrations in percolating waters. In the runoff plot study, treatments in the first year included two rates (224 and 448 Mg ha(-1) fresh weight) of pulp and spoiled beets and a nonfertilized control. In the second year, no by-products were applied on the treated plots, the control treatment was fertilized with N fertilizer, and an additional treatment was added as a nonfertilized control in buffer areas. Wheat (Triticum aestivum L.) was grown in the year of by-product application and sugar beet (Beta vulgaris L.) in the following year. In the percolation field study, the treatments were the control, pulp (224 Mg ha(-1)), and spoiled beets (224 Mg ha(-1)). Results from the runoff plot showed that both by-products caused immobilization of soil inorganic N and thus reduced NO3-N losses in runoff and soil waters during the first growing season. There was some risk of NO3-N exceeding the drinking water limit of 10 mg L(-1), especially between the period of wheat harvest and soil freezing in fall when pulp was applied at 448 Mg ha(-1). The field-scale study showed that by-product application at 224 Mg ha(-1) did not result in increased ground water NO3-N concentrations. Application of spoiled beets at both rates caused significantly higher BODs in runoff in the first year of application. The concentrations of total and soluble reactive P (SRP) were also higher from both rates of spoiled beet application and from the higher application rate of pulp during the 2-yr study period. These high BODs and total P and SRP concentrations in runoff waters

  20. Upstream water resource management to address downstream pollution concerns: A policy framework with application to the Nakdong River basin in South Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Taeyeon; Rhodes, Charles; Shah, Farhed A.

    2015-02-01

    An empirical framework for assisting with water quality management is proposed that relies on open-source hydrologic data. Such data are measured periodically at fixed water stations and commonly available in time-series form. To fully exploit the data, we suggest that observations from multiple stations should be combined into a single long-panel data set, and an econometric model developed to estimate upstream management effects on downstream water quality. Selection of the model's functional form and explanatory variables would be informed by rating curves, and idiosyncrasies across and within stations handled in an error term by testing contemporaneous correlation, serial correlation, and heteroskedasticity. Our proposed approach is illustrated with an application to the Nakdong River basin in South Korea. Three alternative policies to achieve downstream BOD level targets are evaluated: upstream water treatment, greater dam discharge, and development of a new water source. Upstream water treatment directly cuts off incoming pollutants, thereby presenting the smallest variation in its downstream effects on BOD levels. Treatment is advantageous when reliability of water quality is a primary concern. Dam discharge is a flexible tool, and may be used strategically during a low-flow season. We consider development of a new water corridor from an extant dam as our third policy option. This turns out to be the most cost-effective way of securing lower BOD levels in the downstream target city. Even though we consider a relatively simple watershed to illustrate the usefulness of our approach, it can be adapted easily to analyze more complex upstream-downstream issues.
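
    A minimal sketch of the long-panel idea: stack observations from several stations and regress downstream BOD on upstream covariates with station fixed effects. The synthetic data, variable names, and plain least-squares fit are assumptions; the paper additionally tests contemporaneous correlation, serial correlation, and heteroskedasticity in the error term.

      import numpy as np

      rng = np.random.default_rng(0)
      n_stations, n_months = 5, 120
      station = np.repeat(np.arange(n_stations), n_months)
      flow = rng.lognormal(3.0, 0.4, n_stations * n_months)          # dam discharge proxy
      upstream_load = rng.lognormal(1.0, 0.5, n_stations * n_months) # pollutant load proxy
      bod = (2.0 + 0.8 * np.log(upstream_load) - 0.5 * np.log(flow)
             + 0.3 * station + rng.normal(0, 0.2, station.size))

      # Design matrix: station dummies (fixed effects) plus log covariates.
      X = np.column_stack([
          (station[:, None] == np.arange(n_stations)).astype(float),
          np.log(upstream_load),
          np.log(flow),
      ])
      beta, *_ = np.linalg.lstsq(X, bod, rcond=None)
      print("log-load and log-flow coefficients:", beta[-2:])  # ~0.8 and ~-0.5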

  1. Propagation of angular errors in two-axis rotation systems

    NASA Astrophysics Data System (ADS)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
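
    The root-sum-of-squares budget is easy to reproduce. The sketch below combines toleranced error sources scaled by sensitivity weighting factors, in the spirit described above; the listed sources, tolerances, and weights are invented placeholders, not values from the paper.

      import math

      # (source, tolerance in arcsec, sensitivity weight) -- hypothetical entries
      budget = [
          ("azimuth bearing wobble",        5.0, 1.00),
          ("elevation axis orthogonality",  8.0, 0.71),
          ("encoder scale error",           3.0, 1.00),
          ("base leveling residual",       10.0, 0.50),
      ]

      total = math.sqrt(sum((tol * w) ** 2 for _, tol, w in budget))
      print(f"k=1 (67%) RSS pointing error: {total:.1f} arcsec")
      print(f"k=2 (95%) RSS pointing error: {2 * total:.1f} arcsec")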

  2. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. In addition, pointing error analysis of the Risley-prism system has provided results only for the case when the component errors, prism orientation errors, and assembly errors are known with certainty. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.

  3. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
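
    A hedged sketch of the basic idea: bootstrap the standard error of an estimated variance component. Here persons are resampled and the between-person variance component re-estimated each time; this toy one-facet setup and the naive (uncorrected) resampling are assumptions — Brennan's bias-correcting procedures are what the abstract actually concerns.

      import numpy as np

      rng = np.random.default_rng(42)
      n_persons, n_items = 50, 10
      person_effect = rng.normal(0, 1.0, n_persons)            # true sigma^2_p = 1
      scores = person_effect[:, None] + rng.normal(0, 0.7, (n_persons, n_items))

      def var_person(x):
          # ANOVA-style estimate: variance of person means minus the
          # error-variance share contributed by averaging over items.
          return x.mean(axis=1).var(ddof=1) - x.var(axis=1, ddof=1).mean() / x.shape[1]

      boot = np.array([
          var_person(scores[rng.integers(0, n_persons, n_persons)])
          for _ in range(2000)
      ])
      print("estimate:", var_person(scores), " bootstrap SE:", boot.std(ddof=1))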

  4. Long-term reclaimed water application effects on phosphorus leaching potential in rapid infiltration basins.

    PubMed

    Moura, Daniel R; Silveira, Maria L; O'Connor, George A; Wise, William R

    2011-09-01

    Rapid infiltration basins (RIBs) are effective tools for wastewater treatment and groundwater recharge, but continuous application of wastewater can increase soil P concentrations and subsequently impact groundwater quality. The objectives of this study were to (1) investigate the effects of reclaimed water infiltration rate and "age" of RIBs on soil P concentrations at various depths, and (2) estimate the degree (percentage) of sorption equilibrium attained between effluent P and soil during reclaimed water application to different RIBs. The study was conducted in four contrasting cells of a RIB system with up to a 25 year history of secondary wastewater application. Soil samples were collected from 0 to 300 cm depth at 30 cm intervals and analyzed for water extractable phosphorus (WEP) and oxalate extractable P, Al, and Fe concentrations. Water extractable P and P saturation ratio (PSR) values were generally greater in the cells receiving reclaimed water compared to control soils, suggesting that reclaimed water P application can increase soil P concentrations and the risk of P movement to greater depths. Differences between treatment and control samples were more evident in cells with longer histories of reclaimed water application due to greater P loading. Data also indicated considerable spatial variability in WEP concentrations and PSR values, especially within cells from RIBs characterized by fast infiltration rates. This occurs because wastewater-P flows through surface soils much faster than the minimum time required for sorption equilibrium to occur. Studies should be conducted to investigate soil P saturation at deeper depths to assess possible groundwater contamination.

  5. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
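
    A minimal numerical sketch of the contrast drawn here, using the standard definitions (log accuracy ratio ln(pred/obs) as a bias measure; median symmetric accuracy 100(exp(median|ln(pred/obs)|) - 1)); the toy data are an assumption. It shows MAPE penalizing a factor-2 over-prediction twice as hard as a factor-2 under-prediction, while the median symmetric accuracy treats them identically.

      import numpy as np

      def mape(obs, pred):
          return 100 * np.mean(np.abs((pred - obs) / obs))

      def median_log_accuracy_ratio(obs, pred):
          return np.median(np.log(pred / obs))        # bias: 0 means unbiased

      def median_symmetric_accuracy(obs, pred):
          return 100 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1)

      obs = np.array([1.0, 2.0, 4.0, 8.0])
      over, under = obs * 2, obs / 2                  # factor-2 over/under-prediction
      print(mape(obs, over), mape(obs, under))        # 100.0 vs 50.0: asymmetric
      print(median_symmetric_accuracy(obs, over),
            median_symmetric_accuracy(obs, under))    # 100.0 vs 100.0: symmetric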

  6. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  7. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto one about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
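
    A hedged, small-scale sketch of the Tikhonov/L-curve machinery the study scales up with Lanczos bidiagonalization: for a toy ill-posed system, sweep the regularization parameter and record residual norm versus solution norm (the L-curve "corner" balances the two). The toy operator and noise level are assumptions; the GRACE problem is orders of magnitude larger.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 50
      U, _ = np.linalg.qr(rng.standard_normal((n, n)))
      V, _ = np.linalg.qr(rng.standard_normal((n, n)))
      s = 10.0 ** np.linspace(0, -8, n)          # rapidly decaying singular values
      A = U @ np.diag(s) @ V.T                   # ill-conditioned forward operator
      x_true = V[:, 0] + 0.5 * V[:, 1]
      b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy observations

      for lam in 10.0 ** np.arange(-10, 0, 2):
          # Tikhonov: minimize ||Ax - b||^2 + lam^2 ||x||^2
          x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
          print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.2e}  "
                f"solution norm={np.linalg.norm(x):.2e}")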

  8. Applications of remote sensing technology to U.S. Water resource management

    NASA Technical Reports Server (NTRS)

    Weber, J. D.

    1982-01-01

    Applications of Landsat data to assessing the available water supply in the U.S. for agricultural purposes as a program of the NASA Office of Space Science and Applications are described. Snow melt runoff predictions are performed with multitemporal Landsat imagery to measure the extent of mountain snow packs in the Rockies in order to produce stream flow forecast models. The data is used for decisions about which crops to plant, based on irrigation water which will be accessible in eleven western U.S. states. Imagery has tracked the growth of irrigated lands watered from the Ogallala aquifer in the central U.S., and is providing a data base for calculating the aquifer depletion rates. Studies are proceeding in temporally mapping the kinds of vegetation growing in the Suwannee Sound in Florida to monitor the intrusion of salt water, which would reduce the river's usefulness for irrigation. Finally, capabilities of the Thematic Mapper are reviewed.

  9. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation in weather forecasting. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
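
    A minimal sketch of one Kalman filter forecast/analysis cycle for a linear model, illustrating the exact error-covariance propagation that distinguishes the filter from optimal interpolation. The 2-variable system, observation operator, and noise levels are toy assumptions, not the shallow-water model itself.

      import numpy as np

      M = np.array([[1.0, 0.1], [0.0, 1.0]])   # linear forecast model
      Q = 0.01 * np.eye(2)                     # model error covariance
      H = np.array([[1.0, 0.0]])               # observe the first variable only
      R = np.array([[0.05]])                   # observation error covariance

      x = np.zeros(2)                          # analysis state
      P = np.eye(2)                            # analysis error covariance

      for obs in [1.2, 1.4, 1.1]:
          # Forecast step: propagate state and error covariance exactly.
          x = M @ x
          P = M @ P @ M.T + Q
          # Analysis step: compute the gain matrix and update state/covariance.
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          x = x + K @ (np.array([obs]) - H @ x)
          P = (np.eye(2) - K @ H) @ P
          print(x, np.diag(P))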

  10. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.

    PubMed

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-06-01

    Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error
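
    A small sketch of the RPN bookkeeping behind an FMEA: each failure mode is scored for occurrence (O), severity (S), and detectability (D), and ranked by RPN = O x S x D. The failure modes and 1-10 scores below are invented illustrations, not values from the study.

      failure_modes = [
          # (description, occurrence, severity, detectability) -- hypothetical
          ("wrong CT-MR image fusion",        3, 8, 4),
          ("contour on wrong structure",      2, 9, 3),
          ("beam range computed on stale CT", 2, 9, 5),
          ("plan exported to wrong patient",  1, 10, 2),
      ]

      ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
      for desc, o, s, d in ranked:
          print(f"RPN={o*s*d:3d}  O={o} S={s} D={d}  {desc}")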

  11. Application of the two-film model to the volatilization of acetone and t-butyl alcohol from water as a function of temperature

    USGS Publications Warehouse

    Rathbun, R.E.; Tai, D.Y.

    1988-01-01

    the gas-film resistance approaches the overall resistance in value, the calculated liquid-film coefficient becomes extremely sensitive to errors in the Henry's law constant. The negative coefficients were attributed to this sensitivity and to errors in the estimated Henry's law constants. Liquid-film coefficients for the absorption of oxygen were correlated with the stirrer Reynolds number and the Schmidt number. Application of this correlation with the experimental conditions and a molecular-diffusion coefficient adjustment resulted in values of the liquid-film coefficients for both acetone and t-butyl alcohol within the range expected for all three mixing conditions. Comparison of Henry's law constants calculated from these film coefficients and the experimental data with the constants calculated from literature data showed that the differences were small relative to the errors reported in the literature as typical for the measurement or estimation of Henry's law constants for hydrophilic compounds such as ketones and alcohols. Temperature dependence of the mass-transfer coefficients was expressed in two forms. The first, based on thermodynamics, assumed the coefficients varied as the exponential of the reciprocal absolute temperature. The second empirical approach assumed the coefficients varied as the exponential of the absolute temperature. Both of these forms predicted the temperature dependence of the experimental mass-transfer coefficients with little error for most of the water temperature range likely to be found in streams and rivers. Liquid-film and gas-film coefficients for acetone and t-butyl alcohol were similar in value. However, depending on water mixing conditions, overall mass-transfer coefficients for acetone were from two to four times larger than the coefficients for t-butyl alcohol. This difference in behavior of the coefficients resulted because the Henry's law constant for acetone was about three times larger than that of
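
    The two-film framework referenced throughout this record combines film resistances in series. The sketch below evaluates the standard overall liquid-phase coefficient, 1/K_L = 1/k_l + 1/(H k_g) with H the dimensionless Henry's law constant, and shows why for hydrophilic compounds (small H) the gas-film term dominates and K_L becomes extremely sensitive to errors in H. The numerical values are illustrative assumptions only.

      def overall_KL(k_l, k_g, H):
          """1/K_L = 1/k_l + 1/(H * k_g); H = dimensionless Henry's law constant."""
          return 1.0 / (1.0 / k_l + 1.0 / (H * k_g))

      k_l, k_g = 2e-5, 5e-3            # m/s, liquid- and gas-film coefficients
      for H in (1e-3, 1e-1, 10.0):     # hydrophilic -> volatile
          K = overall_KL(k_l, k_g, H)
          share = 100 * K / (H * k_g)  # gas-film fraction of total resistance
          print(f"H={H:g}: K_L={K:.2e} m/s, gas film = {share:.0f}% of resistance")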

  12. Biological Treatment of Drinking Water: Applications, Advantages and Disadvantages

    EPA Science Inventory

    The fundamentals of biological treatment are presented to an audience of state drinking water regulators. The presentation covers definitions, applications, the basics of bacterial metabolism, a discussion of treatment options, and the impact that implementation of these options...

  13. Thermal error analysis and compensation for digital image/volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.

  14. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
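
    A hedged sketch of the exact (Clopper-Pearson) confidence interval for a codeword error rate, plus the classic zero-error question: how many error-free codewords must be simulated to certify a CWER requirement? This standard binomial machinery is an illustration; the report's new BER method based on the moments of bit errors per codeword is not reproduced here.

      import math
      from scipy import stats

      def cwer_interval(errors, trials, conf=0.95):
          # Exact two-sided Clopper-Pearson interval via the beta distribution.
          a = (1 - conf) / 2
          lo = stats.beta.ppf(a, errors, trials - errors + 1) if errors > 0 else 0.0
          hi = stats.beta.ppf(1 - a, errors + 1, trials - errors) if errors < trials else 1.0
          return lo, hi

      print(cwer_interval(errors=3, trials=10**7))

      # Error-free trials needed so the one-sided upper bound sits below p_req:
      conf, p_req = 0.95, 1e-6
      n = math.ceil(math.log(1 - conf) / math.log(1 - p_req))
      print(n)   # about 3.0 million codewords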

  15. Preliminary characterization of a water vaporizer for resistojet applications

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl

    1992-01-01

    A series of tests was conducted to explore the characteristics of a water vaporizer intended for application to resistojet propulsion systems. The objectives of these tests were to (1) observe the effect of orientation with respect to gravity on vaporizer stability, (2) characterize vaporizer efficiency and outlet conditions over a range of flow rates, and (3) measure the thrust performance of a vaporizer/resistojet thruster assembly. A laboratory model of a forced-flow, once-through water vaporizer employing a porous heat exchange medium was built and characterized over a range of flow rates and power levels of interest for application to water resistojets. In a test during which the vaporizer was rotated about a horizontal axis normal to its own axis, the outlet temperature and mass flow rate through the vaporizer remained steady. Throttleability to 30 percent of the maximum flow rate tested was demonstrated. The measured thermal efficiency of the vaporizer was near 0.9 for all tests. The water vaporizer was integrated with an engineering model multipropellant resistojet. Performance of the vaporizer/thruster assembly was measured over a narrow range of operating conditions. The maximum specific impulse measured was 234 s at a mass flow rate and specific power level (vaporizer and thruster combined) of 154 x 10(-6) kg/s and 6.8 MJ/kg, respectively.

  16. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
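
    A minimal sketch of the frequency-threshold step that the paper improves upon: count k-mers across reads and flag those below a trust threshold as likely errors. The reads, k, and threshold are toy assumptions; the paper's contribution is inferring genomic k-mer frequencies and a principled threshold in the presence of repeats, which this sketch does not attempt.

      from collections import Counter

      def kmer_counts(reads, k):
          counts = Counter()
          for read in reads:
              for i in range(len(read) - k + 1):
                  counts[read[i:i + k]] += 1
          return counts

      reads = ["ACGTACGTGGA", "ACGTACGTGGA", "ACGTACCTGGA"]  # third has one error
      counts = kmer_counts(reads, k=5)
      threshold = 2
      suspect = {km for km, c in counts.items() if c < threshold}
      print(sorted(suspect))   # the k-mers covering the miscalled base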

  17. Water Calibration Measurements for Neutron Radiography: Application to Water Content Quantification in Porous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Misun; Bilheux, Hassina Z; Voisin, Sophie

    2013-04-01

    Using neutron radiography, the measurement of water thickness was performed using aluminum (Al) water calibration cells at the High Flux Isotope Reactor (HFIR) Cold-Guide (CG) 1D neutron imaging facility at Oak Ridge National Laboratory, Oak Ridge, TN, USA. Calibration of water thickness is an important step to accurately measure water contents in samples of interest. Neutron attenuation by water does not vary linearly with thickness mainly due to beam hardening and scattering effects. Transmission measurements for known water thicknesses in water calibration cells allow proper correction of the underestimation of water content due to these effects. As anticipated, strong scattering effects were observed for water thicknesses greater than 2 mm when the water calibration cells were positioned close to the face of the detector / scintillator (0 and 2.4 cm away, respectively). The water calibration cells were also positioned 24 cm away from the detector face. These measurements resulted in less scattering and this position (designated as the sample position) was used for the subsequent experimental determination of the neutron attenuation coefficient for water. Neutron radiographic images of moist Flint sand in rectangular and cylindrical containers acquired at the sample position were used to demonstrate the applicability of the water calibration. Cumulative changes in the water volumes within the sand columns during monotonic drainage determined by neutron radiography were compared with those recorded by direct reading from a burette connected to a hanging water column. In general, the neutron radiography data showed very good agreement with those obtained volumetrically using the hanging water-column method. These results allow extension of the calibration equation to the quantification of unknown water contents within other samples of porous media.

  18. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows deriving realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
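
    A hedged sketch of the zone-wise fitting idea: within one glucose zone, model measurement error with a skew-normal PDF fitted by maximum likelihood and compare it to a Gaussian fit by log-likelihood. The synthetic "errors" are an assumption standing in for the OTU2/BCN validation data, and the outlier (exponential) component is omitted.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      errors = stats.skewnorm.rvs(a=4.0, loc=-2.0, scale=8.0, size=2000,
                                  random_state=rng)   # mg/dL, zone-1 absolute error

      a, loc, scale = stats.skewnorm.fit(errors)      # MLE skew-normal fit
      mu, sigma = stats.norm.fit(errors)              # Gaussian for comparison

      # Higher log-likelihood -> better fit; the skewed model should win here.
      print("skew-normal:", stats.skewnorm.logpdf(errors, a, loc, scale).sum())
      print("gaussian:   ", stats.norm.logpdf(errors, mu, sigma).sum())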

  19. Magnesium-Based Micromotors: Water-Powered Propulsion, Multifunctionality, and Biomedical and Environmental Applications.

    PubMed

    Chen, Chuanrui; Karshalev, Emil; Guan, Jianguo; Wang, Joseph

    2018-06-01

    The new capabilities and functionalities of synthetic micro/nanomotors open up considerable opportunities for diverse environmental and biomedical applications. Water-powered micromachines are particularly attractive for realizing many of these applications. Magnesium-based motors directly use water as fuel to generate hydrogen bubbles for their propulsion, eliminating the requirement of common toxic fuels. This Review highlights the development of new Mg-based micromotors and discusses the chemistry that makes it extremely attractive for micromotor applications. Understanding these Mg properties and its transient nature is essential for controlling the propulsion efficiency, lifetime, and overall performance. The unique and attractive behavior of Mg offers significant advantages, including efficient water-powered movement, remarkable biocompatibility, controlled degradation, convenient functionalization, and built-in acid neutralization ability, and has paved the way for multifunctional micromachines for diverse real-life applications, including operation in living animals. A wide range of such Mg motor-based applications, including the detection and destruction of environmental threats, effective in-vivo cargo delivery, and autonomous release, have been demonstrated. In conclusion, the current challenges, future opportunities, and performance improvements of the Mg-based micromotors are discussed. With continuous innovation and attention to key challenges, it is expected that Mg-based motors will have a profound impact on diverse biomedical and environmental applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. SARAL/Altika for inland water: current and potential applications

    NASA Astrophysics Data System (ADS)

    Tarpanelli, Angelica; Brocca, Luca; Barbetta, Silvia; Moramarco, Tommaso; Santos da Silva, Joécila; Calmant, Stephane

    2015-04-01

    Although representing less than 1% of the total amount of water on Earth, freshwater is essential for terrestrial life and human needs. Over one third of the world's population is not served by adequate supplies of clean water and for this reason freshwater wars are becoming one of the most pressing environmental issues exacerbating the already difficult tensions between the riparian nations. Notwithstanding the foregoing, we have surprisingly poor knowledge of the spatial and temporal dynamics of surface discharge. In-situ gauging networks quantify the instantaneous water volume in the main river channels but provide little information about the spatial dynamics of surface water extent, such as floodplain flows and the dynamics of wetlands. The growing reduction of hydrometric monitoring networks over the world, along with the inaccessibility of many remote areas and the difficulties for data sharing among developing countries, heighten the need to develop new procedures for river discharge estimation based on remote sensing technology. The major challenge in this case is the possibility of using Earth Observation data without ground measurements. Radar altimeters are a valuable tool to retrieve hydrological information from space such as water level of inland water. More than a decade of research on the application of radar altimetry has demonstrated its advantages also for monitoring continental water, providing global coverage and regular temporal sampling. The high accuracy of altimetry data provided by the latest spatial missions and the convincing results obtained in the previous applications suggest that these data may be employed for hydraulic/hydrological applications as well. If used in synergy with the modeling, the potential benefits of the altimetry measurements can grow significantly. The new SARAL French-Indian mission, providing improvements in terms of vertical accuracy and spatial resolution of the onboard altimeter Altika, can offer a great

  1. 40 CFR 410.30 - Applicability; description of the low water use processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... water use processing subcategory. 410.30 Section 410.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing Subcategory § 410.30 Applicability; description of the low water use processing...

  2. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (3) A State eligibility error does not result from the State's verification of an applicant's self-declaration or self-certification of eligibility for, and the correct amount of, medical assistance or child... verification. (2) The difference in payment between what the State paid (as adjusted within improper payment...

  3. Geometric Error Analysis in Applied Calculus Problem Solving

    ERIC Educational Resources Information Center

    Usman, Ahmed Ibrahim

    2017-01-01

    The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…

  4. Bacterial contamination of tile drainage water and shallow groundwater under different application methods of liquid swine manure.

    PubMed

    Samarajeewa, A D; Glasauer, S M; Lauzon, J D; O'Halloran, I P; Parkin, Gary W; Dunfield, K E

    2012-05-01

    A 2 year field experiment evaluated liquid manure application methods on the movement of manure-borne pathogens (Salmonella sp.) and indicator bacteria (Escherichia coli and Clostridium perfringens) to subsurface water. A combination of application methods including surface application, pre-application tillage, and post-application incorporation were applied in a randomized complete block design on an instrumented field site in spring 2007 and 2008. Tile and shallow groundwater were sampled immediately after manure application and after rainfall events. Bacterial enumeration from water samples showed that the surface-applied manure resulted in the highest concentration of E. coli in tile drainage water. Pre-tillage significantly (p < 0.05) reduced the movement of manure-based E. coli and C. perfringens to tile water and to shallow groundwater within 3 days after manure application (DAM) in 2008 and within 10 DAM in 2007. Pre-tillage also decreased the occurrence of Salmonella sp. in tile water samples. Indicator bacteria and pathogens reached nondetectable levels within 50 DAM. The results suggest that tillage before application of liquid swine manure can minimize the movement of bacteria to tile and groundwater, but is effective only for the drainage events immediately after manure application or initial rainfall-associated drainage flows. Furthermore, the study highlights the strong association between bacterial concentrations in subsurface waters and rainfall timing and volume after manure application.

  5. Influence of material surface on the scanning error of a powder-free 3D measuring system.

    PubMed

    Kurz, Michael; Attin, Thomas; Mehl, Albert

    2015-11-01

    This study aims to evaluate the accuracy of a powder-free three-dimensional (3D) measuring system (CEREC Omnicam, Sirona), when scanning the surface of a material at different angles. Additionally, the influence of water was investigated. Nine different materials were combined with human tooth surface (enamel) to create n = 27 specimens. These materials were: Controls (InCoris TZI and Cerec Guide Bloc), ceramics (Vitablocs® Mark II and IPS Empress CAD), metals (gold and amalgam) and composites (Tetric Ceram, Filtek Supreme A2B and A2E). The highly polished samples were scanned at different angles with and without water. The 216 scans were then analyzed and descriptive statistics were obtained. The height difference between the tooth and material surfaces, as measured with the 3D scans, ranged from 0.83 μm (±2.58 μm) to -14.79 μm (±3.45 μm), while the scan noise on the materials was between 3.23 μm (±0.79 μm) and 14.24 μm (±6.79 μm) without considering the control groups. Depending on the thickness of the water film, measurement errors in the order of 300-1,600 μm could be observed. The inaccuracies between the tooth and material surfaces, as well as the scan noise for the materials, were within the range of error for measurements used for conventional impressions and are therefore negligible. The presence of water, however, greatly affects the scan. The tested powder-free 3D measuring system can safely be used to scan different material surfaces without the prior application of a powder, although drying of the surface prior to scanning is highly advisable.

  6. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  7. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
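
    A toy illustration (not the Sobol' analysis itself) of the headline result above: a degree-day snow model driven by biased versus randomly perturbed temperature forcing. The model and numbers are invented placeholders; they only show why a persistent bias integrates into a larger SWE error than zero-mean noise of comparable magnitude.

      import numpy as np

      rng = np.random.default_rng(0)
      days = 180
      temp = 5 * np.sin(np.linspace(0, np.pi, days)) - 2      # synthetic winter degC
      precip = rng.gamma(0.3, 5.0, days)                      # mm/day

      def swe_series(t, p, ddf=3.0):
          swe, out = 0.0, np.empty(len(t))
          for i in range(len(t)):
              if t[i] <= 0.0:
                  swe += p[i]                                 # snowfall accumulates
              swe = max(0.0, swe - ddf * max(t[i], 0.0))      # degree-day melt
              out[i] = swe
          return out

      base = swe_series(temp, precip)
      biased = swe_series(temp + 1.0, precip)                       # +1 degC bias
      noisy = swe_series(temp + rng.normal(0, 1.0, days), precip)   # sd 1 degC noise

      print("SWE RMSE due to bias: ", np.sqrt(np.mean((biased - base) ** 2)))
      print("SWE RMSE due to noise:", np.sqrt(np.mean((noisy - base) ** 2)))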

  8. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). TG-43U1 protocol does not include recommendation for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber and one-dose MOSFET detectors were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainty analyses associated with each experimental technique were analyzed quantitatively using MCNPX 2.6 to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm

  9. Error Tracking System

    EPA Pesticide Factsheets

    Error Tracking System is a database used to store & track error notifications sent by users of EPA's web site. ETS is managed by OIC/OEI. OECA's ECHO & OEI Envirofacts use it. Error notifications from EPA's home Page under Contact Us also uses it.

  10. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
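
    A hedged numerical sketch of the estimator analyzed above: true DCs are drawn from a Gauss-Markov (exponential) spatial prior, noisy measurements are filtered with a modeled covariance, and the estimation error is compared to the raw measurement noise. All parameter values are illustrative; re-running with d_model far from d_true reproduces the degradation the paper studies.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 30
      pos = np.sort(rng.uniform(0, 200, n))                # station positions, km
      d_true, d_model = 40.0, 40.0                         # correlation distances
      sigma2, noise2 = 1.0, 0.25

      C_true = sigma2 * np.exp(-np.abs(pos[:, None] - pos[None, :]) / d_true)
      # Sample true DCs from the Gauss-Markov prior, then add measurement noise.
      dc = np.linalg.cholesky(C_true + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
      y = dc + np.sqrt(noise2) * rng.standard_normal(n)

      # LMMSE filter built from the *modeled* covariance (here correctly matched).
      C_model = sigma2 * np.exp(-np.abs(pos[:, None] - pos[None, :]) / d_model)
      dc_hat = C_model @ np.linalg.solve(C_model + noise2 * np.eye(n), y)

      print("raw noise RMS:     ", np.sqrt(np.mean((y - dc) ** 2)))
      print("LMMSE estimate RMS:", np.sqrt(np.mean((dc_hat - dc) ** 2)))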

  11. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  12. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  13. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  14. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  15. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  16. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
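
    A hedged toy sketch of regression calibration, the first of the correction methods listed above: in a validation subset where both the modeled exposure and a gold-standard measurement exist, regress truth on the model, then replace the error-prone exposure by its calibrated value in the health model. The synthetic data and effect sizes are assumptions; the point is that the naive slope is attenuated while the calibrated slope recovers the truth.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      true_exp = rng.normal(10, 2, n)                  # true pollutant exposure
      modeled = true_exp + rng.normal(0, 1.5, n)       # model-derived, with error
      outcome = 0.2 * true_exp + rng.normal(0, 1, n)   # health outcome, slope 0.2

      # Validation subset (10%) with gold-standard monitors.
      val = rng.choice(n, n // 10, replace=False)
      slope, intercept = np.polyfit(modeled[val], true_exp[val], 1)
      calibrated = intercept + slope * modeled

      naive = np.polyfit(modeled, outcome, 1)[0]
      corrected = np.polyfit(calibrated, outcome, 1)[0]
      print(f"naive slope {naive:.3f} (attenuated), calibrated slope {corrected:.3f}")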

  17. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in the application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
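
    The patch-level thresholding the authors advocate can be made concrete with a short sketch (all numbers hypothetical): food below the foraging threshold is excluded patch by patch, rather than subtracting the threshold from a landscape-wide average.

      # Energy available above the foraging threshold, applied per patch.
      def usable_energy(patches, threshold, energy_per_kg):
          # patches: iterable of (food_density_kg_per_ha, area_ha)
          return sum(max(0.0, density - threshold) * area * energy_per_kg
                     for density, area in patches)

      patches = [(120.0, 10.0), (35.0, 25.0), (60.0, 5.0)]  # hypothetical plots
      print(usable_energy(patches, threshold=50.0, energy_per_kg=3340.0))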

  18. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  19. Assessing the urban water balance: the Urban Water Flow Model and its application in Cyprus.

    PubMed

    Charalambous, Katerina; Bruggeman, Adriana; Lange, Manfred A

    2012-01-01

    Modelling the urban water balance enables the understanding of the interactions of water within an urban area and allows for better management of water resources. However, few models today provide a comprehensive overview of all water sources and uses. The objective of the current paper was to develop a user-friendly tool that quantifies and visualizes all water flows, losses and inefficiencies in urban environments. The Urban Water Flow Model was implemented in a spreadsheet and includes a water-savings application that computes the contributions of user-selected saving options to the overall water balance. The model was applied to the coastal town of Limassol, Cyprus, for the hydrologic years 2003/04-2008/09. Data were collected from the different authorities and hydrologic equations and estimations were added to complete the balance. Average precipitation was 363 mm/yr, amounting to 25.4 × 10^6 m^3/yr, more than double the annual potable water supply to the town. Surface runoff constituted 29.6% of all outflows, while evapotranspiration from impervious areas was 21.6%. Possible potable water savings for 2008/09 were estimated at 5.3 × 10^3 m^3, which is 50% of the total potable water provided to the area. This saving would also result in a 6% reduction of surface runoff.
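
    The bookkeeping behind such a balance is simple enough to sketch in a few lines; the structure below mirrors a spreadsheet-style urban water balance (the figures are placeholders, not the Limassol values):

      # Annual urban water balance; all terms in 10^6 m^3/yr (placeholders).
      inflows = {"precipitation": 25.4, "potable_supply": 11.0, "groundwater_in": 2.0}
      outflows = {"surface_runoff": 11.4, "evapotranspiration": 8.3,
                  "sewage_export": 9.0, "groundwater_out": 6.5}

      total_out = sum(outflows.values())
      for name, flow in outflows.items():
          print("%s: %.1f%% of outflows" % (name, 100 * flow / total_out))
      closure = sum(inflows.values()) - total_out
      print("storage change (closure term): %+.1f x 10^6 m^3/yr" % closure)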

  20. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. The algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and it requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, temporarily replaces the code at those addresses with small test code sequences that inject latent severe errors, and then determines whether, as desired, the SUT detects the errors and recovers.
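
    A minimal sketch of the first algorithm's masking step, with a stand-in device read and a hypothetical mask (the real instrumentation intercepts hardware reads inside the SUT):

      DEVICE_ERROR_MASK = 0xFF00   # hypothetical device-specific mask
      INJECT_ERRORS = True         # test-harness switch

      def read_register(addr):
          return 0x1234            # stand-in for an actual hardware read

      def bit_read(addr):
          data = read_register(addr)
          if INJECT_ERRORS:
              data &= DEVICE_ERROR_MASK   # yield a value BIT does not expect
          return data

      assert bit_read(0x40) == 0x1200     # BIT's error path is now exercised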

  1. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.

  2. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
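
    One simple signal that distinguishes a systematic error from a true heterozygous site is orientation bias among the mismatching reads. The sketch below applies an exact two-sided binomial test to the forward/reverse split of mismatches at a site; this is an illustrative stand-in, not the SysCall classifier itself.

      import math

      def binom_two_sided_p(k, n, p=0.5):
          pmf = lambda i: math.comb(n, i) * p**i * (1 - p)**(n - i)
          p_k = pmf(k)
          return sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_k + 1e-12)

      # 18 of 20 mismatching reads on the forward strand: such strand bias
      # points to a systematic error rather than a heterozygous site.
      print(binom_two_sided_p(18, 20))   # ~4e-4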

  3. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or requires the conscious application of domain knowledge for the planning of actions. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day.

  4. ERTS-1 applications in hydrology and water resources

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Rango, A.

    1973-01-01

    After having been in orbit for less than one year, the Earth Resources Technology Satellite (ERTS-1) has shown that it provides very applicable data for more effective monitoring and management of surface water features over the globe. Mapping flooded areas, snowcover, and wetlands and surveying the size, type, and response of glaciers to climate are among the specific areas where ERTS-1 data were applied. In addition the data collection system has proven to be a reliable tool for gathering hydrologic data from remote regions. Turbidity variations in lakes and rivers were also observed and related to shoreline erosion, industrial plant effluent, and overall water quality.

  5. NASA's Contribution to Water Research, Applications and Capacity Building in the America's

    NASA Astrophysics Data System (ADS)

    Toll, D. L.; Searby, N. D.; Doorn, B.; Lawford, R. G.; Entin, J. K.; Mohr, K. I.; Lee, C.; NASA International Water Team

    2013-05-01

    NASA's water research, applications and capacity building activities use satellites and models to contribute to regional water information and solutions for the Americas. Free and open exchange of Earth data observations and products helps engage and improve integrated observation networks and enables national and multi-national regional water cycle research and applications. NASA satellite and modeling products provide a huge volume of valuable data extending back over 50 years across a broad range of spatial (local to global) and temporal (hourly to decadal) scales, and include many products that are available in near real time (see earthdata.nasa.gov). In addition, NASA's work in hydrologic prediction is valuable for: 1) short-term and hourly data that are critical for flood and landslide warnings; 2) mid-term predictions of days to weeks useful for reservoir planning and water allocation; and 3) long-term seasonal to decadal forecasts helpful for agricultural and irrigation planning, land-use planning, and water infrastructure development and planning. To further accomplish these objectives, NASA works to actively partner with public and private groups (e.g., federal agencies, universities, NGOs, and industry) in the U.S. and internationally to ensure the broadest use of its satellites and related information and products, and to collaborate with regional end users who know the regions and their needs best. Through these data, policy and partnering activities, NASA addresses numerous water issues, including water scarcity, the extreme events of drought and floods, and water quality, so critical to the Americas. This presentation will outline and describe NASA's water-related research, applications and capacity building programs' efforts to address the Americas' critical water challenges. This will specifically include water activities in NASA's programs in Terrestrial Hydrology (e.g., land-atmosphere feedbacks and improved stream flow estimation), Water Resources

  6. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  7. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
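
    The canonical wait-for cycle that graph-based detectors such as MUST report is easy to reproduce. The mpi4py sketch below deliberately deadlocks: both ranks post a blocking synchronous send before the matching receive (run under two processes; it hangs by design).

      # mpirun -n 2 python deadlock.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      peer = 1 - comm.Get_rank()

      comm.ssend([0] * 1000, dest=peer)   # blocks until the peer posts a recv
      data = comm.recv(source=peer)       # never reached: send-send cycle

      # Fix: have one rank receive first, or use comm.sendrecv so the
      # library pairs the operations without a cycle.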

  8. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system running on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  9. Error correction in short time steps during the application of quantum gates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.

    2016-04-15

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates, without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  10. The estimation of soil water fluxes using lysimeter data

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    2009-04-01

    The validation of soil water balance models against measured soil water fluxes in the field remains a problem, since it requires time series of measured model outputs. In our study, a soil water balance model was validated using lysimeter time series of measured model outputs. The soil water balance model used in our study was the Hydrus-1D model. This model was tested by comparing simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of error, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.
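
    The daily flux bookkeeping underlying such a comparison is a one-line balance; a minimal sketch (all terms in mm/day, values hypothetical):

      # Actual ET from a weighable lysimeter:
      #   ET = P - dS - D + CR   (P precipitation, dS storage change,
      #                           D percolation/recharge, CR capillary rise)
      def lysimeter_et(precip, delta_storage, percolation, capillary_rise=0.0):
          return precip - delta_storage - percolation + capillary_rise

      print(lysimeter_et(2.4, 0.6, 0.8))   # -> 1.0 mm/day of actual ET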

  11. Quantitation Error in 1H MRS Caused by B1 Inhomogeneity and Chemical Shift Displacement.

    PubMed

    Watanabe, Hidehiro; Takaya, Nobuhiro

    2017-11-08

    The quantitation accuracy in proton magnetic resonance spectroscopy (1H MRS) improves at higher B0 field. However, a larger chemical shift displacement (CSD) and stronger B1 inhomogeneity exist. In this work, we evaluate the quantitation accuracy for the spectra of metabolite mixtures in phantom experiments at 4.7 T. We demonstrate a position-dependent error in quantitation and propose a correction method based on measuring water signals. All experiments were conducted on a whole-body 4.7 T magnetic resonance (MR) system with a quadrature volume coil for transmission and reception. We arranged three bottles filled with metabolite solutions of N-acetyl aspartate (NAA) and creatine (Cr) in a vertical row inside a cylindrical phantom filled with water. Peak areas of three singlets of NAA and Cr were measured on three 1H spectra at three volumes of interest (VOIs) inside the three bottles. We also measured a series of water spectra with a shifted carrier frequency and measured a reception sensitivity map. The ratios of the NAA and Cr (3.92 ppm) peak areas to that of Cr (3.01 ppm) differed among the three VOIs, which leads to a position-dependent error. At every VOI, the slope relating peak area to the shifted carrier frequency resembled the slope relating reception sensitivity to displacement. CSD and inhomogeneity of reception sensitivity cause amplitude modulation along the direction of chemical shift on the spectra, resulting in a quantitation error. This error may be more significant at higher B0 field, where CSD and B1 inhomogeneity are more severe. This error may also occur in reception using a surface coil having inhomogeneous B1. Since this type of error is around a few percent, the data should be analyzed with greater attention when discussing small differences in 1H MRS studies.

  12. StreamStats in North Carolina: a water-resources Web application

    USGS Publications Warehouse

    Weaver, J. Curtis; Terziotti, Silvia; Kolb, Katharine R.; Wagner, Chad R.

    2012-01-01

    A statewide StreamStats application for North Carolina was developed in cooperation with the North Carolina Department of Transportation following completion of a pilot application for the upper French Broad River basin in western North Carolina (Wagner and others, 2009). StreamStats for North Carolina, available at http://water.usgs.gov/osw/streamstats/north_carolina.html, is a Web-based Geographic Information System (GIS) application developed by the U.S. Geological Survey (USGS) in consultation with Environmental Systems Research Institute, Inc. (Esri) to provide access to an assortment of analytical tools that are useful for water-resources planning and management (Ries and others, 2008). The StreamStats application provides an accurate and consistent process that allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection sites and user-selected ungaged sites. In the North Carolina application, users can compute 47 basin characteristics and peak-flow frequency statistics (Weaver and others, 2009; Robbins and Pope, 1996) for a delineated drainage basin. Selected streamflow statistics and basin characteristics for data-collection sites have been compiled from published reports and also are immediately accessible by querying individual sites from the web interface. Examples of basin characteristics that can be computed in StreamStats include drainage area, stream slope, mean annual precipitation, and percentage of forested area (Ries and others, 2008). Examples of streamflow statistics that were previously available only through published documents include peak-flow frequency, flow-duration, and precipitation data. These data are valuable for making decisions related to bridge design, floodplain delineation, water-supply permitting, and sustainable stream quality and ecology. The StreamStats application also allows users to identify stream reaches upstream and downstream from user-selected sites

  13. 78 FR 78355 - San Diego County Water Authority; Notice of Preliminary Permit Application Accepted For Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-26

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 12747-002] San Diego County Water Authority; Notice of Preliminary Permit Application Accepted For Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On July 1, 2013, the San Diego County Water Authority filed an application for a preliminary...

  14. Error analyses of JEM/SMILES standard products on L2 operational system

    NASA Astrophysics Data System (ADS)

    Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.

    2009-12-01

    SMILES (Superconducting Submillimeter-Wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned to be launched in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species on the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled Superconductor-Insulator-Superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature. The uncertainty of water vapor absorption influences the accuracy of the retrieved molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th and 1st coefficients of a 'baseline' function, rather than the water vapor profile itself, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (-/+5, 10% difference between the true profile and the a priori profile in the stratosphere and -/+10, 20% in the troposphere). The results show that the incremental errors of the molecules are smaller than 10% of the measurement errors when the height correlations of the baseline coefficients and temperature are assumed to be 10 km. In conclusion, the retrieval of the baseline coefficients effectively suppresses profile errors due to bias in the water vapor profile.
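
    The effect of retrieving baseline coefficients alongside the molecular state can be illustrated with a toy linear retrieval: two extra columns, a constant and a frequency-linear term, absorb the offset that an H2O bias would otherwise push into the abundance estimate. All values below are synthetic placeholders, not SMILES data.

      import numpy as np

      rng = np.random.default_rng(1)
      f = np.linspace(-0.5, 0.5, 200)             # relative frequency grid
      line = np.exp(-0.5 * (f / 0.05) ** 2)       # single Gaussian line shape

      x_true, b0, b1 = 3.0, 0.4, 0.8              # abundance, baseline coeffs
      y = line * x_true + b0 + b1 * f + rng.normal(0, 0.02, f.size)

      A = np.column_stack([line, np.ones_like(f), f])   # [line, 1, f]
      est, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(est)   # recovers ~[3.0, 0.4, 0.8]; abundance unbiased by baseline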

  15. Application of nonlinear-regression methods to a ground-water flow model of the Albuquerque Basin, New Mexico

    USGS Publications Warehouse

    Tiedeman, C.R.; Kernodle, J.M.; McAda, D.P.

    1998-01-01

    This report documents the application of nonlinear-regression methods to a numerical model of ground-water flow in the Albuquerque Basin, New Mexico. In the Albuquerque Basin, ground water is the primary source for most water uses. Ground-water withdrawal has steadily increased since the 1940's, resulting in large declines in water levels in the Albuquerque area. A ground-water flow model was developed in 1994 and revised and updated in 1995 for the purpose of managing basin ground- water resources. In the work presented here, nonlinear-regression methods were applied to a modified version of the previous flow model. Goals of this work were to use regression methods to calibrate the model with each of six different configurations of the basin subsurface and to assess and compare optimal parameter estimates, model fit, and model error among the resulting calibrations. The Albuquerque Basin is one in a series of north trending structural basins within the Rio Grande Rift, a region of Cenozoic crustal extension. Mountains, uplifts, and fault zones bound the basin, and rock units within the basin include pre-Santa Fe Group deposits, Tertiary Santa Fe Group basin fill, and post-Santa Fe Group volcanics and sediments. The Santa Fe Group is greater than 14,000 feet (ft) thick in the central part of the basin. During deposition of the Santa Fe Group, crustal extension resulted in development of north trending normal faults with vertical displacements of as much as 30,000 ft. Ground-water flow in the Albuquerque Basin occurs primarily in the Santa Fe Group and post-Santa Fe Group deposits. Water flows between the ground-water system and surface-water bodies in the inner valley of the basin, where the Rio Grande, a network of interconnected canals and drains, and Cochiti Reservoir are located. Recharge to the ground-water flow system occurs as infiltration of precipitation along mountain fronts and infiltration of stream water along tributaries to the Rio Grande; subsurface

  16. Understanding virtual water flows: A multiregion input-output case study of Victoria

    NASA Astrophysics Data System (ADS)

    Lenzen, Manfred

    2009-09-01

    This article explains and interprets virtual water flows from the well-established perspective of input-output analysis. Using a case study of the Australian state of Victoria, it demonstrates that input-output analysis can enumerate virtual water flows without systematic and unknown truncation errors, an issue which has been largely absent from the virtual water literature. Whereas a simplified flow analysis from a producer perspective would portray Victoria as a net virtual water importer, enumerating the water embodiments across the full supply chain using input-output analysis shows Victoria as a significant net virtual water exporter. This study has succeeded in informing government policy in Australia, which is an encouraging sign that input-output analysis will be able to contribute much value to other national and international applications.
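
    The algebra at the heart of this enumeration is the standard Leontief-inverse multiplier: total (direct plus indirect) water per unit of final demand is m = w (I - A)^{-1}. A toy two-sector sketch (figures illustrative, not Victorian data):

      import numpy as np

      A = np.array([[0.10, 0.02],    # technical coefficients
                    [0.15, 0.20]])   # (agriculture, manufacturing)
      w = np.array([2.50, 0.10])     # direct water use, ML per $M of output

      multipliers = w @ np.linalg.inv(np.eye(2) - A)  # full supply-chain water
      exports = np.array([120.0, 300.0])              # final demand, $M
      print(multipliers, multipliers @ exports)       # virtual water exported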

  17. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  18. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting

    PubMed Central

    2017-01-01

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, the high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, the positioning error can be very large in a few cases, which might prevent its use in applications with high-accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and inertial sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
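
    A deterministic fingerprinting estimate, and the kind of distance-based cue that flags likely large errors, can both be shown in a few lines. The radio map, positions, and the 20 dB threshold below are illustrative assumptions.

      import numpy as np

      radio_map = np.array([[-40., -62., -75.],   # RSSI fingerprints (dBm)
                            [-55., -48., -70.],
                            [-70., -60., -45.]])
      positions = np.array([[0., 0.], [10., 0.], [10., 12.]])  # metres

      def locate(rss, k=2):
          d = np.linalg.norm(radio_map - rss, axis=1)   # RSSI-space distance
          idx = np.argsort(d)[:k]
          large_error_risk = d[idx].min() > 20.0        # crude error flag
          return positions[idx].mean(axis=0), large_error_risk

      print(locate(np.array([-52., -50., -68.])))   # -> (5.0, 0.0), False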

  19. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

    The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit errors due to gravity field model errors are predominantly long-wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross-track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit errors at points where ascending and descending ground traces cross are somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  20. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  1. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  2. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  3. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and who underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were determined three times in the same image. Reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  4. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  5. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937

  6. 40 CFR 429.70 - Applicability; description of the wood preserving-water borne or nonpressure subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Applicability; description of the wood... PROCESSING POINT SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.70 Applicability; description of the wood preserving-water borne or nonpressure subcategory. This subpart applies...

  7. 40 CFR 429.70 - Applicability; description of the wood preserving-water borne or nonpressure subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Applicability; description of the wood... PROCESSING POINT SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.70 Applicability; description of the wood preserving-water borne or nonpressure subcategory. This subpart applies...

  8. 40 CFR 429.70 - Applicability; description of the wood preserving-water borne or nonpressure subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Applicability; description of the wood... PROCESSING POINT SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.70 Applicability; description of the wood preserving-water borne or nonpressure subcategory. This subpart applies...

  9. 40 CFR 429.70 - Applicability; description of the wood preserving-water borne or nonpressure subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Applicability; description of the wood... SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.70 Applicability; description of the wood preserving-water borne or nonpressure subcategory. This subpart applies to discharges...

  10. 40 CFR 429.70 - Applicability; description of the wood preserving-water borne or nonpressure subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Applicability; description of the wood... SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.70 Applicability; description of the wood preserving-water borne or nonpressure subcategory. This subpart applies to discharges...

  11. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized here. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters beyond the gravity model.

  12. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling, and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features, such as the variation of the deformation vector field, may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU), and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
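
    The training stage of such a predictor reduces to regressing local-search features onto known registration errors. A compact sketch with synthetic stand-in features (shift magnitude and a stereo-style confidence; the data and feature model are assumptions, not the paper's pipeline):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 5000
      shift = rng.uniform(0, 10, n)         # local-search displacement (mm)
      confidence = rng.uniform(0, 1, n)     # matching confidence in [0, 1]
      error_mm = shift * (1.2 - confidence) + rng.normal(0, 0.3, n)

      X = np.column_stack([shift, confidence])
      rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, error_mm)
      print(rf.predict([[6.0, 0.2]]))       # dense per-voxel error prediction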

  13. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  14. Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses

    NASA Astrophysics Data System (ADS)

    Murphy, Christian E.

    2018-05-01

    Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively perceive the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
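
    The geometry of an error ellipse follows directly from the eigendecomposition of a 2-by-2 covariance matrix; the redesigns discussed above vary the symbolization, not this construction. A minimal sketch:

      import numpy as np

      def error_ellipse(cov, scale=2.4477):    # 2.4477 ~ 95% for a 2D Gaussian
          evals, evecs = np.linalg.eigh(cov)   # ascending eigenvalues
          order = evals.argsort()[::-1]
          evals, evecs = evals[order], evecs[:, order]
          semi_axes = scale * np.sqrt(evals)   # (major, minor) semi-axes
          angle = np.degrees(np.arctan2(evecs[1, 0], evecs[0, 0]))
          return semi_axes, angle              # orientation in degrees

      cov = np.array([[4.0, 1.5], [1.5, 1.0]])
      print(error_ellipse(cov))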

  15. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  16. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.

  17. Impact of animal waste application on runoff water quality in field experimental plots.

    PubMed

    Hill, Dagne D; Owens, William E; Tchoounwou, Paul B

    2005-08-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health are major concerns when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste were evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms, and fecal coliforms isolated from these samples were identified to the species level. Temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels were analyzed following the standard test protocols presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots, whereas broiler litter plots required eight simulated rainfall events to show the same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers

  18. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more reliable error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed, which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replicas), allowing their consistency to be judged.
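
    A compact Python sketch of the idea, with a simplified automatic-windowing heuristic rather than the paper's exact criterion (the reference implementation is the authors' Matlab code):

        import numpy as np

        def autocorr_error(x, c=6.0):
            """Mean, statistical error and integrated autocorrelation time of a series."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            d = x - x.mean()
            var = d @ d / n
            tau_int = 0.5
            for t in range(1, n // 2):
                rho = (d[:-t] @ d[t:]) / ((n - t) * var)  # normalized autocorrelation
                tau_int += rho
                if t >= c * tau_int:                      # simple automatic window
                    break
            err = np.sqrt(2.0 * tau_int * var / n)        # error of the mean
            return x.mean(), err, tau_int

        # AR(1) test data with a known autocorrelation time:
        rng = np.random.default_rng(1)
        x = np.zeros(100_000)
        for i in range(1, len(x)):
            x[i] = 0.9 * x[i - 1] + rng.standard_normal()
        m, e, tau = autocorr_error(x)
        print(m, e, tau)   # tau should be near (1 + 0.9) / (2 * (1 - 0.9)) = 9.5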

  20. Characterizing water surface elevation under different flow conditions for the upcoming SWOT mission

    NASA Astrophysics Data System (ADS)

    Domeneghetti, A.; Schumann, G. J.-P.; Frasson, R. P. M.; Wei, R.; Pavelsky, T. M.; Castellarin, A.; Brath, A.; Durand, M. T.

    2018-06-01

    The Surface Water and Ocean Topography satellite mission (SWOT), scheduled for launch in 2021, will deliver two-dimensional observations of water surface heights for lakes, rivers wider than 100 m, and oceans. Although the scientific literature has highlighted several fields of application for the expected products, detailed simulations of the SWOT radar performance for a realistic river scenario have not yet been presented. Understanding the error of the most fundamental "raw" SWOT hydrology product is important for a clear awareness of the strengths and limits of the forthcoming satellite observations. This study focuses on a reach (∼140 km in length) of the middle-lower portion of the Po River, in Northern Italy, and, to date, represents one of the few real-case analyses of the spatial patterns in water surface elevation accuracy expected from SWOT. The river stretch is characterized by a main channel varying from 100 to 500 m in width and a large floodplain (up to 5 km) delimited by a system of major embankments. The simulation of the water surface along the Po River for different flow conditions (high, low and mean annual flows) is performed with inputs from a quasi-2D model implemented using detailed topographic and bathymetric information (LiDAR, 2 m resolution). By employing a simulator that mimics many SWOT satellite sensor characteristics and generates proxies of the remotely sensed hydrometric data, this study characterizes the spatial observations potentially provided by SWOT. We evaluate SWOT performance under different hydraulic conditions and assess possible effects of river embankments, river width, river topography and distance from the satellite ground track. Despite analyzing errors from the raw radar pixel cloud, which receives minimal processing, the present study highlights the promising potential of this Ka-band interferometer for measuring water surface elevations, with mean elevation errors of 0.1 cm and 21

  1. 12 CFR 226.13 - Billing error resolution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... statement of a computational or similar error of an accounting nature that is made by the creditor. (6) A... least 20 days before the end of the billing cycle for which the statement was required. (b) Billing... applicable, within 2 complete billing cycles (but in no event later than 90 days) after receiving a billing...

  2. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA to safety improvement of proton treatment planning at the authors' center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected, and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment-planning-related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of the FMEA framework and the implementation of an ongoing error tracking system at
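
    A minimal sketch of the RPN bookkeeping described above; the failure modes and 1-10 ratings are invented placeholders, not the authors' data:

        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            name: str
            occurrence: int     # 1 (rare) .. 10 (frequent)
            severity: int       # 1 (negligible) .. 10 (catastrophic)
            detectability: int  # 1 (always caught) .. 10 (undetectable)

            @property
            def rpn(self) -> int:
                # Risk priority number: product of the three ratings
                return self.occurrence * self.severity * self.detectability

        modes = [
            FailureMode("wrong image fusion", 3, 8, 4),
            FailureMode("contour on wrong structure", 2, 9, 3),
            FailureMode("plan exported to wrong patient ID", 1, 10, 2),
        ]

        # Address the highest-RPN modes first; re-rate as the error database grows.
        for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"{m.name:35s} RPN = {m.rpn}")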

  3. Round-off error in long-term orbital integrations using multistep methods

    NASA Technical Reports Server (NTRS)

    Quinlan, Gerald D.

    1994-01-01

    Techniques for reducing round-off error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
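
    A related, standard software technique is compensated summation, which recovers much of the benefit of extended-precision accumulation while using only working precision; the sketch below (Neumaier's variant) is illustrative and is not the exact scheme evaluated in the paper:

        def compensated_sum(values):
            """Neumaier's compensated summation: tracks lost low-order bits."""
            total = 0.0
            comp = 0.0                        # running compensation term
            for v in values:
                t = total + v
                if abs(total) >= abs(v):
                    comp += (total - t) + v   # low-order digits of v were lost
                else:
                    comp += (v - t) + total   # low-order digits of total were lost
                total = t
            return total + comp

        vals = [1e16, 1.0, -1e16] * 1000
        print(sum(vals))              # plain summation loses every 1.0: 0.0
        print(compensated_sum(vals))  # compensated: 1000.0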

  4. Interaction of the Global Energy and Water Cycle Experiment (GEWEX) Water Resources Applications Project (WRAP) and Coordinated Enhanced Observing Project (CEOP) in Support of Water Resource Management and Planning

    NASA Astrophysics Data System (ADS)

    Martz, L.

    2004-05-01

    The Water Resources Applications Project (WRAP) has been developed within the Global Energy and Water Cycle Experiment (GEWEX) to facilitate the testing of GEWEX products and their transfer to operational water managers. The WRAP activity builds upon projects within the GEWEX Continental Scale Experiments (CSEs), and facilitates dialogue between these CSEs and their local water management communities regarding their information needs and opportunities for GEWEX products to meet those needs. Participating Continental Scale Experiments are located in the United States, the Mackenzie River Basin in Canada, the Amazon River Basin in Brazil, the Baltic Sea drainage area, eastern Asia and the Murray-Darling Basin in Australia. In addition, the development of WRAP is facilitating the transfer of techniques and demonstration projects to other areas through collaboration with IAHS, UNESCO/WMO HELP, WMO Hydrology and WWAP. The initiation of CEOP presents a significant new opportunity for collaborations to support the application of global hydro-climatological scientific data and techniques to water resource management. Some important scientific and operational issues identified by water resource management professionals in earlier workshops will be reviewed, some scientific initiatives needed to address these issues will be presented, and some case study examples of the application of GEWEX knowledge to water resource problems will be presented. Against this background, the unique opportunities that CEOP provides to improve our use and management of water resources globally will be discussed.

  5. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
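
    A minimal sketch of this kind of Latin hypercube propagation through a logistic regression, using SciPy; the coefficient values and uncertainties are invented placeholders, not the study's fitted model:

        import numpy as np
        from scipy.stats import norm, qmc

        n = 10_000
        u = qmc.LatinHypercube(d=3, seed=42).random(n)   # stratified samples in [0, 1)^3

        # Map uniform LHS samples onto the input error distributions:
        b0 = norm.ppf(u[:, 0], loc=-2.0, scale=0.30)     # intercept (model error)
        b1 = norm.ppf(u[:, 1], loc=0.80, scale=0.10)     # coefficient (model error)
        x = norm.ppf(u[:, 2], loc=1.50, scale=0.20)      # explanatory variable (data error)

        # Logistic regression prediction of vulnerability probability:
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

        # The spread of p is the prediction uncertainty:
        print(f"median p = {np.median(p):.3f}, 90% interval = "
              f"[{np.percentile(p, 5):.3f}, {np.percentile(p, 95):.3f}]")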

  6. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam-pointing-error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  7. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. The first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.

  8. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. The uncertainty has also been separated into variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization; the result is the minimum error and the optimum number of levels. The results obtained at the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) varies inversely with it. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water
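
    For reference, the underlying column integral is PWV = (1/(rho_w*g)) * integral of q dp; a minimal Python sketch with an invented profile (the paper's method additionally sub-samples the levels to minimize the total error):

        import numpy as np

        RHO_W = 1000.0   # density of liquid water, kg m^-3
        G = 9.81         # gravitational acceleration, m s^-2

        def pwv_mm(p_hpa, q):
            """PWV (mm) from pressure levels (hPa, surface first) and specific humidity (kg/kg)."""
            p = np.asarray(p_hpa) * 100.0         # hPa -> Pa
            w = -np.trapz(q, p) / (RHO_W * G)     # column water in metres (p decreases upward)
            return w * 1000.0                     # metres -> mm

        p = [1013, 925, 850, 700, 500, 300, 200]  # invented profile
        q = [12e-3, 9e-3, 7e-3, 4e-3, 1.5e-3, 0.2e-3, 0.02e-3]
        print(f"PWV = {pwv_mm(p, q):.1f} mm")     # ~31 mm, within the 0.9-41 mm range above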

  9. Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.

    2006-01-01

    Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.

  10. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.

  11. A new method of calculating electrical conductivity with applications to natural waters

    NASA Astrophysics Data System (ADS)

    McCleskey, R. Blaine; Nordstrom, D. Kirk; Ryan, Joseph N.; Ball, James W.

    2012-01-01

    A new method is presented for calculating the electrical conductivity of natural waters that is accurate over a large range of effective ionic strength (0.0004-0.7 mol kg-1), temperature (0-95 °C), pH (1-10), and conductivity (30-70,000 μS cm-1). The method incorporates a reliable set of equations to calculate the ionic molal conductivities of cations and anions (H+, Li+, Na+, K+, Cs+, NH4+, Mg2+, Ca2+, Sr2+, Ba2+, F-, Cl-, Br-, SO42-, HCO3-, CO32-, NO3-, and OH-), environmentally important trace metals (Al3+, Cu2+, Fe2+, Fe3+, Mn2+, and Zn2+), and ion pairs (HSO4-, NaSO4-, NaCO3-, and KSO4-). These equations are based on new electrical conductivity measurements for electrolytes found in a wide range of natural waters. In addition, the method is coupled to a geochemical speciation model that is used to calculate the speciated concentrations required for accurate conductivity calculations. The method was thoroughly tested by calculating the conductivities of 1593 natural water samples and the mean difference between the calculated and measured conductivities was -0.7 ± 5%. Many of the samples tested were selected to determine the limits of the method and include acid mine waters, geothermal waters, seawater, dilute mountain waters, and river water impacted by municipal waste water. Transport numbers were calculated and H+, Na+, Ca2+, Mg2+, NH4+, K+, Cl-, SO42-, HCO3-, CO32-, F-, Al3+, Fe2+, NO3-, and HSO4- substantially contributed (>10%) to the conductivity of at least one of the samples. Conductivity imbalance in conjunction with charge imbalance can be used to identify whether a cation or an anion measurement is likely in error, thereby providing an additional quality assurance/quality control constraint on water analyses.
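
    A minimal sketch of the additive ionic-contribution idea at the core of such methods; the limiting conductivities below are standard infinite-dilution values at 25 °C used purely for illustration, whereas the published method uses ionic-strength- and temperature-dependent equations plus a speciation model to obtain the free-ion concentrations first:

        LAMBDA_25C = {   # limiting molar conductivities, S cm^2 mol^-1 (illustrative)
            "H+": 349.8, "Na+": 50.1, "K+": 73.5, "Ca2+": 119.0, "Mg2+": 106.1,
            "Cl-": 76.3, "SO42-": 160.0, "HCO3-": 44.5, "NO3-": 71.4,
        }

        def conductivity_uS_cm(conc_mol_L):
            """Approximate EC (uS/cm) from molar concentrations of free ions."""
            kappa_S_cm = sum(LAMBDA_25C[ion] * c / 1000.0 for ion, c in conc_mol_L.items())
            return kappa_S_cm * 1e6

        # Dilute Na-Cl dominated water, ~2 mmol/L:
        water = {"Na+": 2e-3, "Cl-": 2e-3, "Ca2+": 1e-4, "HCO3-": 2e-4}
        print(f"EC ~ {conductivity_uS_cm(water):.0f} uS/cm")   # ~270 uS/cm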

  12. Combating omission errors through task analysis and good reminders.

    PubMed

    Reason, J

    2002-03-01

    Leaving out necessary task steps is the single most common human error type. Certain task steps possess characteristics that are more likely to provoke omissions than others, and can be identified in advance. The paper reports two studies. The first, involving a simple photocopier, established that failing to remove the last page of the original is the commonest omission. This step possesses four distinct error-provoking features that combine their effects in an additive fashion. The second study examined the degree to which everyday memory aids satisfy five features of a good reminder: conspicuity, contiguity, content, context, and countability. A close correspondence was found between the percentage use of strategies and the degree to which they satisfied these five criteria. A three stage omission management programme was outlined: task analysis (identifying discrete task steps) of some safety critical activity; assessing the omission likelihood of each step; and the choice and application of a suitable reminder. Such a programme is applicable to a variety of healthcare procedures.

  13. Applications of thermal remote sensing to detailed ground water studies

    NASA Technical Reports Server (NTRS)

    Souto-Maior, J.

    1973-01-01

    Three possible applications of thermal (8-14 microns) remote sensing to detailed hydrogeologic studies are discussed in this paper: (1) the direct detection of seeps and springs, (2) the indirect evaluation of shallow ground water flow through its thermal effects on the land surface, and (3) the indirect location of small volumes of ground water inflow into surface water bodies. An investigation carried out with this purpose in an area containing a complex shallow ground water flow system indicates that the interpretation of the thermal imageries is complicated by many factors, among which the most important are: (1) altitude, angle of view, and thermal-spatial resolution of the sensor; (2) vegetation type, density, and vigor; (3) topography; (4) climatological and micrometeorological effects; (5) variation in soil type and soil moisture; (6) variation in volume and temperature of ground water inflow; (7) the hydraulic characteristics of the receiving water body, and (8) the presence of decaying organic material.

  14. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
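
    For context, the Bowen-ratio energy-budget partitioning underlying these ET measurements can be sketched as follows (input values are invented placeholders):

        GAMMA = 0.066  # psychrometric constant near sea level, kPa/degC

        def bowen_ratio_fluxes(rn, g, dT, de):
            """Partition available energy (W/m^2) into latent (LE) and sensible (H) heat.

            dT: air temperature difference between two heights (degC);
            de: vapor pressure difference between the same heights (kPa).
            """
            beta = GAMMA * dT / de        # Bowen ratio = H / LE
            le = (rn - g) / (1.0 + beta)  # latent heat flux (evapotranspiration)
            return le, le * beta, beta

        le, h, beta = bowen_ratio_fluxes(rn=450.0, g=50.0, dT=0.4, de=0.25)
        print(f"LE = {le:.0f} W/m^2, H = {h:.0f} W/m^2, beta = {beta:.2f}")
        # Drier upland air has a larger Bowen ratio; limited fetch mixes it over
        # the wetland and biases measured ET slightly low, as the error model estimates.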

  15. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation that is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinement on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators match and exceed the first two in performance but require no special formulation of the element stiffness. Parallel treatment of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.

  16. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications that angular direct ionization events introduce into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data from a modern 28 nm SRAM-based device.

  17. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related accidents in helicopters were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center was used as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft accident records was analyzed on a case-by-case basis to assess the nature of the problems that need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  18. Programmatic Perspectives on Using `Rapid Prototyping Capability' for Water Management Applications Using NASA Products

    NASA Astrophysics Data System (ADS)

    Toll, D.; Friedl, L.; Entin, J.; Engman, E.

    2006-12-01

    The NASA Water Management Program addresses concerns and decision making related to water availability, water forecasting and water quality. The goal of the Water Management Program Element is to encourage water management organizations to use NASA Earth science data, model products, technology and other capabilities in their decision support tools (DSTs) for problem solving. The goal of the NASA Rapid Prototyping Capability (RPC) is to speed the evaluation of these NASA products and technologies to improve current and future DSTs by reducing the time to access, configure, and assess the effectiveness of NASA products and technologies. The NASA Water Management Program Element partners with Federal agencies, academia and private firms, and may include international organizations. Currently, the NASA Water Management Program oversees eight application projects. However, water management is a very broad descriptor of a much larger number of activities that are carried out to ensure a safe and plentiful water supply for humans, industry and agriculture, promote environmental stewardship, and mitigate disasters such as floods and droughts. The goal of this presentation is to summarize how the RPC may further enhance the effectiveness of using NASA products for water management applications.

  19. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    USGS Publications Warehouse

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and the determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m3 s-1 (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change in island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure changes, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed. During neap tides sediment was imported onto the island but during spring tides sediment was exported because the main
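
    A minimal sketch of how independent relative error contributions of this kind combine in quadrature; the component values below are illustrative stand-ins (chosen near the reported 22% water-budget imbalance), not the study's actual error budget:

        import math

        def combined_relative_error(*rel_errors):
            """Total relative error of a product of independent quantities."""
            return math.sqrt(sum(e * e for e in rel_errors))

        err_water_flux = 0.22   # e.g. unclosed water balance
        err_ssc = 0.15          # e.g. constituent concentration determination
        print(f"net flux error ~ {combined_relative_error(err_water_flux, err_ssc):.0%}")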

  20. Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology

    NASA Astrophysics Data System (ADS)

    de Podesta, Michael; Bell, Stephanie; Underwood, Robin

    2018-04-01

    In both meteorological and metrological applications, it is well known that air temperature sensors are susceptible to radiative errors. However, it is not widely known that the radiative error measured by an air temperature sensor in flowing air depends upon the sensor diameter, with smaller sensors reporting values closer to true air temperature. This is not a transient effect related to sensor heat capacity, but a fluid-dynamical effect arising from heat and mass flow in cylindrical geometries. This result has been known historically and is in meteorology text books. However, its significance does not appear to be widely appreciated and, as a consequence, air temperature can be—and probably is being—widely mis-estimated. In this paper, we first review prior descriptions of the ‘sensor size’ effect from the metrological and meteorological literature. We develop a heat transfer model to describe the process for cylindrical sensors, and evaluate the predicted temperature error for a range of sensor sizes and air speeds. We compare these predictions with published predictions and measurements. We report measurements demonstrating this effect in two laboratories at NPL in which the air flow and temperature are exceptionally closely controlled. The results are consistent with the heat-transfer model, and show that the air temperature error is proportional to the square root of the sensor diameter and that, even under good laboratory conditions, it can exceed 0.1 °C for a 6 mm diameter sensor. We then consider the implications of this result. In metrological applications, errors of the order of 0.1 °C are significant, representing limiting uncertainties in dimensional and mass measurements. In meteorological applications, radiative errors can easily be much larger. But in both cases, an understanding of the diameter dependence allows assessment and correction of the radiative error using a multi-sensor technique.
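
    A minimal sketch of the multi-sensor idea: if the radiative error grows as the square root of diameter, readings from two sensor diameters extrapolate to the true air temperature at d = 0 (the readings and coefficient below are invented placeholders):

        import math

        def extrapolate_air_temperature(t1, d1, t2, d2):
            """Solve T_meas = T_air + k*sqrt(d) for T_air from two sensor readings."""
            s1, s2 = math.sqrt(d1), math.sqrt(d2)
            k = (t2 - t1) / (s2 - s1)   # radiative-error coefficient
            return t1 - k * s1

        # 2 mm and 6 mm sensors in the same flow, both biased warm by radiation:
        t_air = extrapolate_air_temperature(t1=20.06, d1=2e-3, t2=20.10, d2=6e-3)
        print(f"estimated true air temperature = {t_air:.3f} degC")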

  1. Application and Prospect of Big Data in Water Resources

    NASA Astrophysics Data System (ADS)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of advances in information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic attention to Big Data applications in water resources is recent, and as a result water-resources Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying Big Data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical framework; here we define "Water Big Data" and explain its three dimensions: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies such as data mining and web crawlers are proposed. Finally, the prospects for applying Big Data in water resources are discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data Driven Decision) approaches will be utilized more in water resources management in the future.

  2. Refractive errors.

    PubMed

    Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf

    2016-10-14

    All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.

  3. The effects of center of rotation errors on cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.

    2003-10-01

    In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using

  4. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  5. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  6. Datasets related to in-land water for limnology and remote sensing applications: distance-to-land, distance-to-water, water-body identifier and lake-centre co-ordinates.

    PubMed

    Carrea, Laura; Embury, Owen; Merchant, Christopher J

    2015-11-01

    Datasets containing information to locate and identify water bodies have been generated from data locating static water bodies with resolution of about 300 m (1/360°), recently released by the Land Cover Climate Change Initiative (LC CCI) of the European Space Agency. The LC CCI water-bodies dataset has been obtained from multi-temporal metrics based on time series of the backscattered intensity recorded by ASAR on Envisat between 2005 and 2010. The new derived datasets coherently provide: distance to land, distance to water, water-body identifiers and lake-centre locations. The water-body identifier dataset locates the water bodies by assigning the identifiers of the Global Lakes and Wetlands Database (GLWD), and lake centres are defined for in-land waters for which GLWD IDs were determined. The new datasets therefore link recent lake/reservoir/wetlands extent to the GLWD, together with a set of coordinates which locates the water bodies unambiguously in the database. Information on distance-to-land for each water cell and distance-to-water for each land cell has many potential applications in remote sensing, where the applicability of geophysical retrieval algorithms may be affected by the presence of water or land within a satellite field of view (image pixel). During the generation and validation of the datasets some limitations of the GLWD database and of the LC CCI water-bodies mask have been found. Some examples of the inaccuracies/limitations are presented and discussed. Temporal change in water-body extent is common. Future versions of the LC CCI dataset are planned to represent temporal variation, and this will permit these derived datasets to be updated.

  7. Effects of ozone and ozone/peroxide on trace organic contaminants and NDMA in drinking water and water reuse applications.

    PubMed

    Pisarenko, Aleksey N; Stanford, Benjamin D; Yan, Dongxu; Gerrity, Daniel; Snyder, Shane A

    2012-02-01

    An ozone and ozone/peroxide oxidation process was evaluated at pilot scale for trace organic contaminant (TOrC) mitigation and NDMA formation in both drinking water and water reuse applications. A reverse osmosis (RO) pilot was also evaluated as part of the water reuse treatment train. Ozone/peroxide showed lower electrical energy per order of removal (EEO) values for TOrCs in surface water treatment, but the addition of hydrogen peroxide increased EEO values during wastewater treatment. TOrC oxidation was correlated to changes in UV(254) absorbance and fluorescence offering a surrogate model for predicting contaminant removal. A decrease in N-nitrosodimethylamine (NDMA) formation potential (after chloramination) was observed after treatment with ozone and ozone/peroxide. However, during spiking experiments with surface water, ozone/peroxide achieved limited destruction of NDMA, while in wastewaters net direct formation of NDMA of 6-33 ng/L was observed after either ozone or ozone/peroxide treatment. Once formed during ozonation, NDMA passed through the subsequent RO membranes, which highlights the significance of the potential for direct NDMA formation during oxidation in reuse applications. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

    Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, describes common definitions for refractive errors, and outlines the clinical characteristics of pathological changes. Refractive errors differ between age groups, both because refraction changes over the lifetime and because of generation-specific factors. Current research on the etiology of refractive errors has strengthened the evidence for the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.

  9. 76 FR 30936 - West Maui Pumped Storage Water Supply, LLC; Notice of Preliminary Permit Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-27

    ... Storage Water Supply, LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting...-acre reservoir; (4) a turnout to supply project effluent water to an existing irrigation system; (5) a...,000 megawatt-hours. Applicant Contact: Bart M. O'Keeffe, West Maui Pumped Storage Water Supply, LLC, P...

  10. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
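
    A minimal sketch of the kind of mass-balance audit that exposes such imbalances: over any period, inflow mass should equal outflow mass plus the change in storage (all series below are invented placeholders, not EPANET output):

        def mass_balance_residual(dt, q_in, c_in, q_out, c_out, stored_0, stored_1):
            """Residual mass (g) and relative imbalance over one audit period.

            dt: time step (s); q_*: flows (m^3/s); c_*: concentrations (g/m^3),
            as equal-length sequences; stored_*: constituent mass in storage (g).
            """
            mass_in = sum(q * c * dt for q, c in zip(q_in, c_in))
            mass_out = sum(q * c * dt for q, c in zip(q_out, c_out))
            residual = mass_in - mass_out - (stored_1 - stored_0)
            return residual, residual / mass_in

        res, rel = mass_balance_residual(
            dt=300.0,
            q_in=[0.5] * 12, c_in=[1.0] * 12,     # one hour of constant inflow
            q_out=[0.5] * 12, c_out=[0.9] * 12,   # simulated outflow concentrations
            stored_0=100.0, stored_1=250.0,
        )
        print(f"residual = {res:.0f} g ({rel:.1%} of inflow mass)")  # nonzero => imbalance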

  11. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches stem from applying the principle of minimizing mean squared distance, which rests on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited the properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1), but no general framework has been available for handling arbitrary error functionals in a flexible and computationally efficient way. In this paper, we develop a theory and basic universal data approximation algorithms (k-means, principal components, principal manifolds and graphs, regularized and sparse regression), based on piece-wise quadratic error potentials of subquadratic growth (PQSQ potentials). We develop a new and universal framework to minimize arbitrary sub-quadratic error potentials using an algorithm with guaranteed fast convergence to the local or global error minimum. The theory of PQSQ potentials is based on the notion of the cone of minorant functions, and represents a natural approximation formalism based on the application of min-plus algebra. The approach can be applied in most existing machine learning methods, including methods of data approximation and regularized and sparse regression, leading to an improvement in the computational cost/accuracy trade-off. We demonstrate that on synthetic and real-life datasets PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods, with similar or better approximation accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
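
    A minimal sketch, in the spirit of the PQSQ construction rather than the authors' exact algorithm, of approximating a subquadratic potential (here |x|) by quadratic pieces that match it at interval boundaries and are trimmed (flat) beyond the last knot:

        def pqsq_pieces(target, knots):
            """Coefficients (b_k, c_k) of b_k*x^2 + c_k on [knots[k], knots[k+1])."""
            pieces = []
            for r0, r1 in zip(knots[:-1], knots[1:]):
                b = (target(r1) - target(r0)) / (r1**2 - r0**2)
                c = target(r0) - b * r0**2
                pieces.append((b, c))
            return pieces

        def pqsq_eval(x, pieces, knots, target):
            x = abs(x)
            for (b, c), r1 in zip(pieces, knots[1:]):
                if x < r1:
                    return b * x * x + c
            return target(knots[-1])   # trimmed: flat beyond the last knot

        knots = [0.0, 0.5, 1.0, 2.0, 4.0]
        pieces = pqsq_pieces(abs, knots)
        for x in [0.2, 0.7, 1.5, 3.0, 10.0]:
            print(x, round(pqsq_eval(x, pieces, knots, abs), 3))  # tracks |x|, saturates at 4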

  12. Evaluation of coastal zone color scanner diffuse attenuation coefficient algorithms for application to coastal waters

    NASA Astrophysics Data System (ADS)

    Mueller, James L.; Trees, Charles C.; Arnone, Robert A.

    1990-09-01

    The Coastal Zone Color Scanner (CZCS) and associated atmospheric and in-water algorithms have allowed synoptic analyses of regional and large-scale variability of bio-optical properties [phytoplankton pigments and the diffuse attenuation coefficient K(490)]. Austin and Petzold (1981) developed a robust in-water K(490) algorithm which relates the diffuse attenuation coefficient at one optical depth [1/K(490)] to the ratio of the water-leaving radiances at 443 and 550 nm. Their regression analysis included diffuse attenuation coefficients K(490) up to 0.40 m-1, but excluded data from estuarine areas and other Case II waters, where the optical properties are not predominantly determined by phytoplankton. In these areas, errors are induced in the remote-sensing retrieval of K(490) by extremely low water-leaving radiance at 443 nm [Lw(443) as viewed at the sensor may be only 1 or 2 digital counts], and improved accuracy can be realized using algorithms based on wavelengths where Lw(λ) is larger. Using ocean optical profiles acquired by the Visibility Laboratory, algorithms are developed to predict K(490) from ratios of water-leaving radiances at 520 and 670 nm, as well as 443 and 550 nm.
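
    A minimal sketch of a band-ratio algorithm of the Austin-Petzold form; the coefficients are illustrative placeholders of roughly the right magnitude, not the published values:

        def k490_band_ratio(lw_blue, lw_green, a=0.022, b=0.088, c=-1.49):
            """K(490) in 1/m from water-leaving radiances in two bands (power law)."""
            return a + b * (lw_blue / lw_green) ** c

        # Clear water: Lw(443) large relative to Lw(550) -> small K(490).
        print(k490_band_ratio(2.0, 1.0))   # ~0.05 per metre
        # Turbid Case II water: Lw(443) collapses toward the sensor noise floor,
        # motivating ratios at 520/670 nm where the radiances stay measurable.
        print(k490_band_ratio(0.3, 1.0))   # large K(490), noisy retrieval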

  13. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures that require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill both user/application QoS requirements and the corresponding network-level QoS requirements. For instance, to support on-time data delivery for a real-time application, a WBSN needs to guarantee a constrained delay at the network level. Network coding-based error recovery is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the user/application and network perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to adapt in both contexts, thereby providing improved QoS support in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
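
    A minimal sketch of the simplest network-coding error recovery: one XOR parity packet per block lets the receiver rebuild any single lost packet without retransmission (payloads are invented; adaptive schemes vary the amount of redundancy with link conditions):

        from functools import reduce

        def xor_packets(packets):
            """Bytewise XOR of equal-length packets."""
            return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*packets))

        data = [b"ECG-0001", b"ECG-0002", b"ECG-0003"]   # placeholder sensor payloads
        parity = xor_packets(data)                       # coded packet sent with the block

        received = [data[0], None, data[2], parity]      # data packet 2 lost in transit
        lost = received.index(None)
        recovered = xor_packets([p for p in received if p is not None])
        assert recovered == data[lost]
        print("recovered:", recovered)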

  14. Modeling vadose zone processes during land application of food-processing waste water in California's Central Valley.

    PubMed

    Miller, Gretchen R; Rubin, Yoram; Mayer, K Ulrich; Benito, Pascual H

    2008-01-01

    Land application of food-processing waste water occurs throughout California's Central Valley and may be degrading local ground water quality, primarily by increasing salinity and nitrogen levels. Natural attenuation is considered a treatment strategy for the waste, which often contains elevated levels of easily degradable organic carbon. Several key biogeochemical processes in the vadose zone alter the characteristics of the waste water before it reaches the ground water table, including microbial degradation, crop nutrient uptake, mineral precipitation, and ion exchange. This study used a process-based, multi-component reactive flow and transport model (MIN3P) to numerically simulate waste water migration in the vadose zone and to estimate its attenuation capacity. To address the high variability in site conditions and waste-stream characteristics, four food-processing industries were coupled with three site scenarios to simulate a range of land application outcomes. The simulations estimated that typically between 30 and 150% of the salt loading to the land surface reaches the ground water, resulting in dissolved solids concentrations up to sixteen times larger than the 500 mg L(-1) water quality objective. Site conditions, namely the ratio of hydraulic conductivity to the application rate, strongly influenced the amount of nitrate reaching the ground water, which ranged from zero to nine times the total loading applied. Rock-water interaction and nitrification explain salt and nitrate concentrations that exceed the levels present in the waste water. While source control remains the only method to prevent ground water degradation from saline wastes, proper site selection and waste application methods can reduce the risk of ground water degradation from nitrogen compounds.

  15. The Water Availability Tool for Environmental Resources (WATER): A Water-Budget Modeling Approach for Managing Water-Supply Resources in Kentucky - Phase I: Data Processing, Model Development, and Application to Non-Karst Areas

    USGS Publications Warehouse

    Williamson, Tanja N.; Odom, Kenneth R.; Newson, Jeremy K.; Downs, Aimee C.; Nelson, Hugh L.; Cinotto, Peter J.; Ayers, Mark A.

    2009-01-01

    The Water Availability Tool for Environmental Resources (WATER) was developed in cooperation with the Kentucky Division of Water to provide a consistent and defensible method of estimating streamflow and water availability in ungaged basins. WATER is process-oriented; it is based on the TOPMODEL code and incorporates historical water-use data together with physiographic data that quantitatively describe topography and soil-water storage. The result is a user-friendly decision tool that can estimate water availability in non-karst areas of Kentucky without additional data or processing. The model runs on a daily time step, and critical source data include a historical record of daily temperature and precipitation, digital elevation models (DEMs), the Soil Survey Geographic Database (SSURGO), and historical records of water discharges and withdrawals. The model was calibrated and statistically evaluated for 12 basins by comparing the estimated discharge to that observed at U.S. Geological Survey streamflow-gaging stations. When statistically evaluated over a 2,119-day time period, the discharge estimates showed a bias of -0.29 to 0.42, a root mean square error of 1.66 to 5.06, a correlation of 0.54 to 0.85, and a Nash-Sutcliffe efficiency of 0.26 to 0.72. The parameter and input modifications that most significantly improved the accuracy and precision of streamflow-discharge estimates were the addition of Next Generation radar (NEXRAD) precipitation data, a rooting depth of 30 centimeters, and a TOPMODEL scaling parameter (m) derived directly from SSURGO data that was multiplied by an adjustment factor of 0.10. No site-specific optimization was used.
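
    The goodness-of-fit statistics quoted above are standard measures; as a minimal illustration (not the USGS implementation, and with "bias" taken here as mean error, which may differ from the report's exact definition), the following computes bias, RMSE, Pearson correlation, and Nash-Sutcliffe efficiency for paired observed/simulated discharge series:

      import math

      def fit_stats(obs, sim):
          """Bias (mean error), RMSE, Pearson correlation, and Nash-Sutcliffe
          efficiency, NSE = 1 - sum((o - s)^2) / sum((o - mean(o))^2)."""
          n = len(obs)
          mo, ms = sum(obs) / n, sum(sim) / n
          bias = ms - mo
          rmse = math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / n)
          var_o = sum((o - mo) ** 2 for o in obs)
          var_s = sum((s - ms) ** 2 for s in sim)
          cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
          corr = cov / math.sqrt(var_o * var_s)
          nse = 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / var_o
          return bias, rmse, corr, nse

      # Hypothetical daily discharge values (same units for both series).
      obs = [12.0, 9.5, 8.1, 20.4, 15.2, 11.0]
      sim = [10.8, 9.9, 7.4, 18.9, 16.5, 12.1]
      print(fit_stats(obs, sim))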

  16. Calculating Water Thermodynamics in the Binding Site of Proteins - Applications of WaterMap to Drug Discovery.

    PubMed

    Cappel, Daniel; Sherman, Woody; Beuming, Thijs

    2017-01-01

    The ability to accurately characterize the solvation properties (water locations and thermodynamics) of biomolecules is of great importance to drug discovery. While crystallography, NMR, and other experimental techniques can assist in determining the structure of water networks in proteins and protein-ligand complexes, most water molecules are not fully resolved and accurately placed. Furthermore, understanding the energetic effects of solvation and desolvation on binding requires an analysis of the thermodynamic properties of solvent involved in the interaction between ligands and proteins. WaterMap is a molecular dynamics-based computational method that uses statistical mechanics to describe the thermodynamic properties (entropy, enthalpy, and free energy) of water molecules at the surface of proteins. This method can be used to assess the solvent contributions to ligand binding affinity and to guide lead optimization. In this review, we provide a comprehensive summary of published uses of WaterMap, including applications to lead optimization, virtual screening, selectivity analysis, ligand pose prediction, and druggability assessment.
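
    WaterMap itself is proprietary software, but the thermodynamic bookkeeping it reports is simple to state: each hydration site carries enthalpy and entropy estimates relative to bulk water, and the free energy dG = dH - T*dS ranks the sites whose displacement by a ligand is predicted to be most favorable. A toy sketch of that bookkeeping only, with hypothetical site IDs and values:

      # Illustrative bookkeeping only -- not the WaterMap implementation;
      # site IDs and thermodynamic values below are hypothetical.
      T = 300.0                     # temperature, kelvin

      sites = [                     # (site, dH [kcal/mol], dS [kcal/(mol K)]) vs. bulk
          ("W1", -1.2, -0.0095),
          ("W2",  0.4, -0.0035),
          ("W3", -2.6, -0.0028),
      ]

      def dG(dh, ds, temp=T):
          return dh - temp * ds     # dG = dH - T*dS

      # Highest-dG (least stable) sites are the best displacement targets.
      for name, dh, ds in sorted(sites, key=lambda s: dG(s[1], s[2]), reverse=True):
          print(f"{name}: dG = {dG(dh, ds):+.2f} kcal/mol relative to bulk")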

  17. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
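
    As a toy illustration of the stack algorithm that the simulations employed (under assumptions the abstract does not fix: a rate-1/2, memory-2 convolutional code, a binary symmetric channel, and the standard Fano metric), the sketch below encodes a short information sequence, corrupts it, and decodes sequentially:

      import heapq, math, random

      def step(state, b):
          """Rate-1/2, memory-2 convolutional encoder step (generators 7, 5 octal)."""
          s1, s0 = state >> 1 & 1, state & 1
          return (b << 1) | s1, (b ^ s1 ^ s0, b ^ s0)

      def encode(bits):
          state, out = 0, []
          for b in bits:
              state, (c0, c1) = step(state, b)
              out += [c0, c1]
          return out

      def stack_decode(rx, L, p, R=0.5):
          """Stack-algorithm sequential decoding with the Fano metric on a BSC(p)."""
          hit = math.log2(2 * (1 - p)) - R      # metric gain per matching code bit
          miss = math.log2(2 * p) - R           # metric loss per mismatched bit
          stack = [(0.0, 0, 0, ())]             # (-metric, depth, state, decoded bits)
          while stack:
              negm, d, st, path = heapq.heappop(stack)
              if d == L:
                  return list(path)
              r0, r1 = rx[2 * d], rx[2 * d + 1]
              for b in (0, 1):
                  ns, (c0, c1) = step(st, b)
                  m = -negm + (hit if c0 == r0 else miss) + (hit if c1 == r1 else miss)
                  heapq.heappush(stack, (-m, d + 1, ns, path + (b,)))

      # Toy run: 16 info bits plus a 2-bit tail, BSC crossover probability 0.02.
      random.seed(7)
      p, info = 0.02, [random.randint(0, 1) for _ in range(16)]
      tx = encode(info + [0, 0])
      rx = [c ^ (random.random() < p) for c in tx]
      print(stack_decode(rx, 18, p)[:16] == info)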

  18. SURFACE APPLICATION SYSTEM FOR IN SITU GROUND-WATER BIOREMEDIATION: SITE CHARACTERIZATION AND MODELING

    EPA Science Inventory

    Surface application offers an inexpensive, noninvasive alternative to injection wells and infiltration galleries for in situ ground-water bioremediation applications. The technology employs artificial recharge to create favorable hydraulic conditions for mixing and vertical tran...

  19. Renewable Energy in Water and Wastewater Treatment Applications; Period of Performance: April 1, 2001--September 1, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argaw, N.

    2003-06-01

    This guidebook will help readers understand where and how renewable energy technologies can be used for water and wastewater treatment applications. It is designed specifically for water supply and wastewater treatment in rural areas and small urban centers. The guidebook also provides basic information on selecting water resources and on the various kinds of commercially available water supply and wastewater treatment technologies and power sources currently on the market.

  20. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
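
    The report's central identity can be stated compactly, with the classic example of summing Theis drawdowns from several pumping wells (a standard application of the principle, not necessarily one of the report's own worked problems):

      % Superposition: for a linear operator L governing ground-water flow,
      \[
        L h_1 = f_1 ,\qquad L h_2 = f_2
        \quad\Longrightarrow\quad
        L(\alpha h_1 + \beta h_2) = \alpha f_1 + \beta f_2 .
      \]
      % Example: the composite drawdown from n pumping wells in a confined
      % aquifer is the sum of the individual Theis solutions,
      \[
        s(x,y,t) = \sum_{i=1}^{n} \frac{Q_i}{4\pi T}\, W(u_i),
        \qquad u_i = \frac{r_i^{2} S}{4 T t},
      \]
      % with Q_i the pumping rate of well i, r_i the distance from well i to
      % the observation point, T transmissivity, S storativity, and W the
      % Theis well function.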