Sample records for water application errors

  1. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
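
A Dalton-type estimate can be sketched as follows; the relative-humidity correction here is a hypothetical stand-in for the paper's modification, the Magnus formula supplies saturation vapor pressure, and the constants k, a, b, c are illustrative placeholders rather than the values fitted at Aixinzhuang.

```python
# Sketch of a Dalton-type open-water evaporation model with a hypothetical
# relative-humidity correction; constants are illustrative placeholders.
import math

def saturation_vapor_pressure(t_c):
    """Magnus formula: saturation vapor pressure (hPa) at temperature t_c (deg C)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dalton_evaporation(t_c, rh, wind, k=0.13, a=1.0, b=0.5):
    """Daily evaporation (mm/day): E = k * (a + b*u) * (e_s - e_a)."""
    e_s = saturation_vapor_pressure(t_c)
    e_a = rh * e_s                       # actual vapor pressure from relative humidity
    return k * (a + b * wind) * (e_s - e_a)

def modified_dalton(t_c, rh, wind, c=0.4, **kwargs):
    """Hypothetical humidity-corrected variant: damp the estimate as rh rises,
    standing in for the paper's relative-humidity modification."""
    return (1.0 - c * rh) * dalton_evaporation(t_c, rh, wind, **kwargs)
```

At saturation (rh = 1) the vapor-pressure deficit vanishes and both variants return zero evaporation, as expected physically.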

  2. FIBER AND INTEGRATED OPTICS, LASER APPLICATIONS, AND OTHER PROBLEMS IN QUANTUM ELECTRONICS: Raman scattering spectra recorded in the course of the water-ice phase transition and laser diagnostics of heterophase water systems

    NASA Astrophysics Data System (ADS)

    Glushkov, S. M.; Panchishin, I. M.; Fadeev, V. V.

    1989-04-01

    The method of laser Raman spectroscopy was used to study heterophase water systems. The apparatus included an argon laser, an optical multichannel analyzer, and a microcomputer. The temperature dependences of the profiles of the valence (stretching) band in the Raman spectrum of liquid water between +50 °C and −7 °C and of polycrystalline ice Ih (from 0 to −62 °C) were determined, as well as the spectral polarization characteristics of the Raman valence band. A method was developed for the determination of the partial concentrations of the H2O molecules in liquid and solid phases present as a mixture. An analysis was made of the errors of the method and the sources of these errors. Applications of the method to multiparameter problems in more complex water systems (for example, solutions of potassium iodide in water) were considered. Other potential practical applications of the method were discussed.

  3. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. This paper discusses the application of GPS signals in meteorology, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. A comparative study in terms of root mean square error and mean absolute error is also carried out for all the algorithms.
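
The scoring step common to all three algorithms can be sketched as below. Ordinary least squares stands in for the paper's multiple linear regression (an ANN or SVM would be scored with the same two functions), and the predictor/target numbers are invented.

```python
# Score predictors of precipitable water vapor (PWV) by RMSE and MAE.
import numpy as np

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mae(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(obs - pred)))

def fit_mlr(X, y):
    """Least-squares fit of y against X with an intercept column."""
    A = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    beta, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return beta

def predict_mlr(beta, X):
    A = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    return A @ beta

# Toy surface observations (temperature in deg C, pressure in hPa) vs PWV (mm):
X = [[25.0, 1010.0], [30.0, 1005.0], [20.0, 1015.0], [28.0, 1008.0]]
y = [30.0, 38.0, 24.0, 35.0]
beta = fit_mlr(X, y)
scores = {"rmse": rmse(y, predict_mlr(beta, X)), "mae": mae(y, predict_mlr(beta, X))}
```

Note that RMSE is never smaller than MAE for the same residuals, which is why the two are usually reported together.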

  4. Applicability of Kinematic and Diffusive models for mud-flows: a steady state analysis

    NASA Astrophysics Data System (ADS)

    Di Cristo, Cristiana; Iervolino, Michele; Vacca, Andrea

    2018-04-01

    The paper investigates the applicability of Kinematic and Diffusive Wave models for mud-flows with a power-law shear-thinning rheology. In analogy with a well-known approach for turbulent clear-water flows, the study compares the steady flow depth profiles predicted by approximated models with those of the Full Dynamic Wave one. For all the models and assuming an infinitely wide channel, the analytical solution of the flow depth profiles, in terms of hypergeometric functions, is derived. The accuracy of the approximated models is assessed by computing the average, along the channel length, of the errors for several values of the Froude and kinematic wave numbers. Assuming a threshold error value of 5%, the applicability conditions of the two approximations have been identified for several values of the power-law exponent, showing the crucial role of the rheology. The comparison with the clear-water results indicates that applicability criteria for clear-water flows do not apply to shear-thinning fluids, potentially leading to an incorrect use of approximated models if the rheology is not properly accounted for.
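
The applicability test described (channel-averaged relative error against a 5% threshold) can be sketched as follows; the depth profiles here are arbitrary arrays standing in for the paper's hypergeometric-function solutions.

```python
# Average, along the channel, of the relative error between an approximated
# flow-depth profile and the full dynamic one, checked against a threshold.
import numpy as np

def average_relative_error(h_approx, h_full):
    h_approx, h_full = np.asarray(h_approx, float), np.asarray(h_full, float)
    return float(np.mean(np.abs(h_approx - h_full) / h_full))

def applicable(h_approx, h_full, threshold=0.05):
    """True if the approximated model meets the accuracy threshold."""
    return average_relative_error(h_approx, h_full) <= threshold

# Made-up depth profiles (m) at four stations along the channel:
h_full = np.array([1.00, 1.10, 1.25, 1.45])     # full dynamic wave
h_kin = np.array([1.02, 1.13, 1.30, 1.55])      # kinematic-wave approximation
ok = applicable(h_kin, h_full)
```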

  5. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1998-01-01

    We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including model skill assessment and objective analysis. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.
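
A one-dimensional cartoon of the distortion idea: instead of taking pointwise differences, express forecast error as a displacement plus an amplitude factor, found here by brute-force search. The project's 2-D variational algorithm is considerably more elaborate; everything below is illustrative.

```python
# Represent forecast error as (integer shift, amplitude) minimizing residual RMS.
import numpy as np

def best_distortion(forecast, analysis, max_shift=5):
    """Search integer displacements; fit amplitude by least squares at each."""
    best = None
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(forecast, shift)
        amp = float(np.dot(shifted, analysis) / np.dot(shifted, shifted))
        rms = float(np.sqrt(np.mean((amp * shifted - analysis) ** 2)))
        if best is None or rms < best[2]:
            best = (shift, amp, rms)
    return best

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
analysis = np.sin(x)                         # "truth" field
forecast = 0.8 * np.roll(analysis, -3)       # displaced and damped "forecast"
shift, amp, rms = best_distortion(forecast, analysis)
```

For this synthetic pair the search recovers the imposed displacement exactly and the residual after distortion is zero, whereas the pointwise error of the raw forecast is large.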

  6. Benefit transfer and spatial heterogeneity of preferences for water quality improvements.

    PubMed

    Martin-Ortega, J; Brouwer, R; Ojea, E; Berbel, J

    2012-09-15

    The improvement in water quality resulting from the implementation of the EU Water Framework Directive is expected to generate substantial non-market benefits. Widespread estimation of these benefits across Europe will require the application of benefit transfer. We use a spatially explicit valuation design to account for the spatial heterogeneity of preferences to help generate lower transfer errors. A map-based choice experiment is applied in the Guadalquivir River Basin (Spain), accounting simultaneously for the spatial distribution of water quality improvements and beneficiaries. Our results show that accounting for the spatial heterogeneity of preferences generally produces lower transfer errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
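
The transfer-error measure behind results like these is simple; below is a sketch with invented willingness-to-pay (WTP) figures, not the Guadalquivir estimates.

```python
# Benefit-transfer error: relative difference between the value transferred
# from a study site and the value estimated at the policy site.

def transfer_error(wtp_transferred, wtp_policy_site):
    """Absolute percentage transfer error."""
    return abs(wtp_transferred - wtp_policy_site) / wtp_policy_site * 100.0

# One pooled WTP ignoring spatial heterogeneity vs. a spatially explicit
# transfer matched to the beneficiaries' zone (illustrative numbers):
pooled = transfer_error(48.0, 40.0)      # 20% error
spatial = transfer_error(42.0, 40.0)     # 5% error
```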

  7. Predictive accuracy of a ground-water model--Lessons from a postaudit

    USGS Publications Warehouse

    Konikow, Leonard F.

    1986-01-01

    Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.
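
The postaudit's headline statistics, a mean (bias) and a standard deviation of prediction errors across wells, amount to the following; the numbers are synthetic, not the study's 77 wells.

```python
# Postaudit summary: mean and standard deviation of (predicted - observed)
# water-level declines across observation wells.
import statistics

def prediction_errors(predicted, observed):
    return [p - o for p, o in zip(predicted, observed)]

predicted = [120.0, 95.0, 150.0, 80.0]   # ft of decline, hypothetical
observed = [60.0, 30.0, 70.0, 25.0]
errs = prediction_errors(predicted, observed)
bias = statistics.mean(errs)             # positive => declines over-predicted
spread = statistics.stdev(errs)          # well-to-well scatter of the error
```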

  8. Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems

    NASA Astrophysics Data System (ADS)

    Stewart, Michael K.; Morgenstern, Uwe; Gusyev, Maksym A.; Małoszewski, Piotr

    2017-09-01

    Kirchner (2016a) demonstrated that aggregation errors due to spatial heterogeneity, represented by two homogeneous subcatchments, could cause severe underestimation of the mean transit times (MTTs) of water travelling through catchments when simple lumped parameter models were applied to interpret seasonal tracer cycle data. Here we examine the effects of such errors on the MTTs and young water fractions estimated using tritium concentrations in two-part hydrological systems. We find that MTTs derived from tritium concentrations in streamflow are just as susceptible to aggregation bias as those from seasonal tracer cycles. Likewise, groundwater wells or springs fed by two or more water sources with different MTTs will also have aggregation bias. However, the transit times over which the biases are manifested are different because the two methods are applicable over different time ranges, up to 5 years for seasonal tracer cycles and up to 200 years for tritium concentrations. Our virtual experiments with two water components show that the aggregation errors are larger when the MTT differences between the components are larger and the amounts of the components are each close to 50 % of the mixture. We also find that young water fractions derived from tritium (based on a young water threshold of 18 years) are almost immune to aggregation errors as were those derived from seasonal tracer cycles with a threshold of about 2 months.
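
The aggregation bias can be demonstrated with a deliberately minimal virtual experiment: mix two piston-flow waters with different mean transit times (MTTs) and infer a single "lumped" MTT from the mixed tritium concentration. Only radioactive decay (half-life 12.32 yr) is modeled; real interpretations also require the tritium input history, so this is a sketch of the mechanism, not the paper's method.

```python
# Aggregation bias: the MTT inferred from a two-component tritium mixture
# underestimates the true mean of the component MTTs.
import math

HALF_LIFE = 12.32                       # tritium half-life, years
LAMBDA = math.log(2) / HALF_LIFE

def tritium_ratio(mtt):
    """Fraction of input tritium remaining after piston-flow transit of mtt years."""
    return math.exp(-LAMBDA * mtt)

def lumped_mtt(ratio):
    """MTT a single piston-flow model would infer from a measured ratio."""
    return -math.log(ratio) / LAMBDA

mtt_a, mtt_b = 5.0, 80.0                # two subsystems, years
true_mean = 0.5 * (mtt_a + mtt_b)       # 42.5 yr
mixed = 0.5 * (tritium_ratio(mtt_a) + tritium_ratio(mtt_b))
apparent = lumped_mtt(mixed)            # well below 42.5 yr: aggregation bias
```

Because the decay curve is convex, the mixture's concentration is dominated by the young component, so the apparent MTT falls far below the true mean, mirroring the underestimation Kirchner identified for seasonal tracer cycles.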

  9. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, the numerical error in the time-cumulative value of ET(sub a) is considered, in addition to the per-time-step error examined in the previous study. This cumulative ET(sub a) is more relevant to the final crop yield.
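
The numerical issue at stake can be sketched with a linear stress function: with ET(sub a) = ETc * W / Wc, the water balance dW/dt = -ET(sub a) has the exact solution W(t) = W0 * exp(-ETc * t / Wc), so the error of an explicit Euler scheme can be measured directly. Symbols follow common usage and the values are illustrative, not those of either paper.

```python
# Time-step sensitivity of a numerical soil-water drydown under linear stress.
import math

def exact_water(w0, etc, wc, t):
    """Analytical solution of dW/dt = -ETc * W / Wc."""
    return w0 * math.exp(-etc * t / wc)

def euler_water(w0, etc, wc, t, dt):
    """Explicit Euler integration of the same equation."""
    w, steps = w0, round(t / dt)
    for _ in range(steps):
        w -= dt * etc * w / wc
    return w

w0, etc, wc, t = 60.0, 6.0, 100.0, 10.0   # mm, mm/day, mm, days
err_coarse = abs(euler_water(w0, etc, wc, t, 5.0) - exact_water(w0, etc, wc, t))
err_fine = abs(euler_water(w0, etc, wc, t, 0.1) - exact_water(w0, etc, wc, t))
```

Shrinking the time step from 5 days to 0.1 day cuts the end-of-period error by well over an order of magnitude, and because the error compounds through the drydown, the cumulative ET error behaves likewise.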

  10. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating.
We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operating forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models.
References
[1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013.
[2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
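
One elementary technique from the redundant-data family the abstract surveys is inverse-variance weighting, which fuses two estimates of the same quantity and propagates their uncertainties into the fused value. The chlorophyll-a framing and the numbers are illustrative only.

```python
# Inverse-variance fusion of two redundant measurements, e.g. a satellite
# chlorophyll-a retrieval and an in-situ sample of the same water body.

def fuse(value_a, var_a, value_b, var_b):
    """Inverse-variance weighted mean and its (reduced) variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * value_a + w_b * value_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Remote-sensing estimate 12.0 (variance 4.0) vs. in-situ 10.0 (variance 1.0):
value, variance = fuse(12.0, 4.0, 10.0, 1.0)
```

The fused variance is smaller than either input variance, which is the formal sense in which redundant fusion yields a "more complete and robust" data resource.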

  11. Documentation of a spreadsheet for time-series analysis and drawdown estimation

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. 
The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
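
The synthetic-water-level idea reduces to a regression: fit a weighted sum of stress time series (here just barometric pressure) to water levels over a pre-test period, then estimate drawdown as measured minus synthetic. Least squares stands in for the spreadsheet's sum-of-squares fitting, and the data are fabricated.

```python
# Fit synthetic (non-pumping) water levels and estimate drawdown by difference.
import numpy as np

def fit_synthetic(stresses, water_level):
    """Fit water_level ~ intercept + weighted stress series; return coefficients."""
    A = np.column_stack([np.ones(len(water_level))] + list(stresses))
    coef, *_ = np.linalg.lstsq(A, np.asarray(water_level, float), rcond=None)
    return coef

def synthetic_level(coef, stresses):
    A = np.column_stack([np.ones(len(stresses[0]))] + list(stresses))
    return A @ coef

# Pre-test fitting period: the level responds only to barometric pressure.
baro = np.array([0.0, 0.3, -0.2, 0.5, 0.1])
level = 10.0 - 0.4 * baro               # true barometric efficiency of 0.4
coef = fit_synthetic([baro], level)
drawdown = level - synthetic_level(coef, [baro])   # ~0 before pumping starts
```

In an unaffected period the residual drawdown is near zero; once pumping begins, the same coefficients extend the synthetic series forward and the residual becomes the drawdown estimate.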

  12. A framework for human-hydrologic system model development integrating hydrology and water management: application to the Cutzamala water system in Mexico

    NASA Astrophysics Data System (ADS)

    Wi, S.; Freeman, S.; Brown, C.

    2017-12-01

    This study presents a general approach to developing computational models of human-hydrologic systems where human modification of hydrologic surface processes is significant or dominant. A river basin system is represented by a network of human-hydrologic response units (HHRUs) identified based on locations where river regulation happens (e.g., reservoir operation and diversions). Natural and human processes in HHRUs are simulated in a holistic framework that integrates component models representing rainfall-runoff, river routing, reservoir operation, flow diversion and water use processes. We illustrate the approach in a case study of the Cutzamala water system (CWS) in Mexico, a complex inter-basin water transfer system supplying the Mexico City Metropolitan Area (MCMA). The human-hydrologic system model for the CWS (CUTZSIM) is evaluated against streamflow and reservoir storages measured across the CWS and against water supplied to the MCMA. The CUTZSIM improves the representation of hydrology and river-operation interaction and, in so doing, advances evaluation of system-wide water management consequences under altered climatic and demand regimes. The integrated modeling framework enables evaluation and simulation of model errors throughout the river basin, including errors in representation of the human component processes. Heretofore, model error evaluation, predictive error intervals and the resultant improved understanding have been limited to hydrologic processes. The general framework represents an initial step towards fuller understanding and prediction of the many and varied processes that determine the hydrologic fluxes and state variables in real river basins.
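
A toy of the HHRU idea, stripped to one unit: a mass-balance reservoir with a fixed release rule feeding a downstream diversion. This is entirely schematic and bears no relation to the actual Cutzamala operating rules.

```python
# One human-hydrologic response unit: reservoir mass balance with spill,
# followed by a capped downstream diversion for water supply.

def reservoir_step(storage, inflow, target_release, capacity):
    """One time step: release what the rule asks for (if available), spill excess."""
    release = min(target_release, storage + inflow)
    storage = storage + inflow - release
    spill = max(0.0, storage - capacity)
    return storage - spill, release + spill

storage, capacity = 50.0, 100.0
inflows = [10.0, 80.0, 5.0]              # natural inflow per step (hm3), made up
released = []
for q in inflows:
    storage, out = reservoir_step(storage, q, target_release=20.0, capacity=capacity)
    released.append(out)
diverted = [min(out, 15.0) for out in released]   # supply diversion, capped at 15
```

Chaining such units along a river network, each with its own rainfall-runoff input and operating rule, is the structural pattern the framework formalizes.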

  13. Theoretical Calculation and Validation of the Water Vapor Continuum Absorption

    NASA Technical Reports Server (NTRS)

    Ma, Qiancheng; Tipping, Richard H.

    1998-01-01

    The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first principles, basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multispectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals, and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance, which produces errors with a seasonal and latitude dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which, compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing, is certainly climatologically significant and unacceptably large. 
While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, precludes tuning of the empirical continuum models over the full spectral range of interest for remote sensing and climate applications. Thus, we propose to further develop and refine our existing far-wing formalism to provide an improved treatment applicable from the near-infrared through the microwave. Based on the results of this investigation, we will provide to the remote sensing/climate modeling community a practical and accurate tabulation of the continuum absorption covering the near-infrared through the microwave region of the spectrum for the range of temperatures and pressures of interest for atmospheric applications.

  15. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

    Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, the advantages of near-infrared spectroscopy can be exploited: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its suitability for routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example moisture content and particle size. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study deals with investigating a naproxen sodium granulation process using near-infrared spectroscopy and the development of water-content and particle-size methods. For the water content method, a maximum water content of about 21% in the granulation process should be considered, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of naproxen sodium when using a maximum of about 21% water content, and the large quantity of about 87% naproxen sodium in the formulation. These influences were considered in combination when developing the near-infrared spectroscopy method for the water content of naproxen sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. 
Using laser diffraction, the granules were analyzed for particle size, obtaining the cumulative sieve fractions of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a root mean square error of 2.54% for the calibration data set and 3.53% for the validation set when using the Kubelka-Munk conversion and first derivative for the near-infrared spectroscopy method for particle sizes >63 μm. For the near-infrared spectroscopy method for particle sizes >100 μm, the root mean square error was 3.47% for the calibration data set and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
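
The figures of merit quoted above (root mean square errors of calibration and of validation, often written RMSEC and RMSEP) are computed the same way on different data splits. The sketch below shows just that computation; the reference/predicted values are invented stand-ins for the loss-on-drying vs. NIR-predicted water contents.

```python
# RMSEC / RMSEP: the same RMS error on calibration vs. validation splits.
import math

def rms_error(reference, predicted):
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted))
                     / len(reference))

# Reference (loss on drying) vs. NIR-predicted water content, % w/w:
cal_ref, cal_pred = [5.0, 10.0, 15.0, 20.0], [5.2, 9.8, 15.3, 19.7]
val_ref, val_pred = [7.0, 12.0, 18.0], [7.4, 11.5, 18.4]
rmsec = rms_error(cal_ref, cal_pred)    # calibration error
rmsep = rms_error(val_ref, val_pred)    # validation (prediction) error
```

As in the study, the validation error runs somewhat higher than the calibration error; a large gap between the two would indicate overfitting of the chemometric model.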

  16. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    NASA Astrophysics Data System (ADS)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (~5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
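
One common way to see how observable errors propagate "through the discharge equation" is first-order propagation through a Manning-type relation Q ∝ W · H^(5/3) · S^(1/2). The sketch below uses that generic relation with independent errors combined in quadrature; the mission's actual discharge algorithms are more elaborate, and the percentages are illustrative.

```python
# First-order propagation of width (W), depth (H) and slope (S) relative
# errors through Q ~ W * H**(5/3) * S**(1/2), combined in quadrature.
import math

def relative_discharge_error(rel_w, rel_h, rel_s):
    """Relative discharge error from independent relative input errors."""
    return math.sqrt(rel_w ** 2 + ((5.0 / 3.0) * rel_h) ** 2
                     + (0.5 * rel_s) ** 2)

# e.g. 5% width, 3% depth, 8% slope error:
err = relative_discharge_error(0.05, 0.03, 0.08)
```

The exponents act as sensitivity weights: depth errors are amplified by 5/3 while slope errors are halved, which is why reach definitions that improve slope and height accuracy pay off directly in discharge.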

  17. Methodology of functionality selection for water management software and examples of its application.

    PubMed

    Vasilyev, K N

    2013-01-01

    When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included in the software product. Evaluation is achieved by considering each criterion in turn, together with its weighting coefficient, and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
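
A weighted-criteria ranking in the spirit described can be sketched as follows; the criteria names, weights and scores below are placeholders, not the paper's seven criteria.

```python
# Rank candidate software functions: normalise raw scores per function,
# then take a weighted sum over the criteria.

def normalise(scores):
    total = sum(scores)
    return [s / total for s in scores]

def rank_functions(candidates, weights):
    """candidates: {name: [score per criterion]}; weights sum to 1."""
    ranked = {name: sum(w * s for w, s in zip(weights, normalise(scores)))
              for name, scores in candidates.items()}
    return sorted(ranked, key=ranked.get, reverse=True)

weights = [0.5, 0.3, 0.2]                # e.g. user demand, cost, risk (made up)
candidates = {
    "flood_mapping": [8, 5, 6],
    "report_export": [4, 9, 7],
}
order = rank_functions(candidates, weights)
```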

  18. Application of Difference-in-Difference Techniques to the Evaluation of Drought-Tainted Water Conservation Programs.

    ERIC Educational Resources Information Center

    Bamezai, Anil

    1995-01-01

    Some of the threats to internal validity that arise when evaluating the impact of water conservation programs during a drought are illustrated. These include differential response to the drought, self-selection bias, and measurement error. How to deal with these problems when high-quality disaggregate data are available is discussed. (SLD)

  19. Quantification of sewer system infiltration using delta(18)O hydrograph separation.

    PubMed

    Prigiobbe, V; Giulianelli, M

    2009-01-01

    The infiltration of parasitical water into two sewer systems in Rome (Italy) was quantified during a dry weather period. Infiltration was estimated using the hydrograph separation method with two water components and delta(18)O as a conservative tracer. The two water components were groundwater, the possible source of parasitical water within the sewer, and drinking water discharged into the sewer system. This method was applied at an urban catchment scale in order to test the effective water-tightness of two different sewer networks. The sampling strategy was based on an uncertainty analysis and the errors have been propagated using Monte Carlo random sampling. Our field applications showed that the method can be applied easily and quickly, but the error in the estimated infiltration rate can be up to 20%. The estimated infiltration into the recent sewer in Torraccia is 14% and can be considered negligible given the precision of the method, while the old sewer in Infernetto has an estimated infiltration of 50%.
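The two-component separation used in this record reduces to an end-member mixing calculation, and its Monte Carlo error propagation can be sketched in a few lines. A minimal illustration in Python; the delta(18)O end-member values and uncertainties below are invented for demonstration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

def infiltration_fraction(d18o_sewage, d18o_groundwater, d18o_drinking):
    """Two-component hydrograph separation: fraction of groundwater
    (infiltration) in the mixed sewer flow."""
    return (d18o_sewage - d18o_drinking) / (d18o_groundwater - d18o_drinking)

# Hypothetical end-member values (per mil, VSMOW) with measurement noise:
n = 100_000
d_sew = rng.normal(-6.5, 0.1, n)   # mixed sewer water
d_gw  = rng.normal(-7.5, 0.1, n)   # groundwater end member
d_dw  = rng.normal(-6.0, 0.1, n)   # drinking-water end member

f = infiltration_fraction(d_sew, d_gw, d_dw)
print(f"infiltration fraction: {f.mean():.2f} +/- {f.std():.2f}")
```

Drawing the end members from distributions and pushing them through the mixing equation is one simple way to propagate sampling uncertainty into the estimated infiltration rate, in the spirit of the Monte Carlo analysis described above.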

  20. MODFLOW-2000, the U.S. Geological Survey modular ground-water model -- Documentation of MOD-PREDICT for predictions, prediction sensitivity analysis, and evaluation of uncertainty

    USGS Publications Warehouse

    Tonkin, M.J.; Hill, Mary C.; Doherty, John

    2003-01-01

    This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.

  1. Waterbodies Extraction from LANDSAT8-OLI Imagery Using a Water Index-Guided Stochastic Fully-Connected Conditional Random Field Model and the Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, X.; Xu, L.

    2018-04-01

    One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. But conventional WI methods take into account spectral information from only a limited number of bands, and therefore the accuracy of those WI methods may be constrained in some areas which are covered with snow/ice, clouds, etc. An accurate and robust water extraction method thus remains a key need. The support vector machine (SVM), which uses spectral information from all bands, can reduce these classification errors to some extent. Nevertheless, the SVM, which barely considers spatial information, is relatively sensitive to noise in local regions. The conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method by taking advantage of the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 operational land imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: Omission Errors (OE) and Commission Errors (CE); Kappa coefficient (KP) and Total Error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
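Water-index thresholding of the kind this record builds on can be sketched with a normalized difference index. A minimal illustration using a McFeeters-style NDWI (the paper's exact WI formulation may differ); the reflectance values are invented:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index, (green - NIR) / (green + NIR).
    Water is typically bright in green and dark in near-infrared, so
    positive values suggest water."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir)

# Toy per-pixel reflectances: one water pixel, two non-water pixels.
green = np.array([0.10, 0.08, 0.20])
nir   = np.array([0.02, 0.30, 0.25])
water_mask = ndwi(green, nir) > 0.0
```

A fixed zero threshold is only a starting point; as the abstract notes, snow/ice and clouds can confound such single-index rules, which motivates combining the index with classifiers like the SVM and CRF.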

  2. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
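The exact covariance propagation that distinguishes the Kalman filter from optimal interpolation can be sketched for a toy linear system. A minimal single forecast/analysis cycle (not the paper's banded shallow-water implementation); the matrices below are illustrative:

```python
import numpy as np

def kalman_step(x, P, F, H, Q, R, z):
    """One forecast/analysis cycle. The forecast error covariance P is
    propagated exactly through the (linear) model dynamics F rather than
    approximated as in optimal interpolation."""
    # Forecast step
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # Analysis step: innovation covariance, gain matrix, updates
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Toy 2-variable example: identity dynamics, direct observations.
x = np.zeros(2)
P = np.eye(2)
F = H = np.eye(2)
Q = 0.01 * np.eye(2)
R = np.eye(2)
x_a, P_a = kalman_step(x, P, F, H, Q, R, z=np.array([1.0, 1.0]))
```

Repeating `kalman_step` over time gives the generalized time step described above; the banding mentioned in the abstract would correspond to dropping near-zero elements of `P` for efficiency.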

  3. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.

  4. Application of RBFN network and GM (1, 1) for groundwater level simulation

    NASA Astrophysics Data System (ADS)

    Li, Zijun; Yang, Qingchun; Wang, Luchen; Martín, Jordi Delgado

    2017-10-01

    Groundwater is a prominent source of drinking and domestic water throughout the world. In this context, a feasible water resources management plan necessitates acceptable predictions of groundwater table depth fluctuations, which can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. Due to the difficulties of identifying non-linear model structure and estimating the associated parameters, in this study radial basis function neural network (RBFNN) and GM (1, 1) models are used for the prediction of monthly groundwater level fluctuations in the city of Longyan, Fujian Province (South China). The monthly groundwater level data monitored from January 2003 to December 2011 are used in both models. The error criteria are estimated using the coefficient of determination (R2), mean absolute error (MAE) and root mean squared error (RMSE). The results show that both models can forecast the groundwater level with fairly high accuracy, but the RBFNN model can be a promising tool to simulate and forecast groundwater levels since it has a relatively smaller RMSE and MAE.
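The three error criteria named in this record are straightforward to compute; a minimal sketch with invented observed and simulated groundwater levels:

```python
import numpy as np

def r2(obs, sim):
    """Coefficient of determination."""
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(obs, sim):
    """Mean absolute error."""
    return np.mean(np.abs(obs - sim))

def rmse(obs, sim):
    """Root mean squared error."""
    return np.sqrt(np.mean((obs - sim) ** 2))

# Hypothetical observed vs. simulated monthly levels (m):
obs = np.array([1.0, 2.0, 3.0, 4.0])
sim = np.array([1.1, 1.9, 3.2, 3.8])
```

RMSE penalizes large misses more heavily than MAE, which is why a model can rank differently on the two criteria even over the same record.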

  5. Experimental determination of a Viviparus contectus thermometry equation.

    PubMed

    Bugler, Melanie J; Grimes, Stephen T; Leng, Melanie J; Rundle, Simon D; Price, Gregory D; Hooker, Jerry J; Collinson, Margaret E

    2009-09-01

    Experimental measurements of the (18)O/(16)O isotope fractionation between the biogenic aragonite of Viviparus contectus (Gastropoda) and its host freshwater were undertaken to generate a species-specific thermometry equation. The temperature dependence of the fractionation factor and the relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature were calculated from specimens maintained under laboratory and field (collection and cage) conditions. The field specimens were grown (Somerset, UK) between August 2007 and August 2008, with water samples and temperature measurements taken monthly. Specimens grown in the laboratory experiment were maintained under constant temperatures (15 degrees C, 20 degrees C and 25 degrees C) with water samples collected weekly. Application of a linear regression to the datasets indicated that the gradients of all three experiments were within experimental error of each other (+/-2 times the standard error); therefore, a combined (laboratory and field data) correlation could be applied. The relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature (T) for this combined dataset is given by: T = -7.43 (+0.87, -1.13) * Deltadelta(18)O + 22.89 (+/- 2.09), where T is in degrees C, delta(18)O(carb.) is with respect to Vienna Pee Dee Belemnite (VPDB), delta(18)O(water) is with respect to Vienna Standard Mean Ocean Water (VSMOW), and quoted errors are 2 times the standard error. Comparisons made with existing aragonitic thermometry equations reveal that the linear regression for the combined Viviparus contectus equation is within 2 times the standard error of previously reported aragonitic thermometry equations. This suggests there are no species-specific vital effects for Viviparus contectus. Seasonal delta(18)O(carb.) profiles from specimens retrieved from the field cage experiment indicate that during shell secretion the delta(18)O(carb.) of the shell carbonate is not influenced by size, sex or whether females contained eggs or juveniles. Copyright (c) 2009 John Wiley & Sons, Ltd.
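The combined thermometry equation reported in this record can be applied directly; a small sketch with hypothetical isotope values (not measurements from the study):

```python
def viviparus_temperature(d18o_carb, d18o_water):
    """Combined V. contectus thermometry equation from this record:
    T (deg C) = -7.43 * (d18O_carb - d18O_water) + 22.89,
    with d18O_carb vs. VPDB and d18O_water vs. VSMOW."""
    return -7.43 * (d18o_carb - d18o_water) + 22.89

# Hypothetical shell and water compositions (per mil), for illustration only:
t = viviparus_temperature(-5.0, -6.5)
```

Note that the quoted uncertainties on the gradient (+0.87, -1.13) and intercept (+/- 2.09) propagate into any temperature estimated this way.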

  6. Advanced Microwave Radiometer (AMR) for SWOT mission

    NASA Astrophysics Data System (ADS)

    Chae, C. S.

    2015-12-01

    The objective of the SWOT (Surface Water & Ocean Topography) satellite mission is to measure wide-swath, high-resolution ocean topography and terrestrial surface waters. Since the main payload radar will use interferometric SAR technology, a conventional microwave radiometer system with a single nadir-looking antenna beam (e.g., the OSTM/Jason-2 AMR) is not ideally suited to the mission's wet tropospheric delay correction. Therefore, the SWOT AMR incorporates two antenna beams along the cross-track direction. In addition to the cross-track design of the AMR radiometer, the wet tropospheric error requirement is expressed in the spatial frequency domain (in cycles/km), in other words, as a power spectral density (PSD). Thus, instrument error allocation and design are carried out in terms of PSD, which is not the conventional approach for microwave radiometer requirement allocation and design. A few of the novel analyses include: 1. the effects of antenna beam size on PSD error and land/ocean contamination; 2. receiver error allocation and the contributions of radiometric count averaging, NEDT, gain variation, etc.; 3. the effect of the thermal design in the frequency domain. In the presentation, detailed AMR design and analysis results will be discussed.

  7. Application of an optimization algorithm to satellite ocean color imagery: A case study in Southwest Florida coastal waters

    NASA Astrophysics Data System (ADS)

    Hu, Chuanmin; Lee, Zhongping; Muller-Karger, Frank E.; Carder, Kendall L.

    2003-05-01

    A spectra-matching optimization algorithm, designed for hyperspectral sensors, has been implemented to process SeaWiFS-derived multi-spectral water-leaving radiance data. The algorithm has been tested over Southwest Florida coastal waters. The total spectral absorption and backscattering coefficients can be well partitioned with the inversion algorithm, resulting in RMS errors generally less than 5% in the modeled spectra. For extremely turbid waters that come from either river runoff or sediment resuspension, the RMS error is in the range of 5-15%. The bio-optical parameters derived in this optically complex environment agree well with those obtained in situ. Further, the ability to separate backscattering (a proxy for turbidity) from the satellite signal makes it possible to trace water movement patterns, as indicated by the total absorption imagery. The derived patterns agree with those from concurrent surface drifters. For waters where CDOM overwhelmingly dominates the optical signal, however, the procedure tends to regard CDOM as the sole source of absorption, implying the need for better atmospheric correction and for adjustment of some model coefficients for this particular region.

  8. Parameter optimization of the QUAL2K model for a multiple-reach river using an influence coefficient algorithm.

    PubMed

    Cho, Jae Heon; Ha, Sung Ryong

    2010-03-15

    An influence coefficient algorithm and a genetic algorithm (GA) were introduced to develop an automatic calibration model for QUAL2K, the latest version of the QUAL2E river and stream water-quality model. The influence coefficient algorithm was used for the parameter optimization in unsteady-state, open channel flow. The GA, used in solving the optimization problem, is very simple and comprehensible yet still applicable to any complicated mathematical problem, and it can find the global-optimum solution quickly and effectively. The previously established model QUAL2Kw was used for the automatic calibration of the QUAL2K. The parameter-optimization method using the influence coefficient and genetic algorithm (POMIG) developed in this study and QUAL2Kw were each applied to the Gangneung Namdaecheon River, which has multiple reaches, and the results of the two models were compared. In the modeling, the river reach was divided into two parts based on considerations of the water quality and hydraulic characteristics. The calibration results by POMIG showed a good correspondence between the calculated and observed values for most of the water-quality variables. In the application of POMIG and QUAL2Kw, relatively large errors were generated between the observed and predicted values in the case of the dissolved oxygen (DO) and chlorophyll-a (Chl-a) in the lowest part of the river; therefore, two weighting factors (1 and 5) were applied for DO and Chl-a in the lower river. The sums of the errors for DO and Chl-a with a weighting factor of 5 were slightly lower compared with the application of a factor of 1. However, with a weighting factor of 5 the sums of errors for other water-quality variables were slightly increased in comparison to the case with a factor of 1. Generally, the results of the POMIG were slightly better than those of the QUAL2Kw.

  9. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
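The covariance bookkeeping behind a propagation-of-error analysis like this one can be sketched numerically. A toy example (invented error series, not Skylab data) that computes how much of the total balance-error variance comes from interaction (covariance) terms:

```python
import numpy as np

def covariance_share(term_errors):
    """Given per-term error samples (rows = balance terms), return the
    total variance of the summed balance and the fraction contributed by
    covariances: Var(sum) = sum of variances + 2 * sum of covariances."""
    cov = np.cov(term_errors)
    var_part = np.trace(cov)      # individual-term variances
    total_var = cov.sum()         # includes all pairwise covariances
    return total_var, 1.0 - var_part / total_var

# Two perfectly correlated toy error series:
errors = np.array([[1.0, -1.0, 1.0, -1.0],
                   [2.0, -2.0, 2.0, -2.0]])
total_var, share = covariance_share(errors)
```

In the Skylab analysis the covariance share was under 10%; in this deliberately correlated toy example it is much larger, which illustrates why the interaction terms must be checked rather than assumed negligible.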

  10. Phytoplankton pigment concentrations in the Middle Atlantic Bight - Comparison of ship determinations and CZCS estimates. [Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Brown, J. W.; Clark, D. K.; Brown, O. B.; Evans, R. H.; Broenkow, W. W.

    1983-01-01

    The processing algorithms used for relating the apparent color of the ocean observed with the Coastal-Zone Color Scanner on Nimbus-7 to the concentration of phytoplankton pigments (principally the pigment responsible for photosynthesis, chlorophyll-a) are developed and discussed in detail. These algorithms are applied to the shelf and slope waters of the Middle Atlantic Bight and also to Sargasso Sea waters. In all, four images are examined, and the resulting pigment concentrations are compared to continuous measurements made along ship tracks. The results suggest that over the 0.08-1.5 mg/cu m range, the error in the retrieved pigment concentration is of the order of 30-40% for a variety of atmospheric turbidities. In three direct comparisons between ship-measured and satellite-retrieved values of the water-leaving radiance, the atmospheric correction algorithm retrieved the water-leaving radiance with an average error of about 10%. This atmospheric correction algorithm does not require any surface measurements for its application.

  11. On the merging of optical and SAR satellite imagery for surface water mapping applications

    NASA Astrophysics Data System (ADS)

    Markert, Kel N.; Chishtie, Farrukh; Anderson, Eric R.; Saah, David; Griffin, Robert E.

    2018-06-01

    Optical and Synthetic Aperture Radar (SAR) imagery from satellite platforms provide a means to discretely map surface water; however, the application of the two data sources in tandem has been inhibited by inconsistent data availability, the distinct physical properties that optical and SAR instruments sense, and dissimilar data delivery platforms. In this paper, we describe a preliminary methodology for merging optical and SAR data into a common data space. We apply our approach over a portion of the Mekong Basin, a region with highly variable surface water cover and persistent cloud cover, for surface water applications requiring dense time series analysis. The methods include the derivation of a representative index from both sensors that transforms data from disparate physical units (reflectance and backscatter) to a comparable dimensionless space, and the application of a consistent water extraction approach to both datasets. The merging of optical and SAR data allows for increased observations in cloud-prone regions that can be used to gain additional insight into surface water dynamics or flood mapping applications. This preliminary methodology shows promise for a common optical-SAR water extraction; however, data ranges and thresholding values can vary depending on data source, yielding classification errors in the resulting surface water maps. We discuss some potential future approaches to address these inconsistencies.

  12. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
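Localization of the kind this record applies can be illustrated with an element-wise taper on an error covariance matrix. A minimal sketch (using a simple Gaussian taper rather than the compactly supported functions often used operationally; the grid and covariance values are invented):

```python
import numpy as np

def localize(cov, dist, radius):
    """Schur (element-wise) product of an error covariance matrix with a
    distance-based taper, damping spurious long-range correlations."""
    taper = np.exp(-(dist / radius) ** 2)
    return cov * taper

# Toy 1-D grid of 5 points with strong long-range correlations:
dist = np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0)))
cov = np.full((5, 5), 0.8) + 0.2 * np.eye(5)
loc = localize(cov, dist, radius=2.0)
```

The taper leaves the variances (diagonal) untouched while shrinking distant off-diagonal covariances toward zero, which is what stabilizes the ensemble update when sample covariances are noisy.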

  13. Global modeling of land water and energy balances. Part I: The land dynamics (LaD) model

    USGS Publications Warehouse

    Milly, P.C.D.; Shmakin, A.B.

    2002-01-01

    A simple model of large-scale land (continental) water and energy balances is presented. The model is an extension of an earlier scheme with a record of successful application in climate modeling. The most important changes from the original model include 1) introduction of non-water-stressed stomatal control of transpiration, in order to correct a tendency toward excessive evaporation; 2) conversion from globally constant parameters (with the exception of vegetation-dependent snow-free surface albedo) to more complete vegetation and soil dependence of all parameters, in order to provide more realistic representation of geographic variations in water and energy balances and to enable model-based investigations of land-cover change; 3) introduction of soil sensible heat storage and transport, in order to move toward realistic diurnal-cycle modeling; 4) a groundwater (saturated-zone) storage reservoir, in order to provide more realistic temporal variability of runoff; and 5) a rudimentary runoff-routing scheme for delivery of runoff to the ocean, in order to provide realistic freshwater forcing of the ocean general circulation model component of a global climate model. The new model is tested with forcing from the International Satellite Land Surface Climatology Project Initiative I global dataset and a recently produced observation-based water-balance dataset for major river basins of the world. Model performance is evaluated by comparing computed and observed runoff ratios from many major river basins of the world. Special attention is given to distinguishing between two components of the apparent runoff ratio error: the part due to intrinsic model error and the part due to errors in the assumed precipitation forcing. The pattern of discrepancies between modeled and observed runoff ratios is consistent with results from a companion study of precipitation estimation errors. The new model is tuned by adjustment of a globally constant scale factor for non-water-stressed stomatal resistance. After tuning, significant overestimation of runoff is found in environments where an overall arid climate includes a brief but intense wet season. It is shown that this error may be explained by the neglect of upward soil water diffusion from below the root zone during the dry season. With the exception of such basins, and in the absence of precipitation errors, it is estimated that annual runoff ratios simulated by the model would have a root-mean-square error of about 0.05. The new model matches observations better than its predecessor, which has a negative runoff bias and greater scatter.

  14. A passive microwave technique for estimating rainfall and vertical structure information from space. Part 2: Applications to SSM/I data

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Giglio, Louis

    1994-01-01

    A multichannel physical approach for retrieving rainfall and its vertical structure from Special Sensor Microwave/Imager (SSM/I) observations is examined. While a companion paper was devoted exclusively to the description of the algorithm, its strengths, and its limitations, the main focus of this paper is to report on the results, applicability, and expected accuracies from this algorithm. Some examples are given that compare retrieved results with ground-based radar data from different geographical regions to illustrate the performance and utility of the algorithm under distinct rainfall conditions. More quantitative validation is accomplished using two months of radar data from Darwin, Australia, and the radar network over Japan. Instantaneous comparisons at Darwin indicate that root-mean-square errors for 1.25 deg areas over water are 0.09 mm/h compared to the mean rainfall value of 0.224 mm/h while the correlation exceeds 0.9. Similar results are obtained over the Japanese validation site with rms errors of 0.615 mm/h compared to the mean of 0.0880 mm/h and a correlation of 0.9. Results are less encouraging over land with root-mean-square errors somewhat larger than the mean rain rates and correlations of only 0.71 and 0.62 for Darwin and Japan, respectively. These validation studies are further used in combination with the theoretical treatment of expected accuracies developed in the companion paper to define error estimates on a broader scale than individual radar sites from which the errors may be analyzed. Comparisons with simpler techniques that are based on either emission or scattering measurements are used to illustrate the fact that the current algorithm, while better correlated with the emission methods over water, cannot be reduced to either of these simpler methods.

  15. Automated quantification of surface water inundation in wetlands using optical satellite imagery

    USGS Publications Warehouse

    DeVries, Ben; Huang, Chengquan; Lang, Megan W.; Jones, John W.; Huang, Wenli; Creed, Irena F.; Carroll, Mark L.

    2017-01-01

    We present a fully automated and scalable algorithm for quantifying surface water inundation in wetlands. Requiring no external training data, our algorithm estimates sub-pixel water fraction (SWF) over large areas and long time periods using Landsat data. We tested our SWF algorithm over three wetland sites across North America, including the Prairie Pothole Region, the Delmarva Peninsula and the Everglades, representing a gradient of inundation and vegetation conditions. We estimated SWF at 30-m resolution with accuracies ranging from a normalized root-mean-square-error of 0.11 to 0.19 when compared with various high-resolution ground and airborne datasets. SWF estimates were more sensitive to subtle inundated features compared to previously published surface water datasets, accurately depicting water bodies, large heterogeneously inundated surfaces, narrow water courses and canopy-covered water features. Despite this enhanced sensitivity, several sources of errors affected SWF estimates, including emergent or floating vegetation and forest canopies, shadows from topographic features, urban structures and unmasked clouds. The automated algorithm described in this article allows for the production of high temporal resolution wetland inundation data products to support a broad range of applications.

  16. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
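The Monte Carlo construction of confidence versus prediction intervals described in this record can be sketched with a toy function standing in for a real ground-water model; all parameter ranges and the error magnitude below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Toy stand-in for a calibrated model output (NOT a real flow model):
def predicted_head(k, recharge):
    return recharge / k

# Parameters drawn over their assumed extreme ranges:
k = rng.uniform(1e-4, 5e-4, n)
recharge = rng.uniform(0.2, 0.4, n)
heads = predicted_head(k, recharge)

# Random error in the dependent variable widens prediction intervals:
obs_error = rng.normal(0.0, 100.0, n)

# Quantiles establish the probability levels for the two interval types:
confidence = np.quantile(heads, [0.025, 0.975])
prediction = np.quantile(heads + obs_error, [0.025, 0.975])
```

As in the record's hypothetical example, adding random error in the dependent variable on top of parameter uncertainty produces prediction intervals that are wider than the corresponding confidence intervals.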

  17. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
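The attenuation and regression-calibration ideas in this record can be illustrated with a simulation under a classical measurement error model; every number below is invented for illustration and is not from the OPEN Study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated "true" physical activity level and an error-prone report:
true_pal = rng.normal(1.6, 0.2, n)
reported = true_pal + rng.normal(0.0, 0.3, n)   # classical error
outcome = 2.0 * true_pal + rng.normal(0.0, 1.0, n)

# Naive regression on the error-prone exposure is attenuated...
naive_slope = np.polyfit(reported, outcome, 1)[0]
# ...by the attenuation factor lambda = Var(T) / Var(T + U):
lam = np.var(true_pal) / np.var(reported)
corrected_slope = naive_slope / lam   # simple regression calibration
```

This is the mechanism behind the abstract's conclusion: with attenuation factors of 0.43-0.73, naive effect estimates shrink toward zero, sample sizes must be inflated accordingly, and regression calibration can recover approximately unbiased estimates.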

  18. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  19. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  20. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters.

    PubMed

    Moore, Timothy S; Dowell, Mark D; Bradt, Shane; Verdu, Antonio Ruiz

    2014-03-05

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie.
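    The blending step can be sketched as a membership-weighted average of the per-water-type retrievals. This is a minimal sketch of the general idea only; the membership scores and per-class retrievals below are hypothetical, and the paper's actual optical water type classification is more involved.

```python
def blend_chlorophyll(memberships, estimates):
    """Membership-weighted blend of per-water-type chlorophyll-a retrievals.

    memberships: non-negative class membership scores for one pixel
    estimates:   one chlorophyll-a retrieval per class (e.g. OC4-style for
                 clear-water types, red/NIR 3-band for turbid types)
    """
    total = sum(memberships)
    if total == 0:
        raise ValueError("pixel has no class membership")
    return sum(m * e for m, e in zip(memberships, estimates)) / total

# A pixel mostly belonging to a clear-water type: the blend leans toward
# the blue/green retrieval (values are illustrative).
chl = blend_chlorophyll([0.8, 0.2], [0.5, 12.0])
```

    Weighting by membership rather than hard-switching between algorithms avoids discontinuities at class boundaries, which is consistent with the reduced bias the abstract reports for the blended product.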

  1. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters

    PubMed Central

    Moore, Timothy S.; Dowell, Mark D.; Bradt, Shane; Verdu, Antonio Ruiz

    2014-01-01

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS error of 0.416 and 0.437 for each in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach is used based on an optical water type classification, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie. PMID:24839311

  2. Analytical calculation of electrolyte water content of a Proton Exchange Membrane Fuel Cell for on-board modelling applications

    NASA Astrophysics Data System (ADS)

    Ferrara, Alessandro; Polverino, Pierpaolo; Pianese, Cesare

    2018-06-01

    This paper proposes an analytical model of the water content of the electrolyte of a Proton Exchange Membrane Fuel Cell. The model is designed by accounting for several simplifying assumptions, which make it suitable for on-board/online water management applications while ensuring good accuracy for the considered phenomena with respect to advanced numerical solutions. The achieved analytical solution, expressing electrolyte water content, is compared with that obtained by means of a complex numerical approach used to solve the same mathematical problem. The results show that the mean error is below 5% for electrode water content values ranging from 2 to 15 (given as boundary conditions), and does not exceed 0.26% for electrode water content above 5. These results prove the capability of the solution to correctly model electrolyte water content at any operating condition, with a view to embedding it into more complex frameworks (e.g., cell or stack models) for fuel cell simulation, monitoring, control, diagnosis and prognosis.

  3. Analysis of counting errors in the phase/Doppler particle analyzer

    NASA Technical Reports Server (NTRS)

    Oldenburg, John R.

    1987-01-01

    NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of more than one event occurring during the droplet signal time. The magnitude of the coincidence loss can be determined, and for losses of less than 15 percent, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
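    The Poisson coincidence calculation described above can be written out directly: with mean arrival count mu = rate × signal time, the probability of more than one droplet in one signal window is 1 − P(0) − P(1). The signal duration used below is a hypothetical value for illustration.

```python
import math

def coincidence_loss(rate_hz, signal_time_s):
    """Poisson probability of more than one droplet arriving within one
    signal duration: 1 - P(N=0) - P(N=1), with mu = rate * time."""
    mu = rate_hz * signal_time_s
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# At 2000 samples/sec with an assumed 10-microsecond signal time,
# the coincidence loss is negligible; at 70,000 samples/sec it is not.
loss = coincidence_loss(2000, 10e-6)
```

    Because the loss is a known monotonic function of the input rate, a measured loss below the 15 percent threshold mentioned in the abstract can be corrected analytically.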

  4. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.

  5. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead-times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the needs for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.

  6. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods

    PubMed Central

    Yang, Junjun; He, Zhibin; Du, Jun; Chen, Longfei; Zhu, Xi

    2016-01-01

    In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE. PMID:26963523

  7. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
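    The scald-valve example can be expressed as two tiny fault trees evaluated in code. This is a schematic sketch of the logic only, assuming (as the abstract implies) that the valve shuts off supply whenever the water is over-temperature.

```python
def and_gate(*events):
    """A fault-tree AND gate: the output event occurs only if all inputs do."""
    return all(events)

def patient_scalded(water_too_hot, patient_in_water):
    """Original tree: the undesirable top event needs both causes present."""
    return and_gate(water_too_hot, patient_in_water)

def patient_scalded_with_scald_valve(water_too_hot, patient_in_water):
    """With the scald-valve forcing function, over-temperature closes the
    valve, so the undesirable top event is replaced by the benign
    'no water' event."""
    no_water = water_too_hot          # valve closes on over-temperature
    water_flowing = not no_water
    return and_gate(water_too_hot, patient_in_water, water_flowing)
```

    Evaluating both trees over the same inputs shows the design change at work: the cause that previously produced the scald now produces only the benign failure.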

  8. Definition of boundary and initial conditions in the analysis of saturated ground-water flow systems - An introduction

    USGS Publications Warehouse

    Franke, O. Lehn; Reilly, Thomas E.; Bennett, Gordon D.

    1987-01-01

    Accurate definition of boundary and initial conditions is an essential part of conceptualizing and modeling ground-water flow systems. This report describes the properties of the seven most common boundary conditions encountered in ground-water systems and discusses major aspects of their application. It also discusses the significance and specification of initial conditions and evaluates some common errors in applying this concept to ground-water-system models. An appendix is included that discusses what the solution of a differential equation represents and how the solution relates to the boundary conditions defining the specific problem. This report considers only boundary conditions that apply to saturated ground-water systems.

  9. Precipitation and Diabatic Heating Distributions from TRMM/GPM

    NASA Astrophysics Data System (ADS)

    Olson, W. S.; Grecu, M.; Wu, D.; Tao, W. K.; L'Ecuyer, T.; Jiang, X.

    2016-12-01

    The initial focus of our research effort was the development of a physically-based methodology for estimating 3D precipitation distributions from a combination of spaceborne radar and passive microwave radiometer observations. This estimation methodology was originally developed for applications to Global Precipitation Measurement (GPM) mission sensor data, but it has recently been adapted to Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and Microwave Imager observations. Precipitation distributions derived from the TRMM sensors are interpreted using cloud-system resolving model simulations to infer atmospheric latent+eddy heating (Q1-QR) distributions in the tropics and subtropics. Further, the estimates of Q1-QR are combined with estimates of radiative heating (QR), derived from TRMM Microwave Imager and Visible and Infrared Scanner data as well as environmental properties from NCEP reanalyses, to yield estimates of the large-scale total diabatic heating (Q1). A thirteen-year database of precipitation and diabatic heating is constructed using TRMM observations from 1998-2010 as part of NASA's Energy and Water cycle Study program. State-dependent errors in precipitation and heating products are evaluated by propagating the potential errors of a priori modeling assumptions through the estimation method framework. Knowledge of these errors is critical for determining the "closure" of global water and energy budgets. Applications of the precipitation/heating products to climate studies will be presented at the conference.

  10. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.

  11. Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.

    2006-01-01

    Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.

  12. The problem with simple lumped parameter models: Evidence from tritium mean transit times

    NASA Astrophysics Data System (ADS)

    Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr

    2017-04-01

    Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium the error applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system). 
With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.

  13. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy and reducing model run-times, and to show how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
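    One plausible reading of the mixed averaging idea is sketched below: block (storage) elevation as the mean of the fine-grid cells, and face (conveyance) elevation as the minimum along the shared interface, so that a sub-grid channel still connects the two coarse cells. The specific choice of mean and minimum here is an assumption for illustration, not necessarily the averaging operators used in the paper.

```python
def cell_block_elevation(fine_cells):
    """Block (storage) elevation: arithmetic mean of fine-grid elevations."""
    return sum(fine_cells) / len(fine_cells)

def cell_face_elevation(face_cells):
    """Face (conveyance) elevation: minimum along the shared face, so a
    narrow channel crossing the interface is preserved at coarse scale."""
    return min(face_cells)

# A sub-grid channel (0.4 m) crosses an otherwise high (~2 m) interface.
face = [2.0, 2.1, 0.4, 2.0]
```

    With only block averaging, the channel's low point would be smeared into the ~1.6 m mean and the preferential connection lost, which is the failure mode the cell-face treatment addresses.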

  14. Characterizing water surface elevation under different flow conditions for the upcoming SWOT mission

    NASA Astrophysics Data System (ADS)

    Domeneghetti, A.; Schumann, G. J.-P.; Frasson, R. P. M.; Wei, R.; Pavelsky, T. M.; Castellarin, A.; Brath, A.; Durand, M. T.

    2018-06-01

    The Surface Water and Ocean Topography satellite mission (SWOT), scheduled for launch in 2021, will deliver two-dimensional observations of water surface heights for lakes, rivers wider than 100 m and oceans. Even though the scientific literature has highlighted several fields of application for the expected products, detailed simulations of the SWOT radar performance for a realistic river scenario have not yet been presented. Understanding the error of the most fundamental "raw" SWOT hydrology product is important in order to have a greater awareness about strengths and limits of the forthcoming satellite observations. This study focuses on a reach (∼140 km in length) of the middle-lower portion of the Po River, in Northern Italy, and, to date, represents one of the few real-case analyses of the spatial patterns in water surface elevation accuracy expected from SWOT. The river stretch is characterized by a main channel varying from 100 to 500 m in width and a large floodplain (up to 5 km) delimited by a system of major embankments. The simulation of the water surface along the Po River for different flow conditions (high, low and mean annual flows) is performed with inputs from a quasi-2D model implemented using detailed topographic and bathymetric information (LiDAR, 2 m resolution). By employing a simulator that mimics many SWOT satellite sensor characteristics and generates proxies of the remotely sensed hydrometric data, this study characterizes the spatial observations potentially provided by SWOT. We evaluate SWOT performance under different hydraulic conditions and assess possible effects of river embankments, river width, river topography and distance from the satellite ground track. 
Despite analyzing errors from the raw radar pixel cloud, which receives minimal processing, the present study highlights the promising potential of this Ka-band interferometer for measuring water surface elevations, with mean elevation errors of 0.1 cm and 21 cm for high and low flows, respectively. Results of the study characterize the expected performance of the upcoming SWOT mission and provide additional insights into potential applications of SWOT observations.

  15. [Applicability of agricultural production systems simulator (APSIM) in simulating the production and water use of wheat-maize continuous cropping system in North China Plain].

    PubMed

    Wang, Lin; Zheng, You-fei; Yu, Qiang; Wang, En-li

    2007-11-01

    The Agricultural Production Systems Simulator (APSIM) was applied to simulate the 1999-2001 field experimental data and the 2002-2003 water use data at the Yucheng Experiment Station under the Chinese Ecosystem Research Network, aiming to verify the applicability of the model to the wheat-summer maize continuous cropping system in the North China Plain. The results showed that the average errors of the simulations of leaf area index (LAI), biomass, and soil moisture content in the 1999-2000 and 2000-2001 field experiments were 27.61%, 24.59% and 7.68%, and 32.65%, 35.95% and 10.26%, respectively, and those of LAI and biomass on the soils with high and low moisture content in 2002-2003 were 26.65% and 14.52%, and 23.91% and 27.93%, respectively. The simulations of LAI and biomass accorded well with the measured values, with the coefficients of determination being > 0.85 in 1999-2000 and 2002-2003, and 0.78 in 2000-2001, indicating that APSIM had good applicability in modeling the crop biomass and soil moisture content in the continuous cropping system, although the simulation error of LAI was somewhat larger.

  16. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (i.e., when optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, in order to justify the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, impacts of such differences on the error surface as well as on the retrievals are also presented.
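    The spectral optimization idea can be sketched end to end with a toy forward model and a coarse grid search over the two variables. Everything here is illustrative: the reflectance model, the water absorption/backscatter constants, and the spectral shapes are simplified assumptions, not the operational algorithms the abstract discusses.

```python
import math

def model_rrs(aph, bbp, wavelengths):
    """Toy forward model: rrs ~ g * bb/(a + bb), with assumed pure-water
    terms and simple spectral shapes for phytoplankton absorption (aph,
    value at 440 nm) and particle backscattering (bbp, value at 440 nm)."""
    aw = {440: 0.006, 490: 0.015, 555: 0.06}      # assumed water absorption
    bbw = {440: 0.0024, 490: 0.0015, 555: 0.0009}  # assumed water backscatter
    out = []
    for wl in wavelengths:
        a = aw[wl] + aph * math.exp(-0.015 * (wl - 440))
        bb = bbw[wl] + bbp * (440.0 / wl)
        out.append(0.084 * bb / (a + bb))
    return out

def spectral_error(measured, modeled):
    """Sum-of-squares error target between input and modeled spectra."""
    return sum((m - o) ** 2 for m, o in zip(measured, modeled))

def invert(measured, wavelengths):
    """Coarse grid search for the (aph, bbp) minimizing the error surface."""
    best = None
    for i in range(1, 60):
        for j in range(1, 60):
            aph, bbp = 0.005 * i, 0.001 * j
            err = spectral_error(measured, model_rrs(aph, bbp, wavelengths))
            if best is None or err < best[0]:
                best = (err, aph, bbp)
    return best[1], best[2]

wavelengths = [440, 490, 555]
measured = model_rrs(0.05, 0.01, wavelengths)  # synthetic "measurement"
aph_hat, bbp_hat = invert(measured, wavelengths)
```

    A grid search makes the error surface explicit; gradient-based optimizers are faster but, as the abstract notes, rely on the error behaving monotonically in each variable near the solution.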

  17. A robust Multi-Band Water Index (MBWI) for automated extraction of surface water from Landsat 8 OLI imagery

    NASA Astrophysics Data System (ADS)

    Wang, Xiaobiao; Xie, Shunping; Zhang, Xueliang; Chen, Cheng; Guo, Hao; Du, Jinkang; Duan, Zheng

    2018-06-01

Surface water is a vital resource for terrestrial life, yet rapid urbanization is producing diverse changes in the size, amount, and quality of surface water bodies. Accurate extraction of surface water from remote sensing imagery is therefore important for water environment conservation and water resource management. In this study, a new Multi-Band Water Index (MBWI) for Landsat 8 Operational Land Imager (OLI) images is proposed by maximizing the spectral difference between water and non-water surfaces using pure pixels. Based on the MBWI map, the K-means clustering method is applied to automatically extract surface water. The performance of the MBWI is validated and compared with six widely used water indices at 29 sites in China. Results show that the proposed MBWI performs best, with the highest accuracy in 26 of the 29 test sites. Compared with the other water indices, the MBWI yields lower mean total water errors by 9.31%-25.99%, and higher mean overall accuracies and kappa coefficients by 0.87%-3.73% and 0.06-0.18, respectively. The MBWI is also shown to robustly discriminate surface water from confusing backgrounds that are common sources of extraction error, e.g., mountain shadows and dark built-up areas. In addition, the new index is shown to mitigate seasonal and diurnal influences resulting from variations in solar conditions. The MBWI thus holds potential as a useful surface water extraction technique for water resource studies and applications.
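The two-step extraction (index map, then K-means split into water/non-water) can be sketched as follows; the stand-in index and toy reflectances are assumptions, since the abstract does not give the MBWI band combination:

```python
# Sketch of index-plus-clustering water extraction: (1) compute a per-pixel
# water index, (2) split index values into two clusters with a simple 1-D
# K-means (K = 2).  The index is a generic green-vs-SWIR stand-in, not MBWI.

def water_index(green, swir):
    """Hypothetical stand-in index: high for water, low for land."""
    return (green - swir) / (green + swir)

def two_means_1d(values, iters=50):
    """Simple 1-D K-means with K=2; returns (low_center, high_center, labels)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            groups[0 if abs(v - lo) <= abs(v - hi) else 1].append(v)
        new_lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        new_hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
        if (new_lo, new_hi) == (lo, hi):
            break
        lo, hi = new_lo, new_hi
    labels = [1 if abs(v - hi) < abs(v - lo) else 0 for v in values]
    return lo, hi, labels

# Toy scene: three water-like pixels (low SWIR) and three land-like pixels.
pixels = [(0.30, 0.02), (0.28, 0.03), (0.25, 0.02),   # water
          (0.10, 0.25), (0.12, 0.30), (0.15, 0.28)]   # land
idx = [water_index(g, s) for g, s in pixels]
_, _, water_mask = two_means_1d(idx)
```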

  18. Integrating Satellite and Surface Sensor Networks for Irrigation Management Applications in California

    NASA Astrophysics Data System (ADS)

    Melton, F. S.; Johnson, L.; Post, K. M.; Guzman, A.; Zaragoza, I.; Spellenberg, R.; Rosevelt, C.; Michaelis, A.; Nemani, R. R.; Cahn, M.; Frame, K.; Temesgen, B.; Eching, S.

    2016-12-01

    Satellite mapping of evapotranspiration (ET) from irrigated agricultural lands can provide agricultural producers and water managers with information that can be used to optimize agricultural water use, especially in regions with limited water supplies. The timely delivery of information on agricultural crop water requirements has the potential to make irrigation scheduling more practical, convenient, and accurate. We present a system for irrigation scheduling and management support in California and describe lessons learned from the development and implementation of the system. The Satellite Irrigation Management Support (SIMS) framework integrates satellite data with information from agricultural weather networks to map crop canopy development, basal crop coefficients (Kcb), and basal crop evapotranspiration (ETcb) at the scale of individual fields. Information is distributed to agricultural producers and water managers via a web-based irrigation management decision support system and web data services. SIMS also provides an application programming interface (API) that facilitates integration with other irrigation decision support tools, estimation of total crop evapotranspiration (ETc) and calculation of on-farm water use efficiency metrics. Accuracy assessments conducted in commercial fields for more than a dozen crop types to date have shown that SIMS seasonal ETcb estimates are within 10% mean absolute error (MAE) for well-watered crops and within 15% across all crop types studied, and closely track daily ETc and running totals of ETc measured in each field. Use of a soil water balance model to correct for soil evaporation and crop water stress reduces this error to less than 8% MAE across all crop types studied to date relative to field measurements of ETc. 
Results from irrigation trials conducted by the project for four vegetable crops have also demonstrated the potential for use of ET-based irrigation management strategies to reduce total applied water by 20-40% relative to grower standard practices while maintaining crop yields and quality.
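The crop-coefficient bookkeeping underlying SIMS (daily basal crop ET as Kcb times reference ET, accumulated into seasonal running totals) can be sketched as follows; the Kcb and ETo values are invented, not SIMS outputs:

```python
# Sketch of basal crop ET accounting: ETcb = Kcb * ETo per day, with a
# seasonal running total of the kind SIMS reports to growers.

def etcb_series(kcb, eto):
    """Daily basal crop ET (mm/day) from Kcb and reference ETo."""
    return [k * e for k, e in zip(kcb, eto)]

def running_total(series):
    total, out = 0.0, []
    for v in series:
        total += v
        out.append(total)
    return out

kcb = [0.2, 0.4, 0.7, 1.0, 1.05]   # canopy development over 5 example days
eto = [5.0, 5.5, 6.0, 6.2, 5.8]    # reference ET from a weather network, mm/day
daily = etcb_series(kcb, eto)
season = running_total(daily)
```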

  19. Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.

    PubMed

    Barrs, H D

    1965-07-02

    A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.

  20. Understanding virtual water flows: A multiregion input-output case study of Victoria

    NASA Astrophysics Data System (ADS)

    Lenzen, Manfred

    2009-09-01

    This article explains and interprets virtual water flows from the well-established perspective of input-output analysis. Using a case study of the Australian state of Victoria, it demonstrates that input-output analysis can enumerate virtual water flows without systematic and unknown truncation errors, an issue which has been largely absent from the virtual water literature. Whereas a simplified flow analysis from a producer perspective would portray Victoria as a net virtual water importer, enumerating the water embodiments across the full supply chain using input-output analysis shows Victoria as a significant net virtual water exporter. This study has succeeded in informing government policy in Australia, which is an encouraging sign that input-output analysis will be able to contribute much value to other national and international applications.
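The truncation-free enumeration that input-output analysis provides can be illustrated with a toy two-sector economy: total embodied-water multipliers come from the Leontief inverse, m = w(I - A)^(-1). All coefficients below are invented:

```python
# Sketch of virtual-water accounting via input-output analysis: A is the
# technical coefficient matrix, w the direct water use per unit of output,
# and the multipliers include all indirect (supply-chain) water with no
# truncation of higher-order terms.

def inverse_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def row_times_matrix(v, m):
    return [sum(v[k] * m[k][j] for k in range(len(v))) for j in range(len(m[0]))]

A = [[0.1, 0.2],    # inter-industry requirements per unit output (invented)
     [0.3, 0.1]]
w = [5.0, 0.5]      # direct water use (L per $): agriculture, manufacturing

I_minus_A = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
leontief = inverse_2x2(I_minus_A)
multipliers = row_times_matrix(w, leontief)   # total water per $ of final demand
```

Each total multiplier exceeds the corresponding direct coefficient, which is precisely the indirect virtual water a truncated flow analysis would miss.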

  1. Estimated Ground-Water Withdrawals from the Death Valley Regional Flow System, Nevada and California, 1913-98

    USGS Publications Warehouse

    Moreo, Michael T.; Halford, Keith J.; La Camera, Richard J.; Laczniak, Randell J.

    2003-01-01

    Ground-water withdrawals from 1913 through 1998 from the Death Valley regional flow system have been compiled to support a regional, three-dimensional, transient ground-water flow model. Withdrawal locations and depths of production intervals were estimated and associated errors were reported for 9,300 wells. Withdrawals were grouped into three categories: mining, public-supply, and commercial water use; domestic water use; and irrigation water use. In this report, groupings were based on the method used to estimate pumpage. Cumulative ground-water withdrawals from 1913 through 1998 totaled 3 million acre-feet, most of which was used to irrigate alfalfa. Annual withdrawal for irrigation ranged from 80 to almost 100 percent of the total pumpage. About 75,000 acre-feet was withdrawn for irrigation in 1998. Annual irrigation withdrawals generally were estimated as the product of irrigated acreage and application rate. About 320 fields totaling 11,000 acres were identified in six hydrographic areas. Annual application rates for high water-use crops ranged from 5 feet in Penoyer Valley to 9 feet in Pahrump Valley. The uncertainty in the estimates of ground-water withdrawals was attributed primarily to the uncertainty of application rate estimates. Annual ground-water withdrawal was estimated at about 90,000 acre-feet in 1998 with an assigned uncertainty bounded by 60,000 to 130,000 acre-feet.
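The irrigation estimate described above is a product of acreage and application rate, with the uncertainty dominated by the application-rate bounds; a minimal sketch with illustrative figures:

```python
# Sketch of the withdrawal arithmetic: annual pumpage = irrigated acreage x
# application rate (feet), with low/high bounds carried from the rate
# uncertainty.  Figures are illustrative, not the report's estimates.

def irrigation_withdrawal(acres, rate_ft, rate_lo, rate_hi):
    """Returns (estimate, low, high) in acre-feet."""
    return acres * rate_ft, acres * rate_lo, acres * rate_hi

est, lo, hi = irrigation_withdrawal(acres=11000, rate_ft=7.0,
                                    rate_lo=5.0, rate_hi=9.0)
```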

  2. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.

  3. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
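A mass-conserving fixer of the kind mentioned for negative water concentrations can be sketched generically; this illustrates the idea, not the EAMv1 implementation:

```python
# Sketch of a mass-conserving "fixer": negative water concentrations are
# clipped to zero, and the remaining values are rescaled so the column total
# (the conserved quantity) is unchanged.  Assumes the column total is positive.

def clip_and_conserve(q):
    """Clip negatives, then rescale positives to preserve the original sum."""
    total = sum(q)
    clipped = [max(v, 0.0) for v in q]
    clipped_total = sum(clipped)
    if clipped_total == 0.0:
        return clipped  # nothing left to rescale
    scale = total / clipped_total
    return [v * scale for v in clipped]

column = [0.004, -0.001, 0.002, 0.005]   # water mixing ratios, one negative
fixed = clip_and_conserve(column)
```

As the abstract notes, such a fixer is a remedy: it restores the budget after the fact rather than removing the source of the negative values.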

  4. Density Functional Theory Calculation of pKa's of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2016-07-21

    The pKa's of substituted thiols are important for understanding their properties and reactivities in applications in chemistry, biochemistry, and material chemistry. For a collection of 175 different density functionals and the SMD implicit solvation model, the average errors in the calculated pKa's of methanethiol and ethanethiol are almost 10 pKa units higher than for imidazole. A test set of 45 substituted thiols with pKa's ranging from 4 to 12 has been used to assess the performance of 8 functionals with 3 different basis sets. As expected, the basis set needs to include polarization functions on the hydrogens and diffuse functions on the heavy atoms. Solvent cavity scaling was ineffective in correcting the errors in the calculated pKa's. Inclusion of an explicit water molecule that is hydrogen bonded with the H of the thiol group (in neutral) or S(-) (in thiolates) lowers error by an average of 3.5 pKa units. With one explicit water and the SMD solvation model, pKa's calculated with the M06-2X, PBEPBE, BP86, and LC-BLYP functionals are found to deviate from the experimental values by about 1.5-2.0 pKa units whereas pKa's with the B3LYP, ωB97XD and PBEVWN5 functionals are still in error by more than 3 pKa units. The inclusion of three explicit water molecules lowers the calculated pKa further by about 4.5 pKa units. With the B3LYP and ωB97XD functionals, the calculated pKa's are within one unit of the experimental values whereas most other functionals used in this study underestimate the pKa's. This study shows that the ωB97XD functional with the 6-31+G(d,p) and 6-311++G(d,p) basis sets, and the SMD solvation model with three explicit water molecules hydrogen bonded to the sulfur produces the best result for the test set (average error -0.11 ± 0.50 and +0.15 ± 0.58, respectively). The B3LYP functional also performs well (average error -1.11 ± 0.82 and -0.78 ± 0.79, respectively).
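The last step of a pKa calculation like the one above converts an aqueous deprotonation free energy into a pKa via pKa = ΔG/(RT ln 10); a minimal sketch with an invented ΔG:

```python
# Sketch of the thermodynamic conversion at the end of a DFT/continuum pKa
# workflow: once dG(aq) for RSH -> RS- + H+ is computed (DFT + SMD + explicit
# waters), pKa = dG / (RT ln 10).  The dG value below is invented.

import math

R_KCAL = 0.0019872041   # gas constant, kcal/(mol K)
T = 298.15              # K

def pka_from_dg(dg_kcal):
    return dg_kcal / (R_KCAL * T * math.log(10))

pka = pka_from_dg(12.0)   # an illustrative thiol-like deprotonation free energy
```

A systematic error of ~1.4 kcal/mol in ΔG shifts the pKa by about one unit, which is why the functional/basis-set/explicit-water choices above matter so much.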

  5. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6 °C. The root mean squared error for the five sites used for the testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four error sources (model error, data error, parameter error, and user error).
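The two reported statistics, root mean squared error and mean error (bias), can be sketched as follows, with illustrative temperatures in place of the Yakima data:

```python
# Sketch of the error statistics quoted for the temperature model: RMSE and
# mean error (bias) between simulated and observed daily maxima (deg C).
# Values are invented.

import math

def rmse(simulated, observed):
    n = len(observed)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n)

def mean_error(simulated, observed):
    n = len(observed)
    return sum(s - o for s, o in zip(simulated, observed)) / n

obs = [18.2, 19.5, 21.0, 22.4, 20.8]
sim = [19.0, 20.9, 22.1, 23.0, 22.3]
r = rmse(sim, obs)
b = mean_error(sim, obs)
```

A positive mean error indicates the model runs warm on average; RMSE additionally penalizes scatter about that bias.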

  6. Cl app: android-based application program for monitoring the residue chlorine in water

    NASA Astrophysics Data System (ADS)

    Intaravanne, Yuttana; Sumriddetchkajorn, Sarun; Porntheeraphat, Supanit; Chaitavon, Kosom; Vuttivong, Sirajit

    2015-07-01

A farmer usually uses a cheap chemical, chlorine, to destroy the cell structure of unwanted organisms and remove some plant effluents in a baby shrimp farm. The color change of the reaction between chlorine and a chemical indicator is used to monitor the residual chlorine in water before releasing baby shrimp into a pond. To eliminate errors in color reading, our previous work showed how a smartphone can function as a color reader for estimating the chlorine concentration in water. In this paper, we show the improvement of the interior configuration of our prototype and its distribution to several baby shrimp farms. In the future, we plan to make it available worldwide through the online market, as well as to develop more application programs for monitoring other chemical substances.

  7. Assimilation of flood extent data with 2D flood inundation models for localised intense rainfall events

    NASA Astrophysics Data System (ADS)

    Neal, J. C.; Wood, M.; Bermúdez, M.; Hostache, R.; Freer, J. E.; Bates, P. D.; Coxon, G.

    2017-12-01

Remote sensing of flood inundation extent has long been a potential source of data for constraining and correcting simulations of floodplain inundation. Hydrodynamic models and the computing resources to run them have developed to the extent that simulation of flood inundation in two-dimensional space is now feasible over large river basins in near real-time. However, despite substantial evidence that there is useful information content within inundation extent data, even from low-resolution SAR such as that gathered by Envisat ASAR in wide swath mode, making use of the information in a data assimilation system has proved difficult. Here we review recent applications of the Ensemble Kalman Filter (EnKF) and Particle Filter for assimilating SAR data, with a focus on the River Severn, UK, and compare these with complementary research that has examined internal error sources and boundary condition errors using detailed terrestrial data that are not available in most locations. Previous applications of the EnKF to this reach have focused on upstream boundary conditions as the source of flow error; however, this description of errors was too simplistic for the simulation of summer flood events, where localised intense rainfall can be substantial. Therefore, we evaluate the introduction of uncertain lateral inflows to the ensemble. A further limitation of the existing EnKF-based methods is the need to convert flood extent to water surface elevations by intersecting the shoreline location with a high-quality digital elevation model (e.g. LiDAR). To simplify this data processing step, we evaluate a method to directly assimilate inundation extent as an EnKF model state rather than assimilating water heights, potentially allowing the scheme to be used where high-quality terrain data are sparse.
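A scalar-state sketch of the EnKF analysis step discussed above, with perturbed observations and a gain built from ensemble statistics; all numbers are illustrative, and a real flood application would use a spatial state vector:

```python
# Sketch of the ensemble Kalman filter update for a scalar state (e.g. water
# level at one cell) and a direct scalar observation.  Each ensemble member
# is nudged toward a perturbed observation by the Kalman gain computed from
# the ensemble variance.

import random

def enkf_update(ensemble, obs, obs_var, rng):
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)           # Kalman gain for a direct observation
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(2.0, 0.5) for _ in range(200)]   # forecast water levels (m)
posterior = enkf_update(prior, obs=2.6, obs_var=0.01, rng=rng)
prior_mean = sum(prior) / len(prior)
post_mean = sum(posterior) / len(posterior)
```

With an accurate observation the gain is near one, so the posterior ensemble collapses toward the observed level while retaining spread consistent with the observation error.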

  8. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water, and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). 
More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.

  9. Understanding Effective Diameter and Its Application to Terrestrial Radiation in Ice Clouds

    NASA Technical Reports Server (NTRS)

    Mitchell, D. L.; Lawson, R. P.; Baker, B.

    2011-01-01

The cloud property known as "effective diameter" or "effective radius", which in essence is the cloud particle size distribution (PSD) volume at bulk density divided by its projected area, is used extensively in atmospheric radiation transfer, climate modeling and remote sensing. This derives from the assumption that PSD optical properties can be uniquely described in terms of their effective diameter, D(sub e), and their cloud water content (CWC), henceforth referred to as the D(sub e)-CWC assumption. This study challenges this assumption, showing that while the D(sub e)-CWC assumption appears generally valid for liquid water clouds, it appears less valid for ice clouds in regions where (1) absorption is not primarily a function of either the PSD ice water content (IWC) or the PSD projected area, and (2) where wave resonance (i.e. photon tunneling) contributes significantly to absorption. These two regions often strongly coincide at terrestrial wavelengths when D(sub e) is less than about 60 micrometers, which is where this D(sub e)-CWC assumption appears poorest. Treating optical properties solely in terms of D(sub e) and IWC may lead to errors up to 24%, 26% and 20% for terrestrial radiation in the window region regarding the absorption and extinction coefficients and the single scattering albedo, respectively. Outside the window region, errors may reach 33% and 42% regarding absorption and extinction. The magnitude and sign of these errors can change rapidly with wavelength, which may produce significant errors in climate modeling, remote sensing and other applications concerned with the wavelength dependence of radiation. Where the D(sub e)-CWC assumption breaks down, ice cloud optical properties appear to depend on D(sub e), IWC and the PSD shape. Optical property parameterizations in climate models and remote sensing algorithms based on historical PSD measurements may exhibit errors due to previously unknown PSD errors (i.e. 
the presence of ice artifacts due to the shattering of larger ice particles on the probe inlet tube during sampling). More recently developed cloud probes are designed to mitigate this shattering problem. Using realistic PSD shapes for a given temperature (and/or IWC) and cloud type may minimize errors associated with PSD shape in ice optics parameterizations and remote sensing algorithms. While this topic was investigated using two ice optics schemes (the Yang et al., 2005 database and the modified anomalous diffraction approximation, or MADA), a physical understanding of the limitations of the D(sub e)-IWC assumption was made possible by using MADA. MADA allows one to approximate the contribution of photon tunneling to absorption relative to other optical processes, which reveals that part of the error regarding the D(sub e)-IWC assumption can be associated with tunneling. By relating the remaining error to the radiation penetration depth in bulk ice (DELTA L) due to absorption, the domain where the D(sub e)-IWC assumption is weakest was described in terms of D(sub e) and DELTA L.

  10. Understanding effective diameter and its application to terrestrial radiation in ice clouds

    NASA Astrophysics Data System (ADS)

    Mitchell, D. L.; Lawson, R. P.; Baker, B.

    2010-12-01

    The cloud property known as "effective diameter" or "effective radius", which in essence is the cloud particle size distribution (PSD) volume at bulk density divided by its projected area, is used extensively in atmospheric radiation transfer, climate modeling and remote sensing. This derives from the assumption that PSD optical properties can be uniquely described in terms of their effective diameter, De, and their cloud water content (CWC), henceforth referred to as the De-CWC assumption. This study challenges this assumption, showing that while the De-CWC assumption appears generally valid for liquid water clouds, it appears less valid for ice clouds in regions where (1) absorption is not primarily a function of either the PSD ice water content (IWC) or the PSD projected area, and (2) where wave resonance (i.e. photon tunneling) contributes significantly to absorption. These two regions often strongly coincide at terrestrial wavelengths when De<∼60 μm, which is where this De-CWC assumption appears poorest. Treating optical properties solely in terms of De and IWC may lead to errors up to 24%, 26% and 20% for terrestrial radiation in the window region regarding the absorption and extinction coefficients and the single scattering albedo, respectively. Outside the window region, errors may reach 33% and 42% regarding absorption and extinction. The magnitude and sign of these errors can change rapidly with wavelength, which may produce significant errors in climate modeling, remote sensing and other applications concerned with the wavelength dependence of radiation. Where the De-CWC assumption breaks down, ice cloud optical properties appear to depend on De, IWC and the PSD shape. Optical property parameterizations in climate models and remote sensing algorithms based on historical PSD measurements may exhibit errors due to previously unknown PSD errors (i.e. 
the presence of ice artifacts due to the shattering of larger ice particles on the probe inlet tube during sampling). More recently developed cloud probes are designed to mitigate this shattering problem. Using realistic PSD shapes for a given temperature (and/or IWC) and cloud type may minimize errors associated with PSD shape in ice optics parameterizations and remote sensing algorithms. While this topic was investigated using two ice optics schemes (the Yang et al. (2005) database and the modified anomalous diffraction approximation, or MADA), a physical understanding of the limitations of the De-IWC assumption was made possible by using MADA. MADA allows one to separate the photon tunneling process from the other optical processes, which reveals that much of the error regarding the De-IWC assumption can be associated with tunneling. By relating the remaining error to the radiation penetration depth in bulk ice (ΔL) due to absorption, the domain where the De-IWC assumption is weakest was described in terms of De and ΔL.
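The definition quoted in both records (effective diameter as the PSD volume at bulk density divided by its projected area, with the conventional 3/2 factor so that spheres return their own diameter) can be checked numerically; the size bins and concentrations below are invented:

```python
# Sketch of the effective-diameter definition De = (3/2) * V / A for a
# discretized PSD of spherical particles: V and A are the total volume and
# total projected area summed over size bins.

import math

def effective_diameter(diameters, concentrations):
    """De = 1.5 * total volume / total projected area (spherical particles)."""
    volume = sum(n * math.pi * d ** 3 / 6.0
                 for d, n in zip(diameters, concentrations))
    area = sum(n * math.pi * d ** 2 / 4.0
               for d, n in zip(diameters, concentrations))
    return 1.5 * volume / area

# Monodisperse check: a single 50-micron bin must return De = 50.
assert abs(effective_diameter([50.0], [100.0]) - 50.0) < 1e-9

# A broad PSD weights De toward the large, volume-dominant particles.
de = effective_diameter([10.0, 50.0, 200.0], [1000.0, 100.0, 1.0])
```

Because De compresses the whole PSD into one number, two PSDs with the same De and IWC but different shapes can have different optical properties, which is exactly the breakdown the study documents.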

  11. Using computational modeling of river flow with remotely sensed data to infer channel bathymetry

    USGS Publications Warehouse

    Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.

    2012-01-01

As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: First, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates, and these errors cannot be recovered. 
Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
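In one dimension the inversion reduces to continuity, h = q/u; a sketch showing how a velocity error propagates directly into the inferred depth (all values synthetic):

```python
# Sketch of depth inversion from mass conservation in 1-D: steady flow gives
# q = h * u = const per unit width, so depth follows from the observed
# depth-averaged velocity as h = q / u.  A 5% velocity error maps into
# roughly a 5% depth error, illustrating the sensitivity discussed above.

def depth_from_velocity(q, velocities):
    """Invert continuity h = q / u at each station."""
    return [q / u for u in velocities]

q = 2.0                                  # unit discharge, m^2/s (synthetic)
u_true = [1.0, 1.25, 2.0, 1.6]           # depth-averaged velocities, m/s
h_true = depth_from_velocity(q, u_true)

u_noisy = [u * 1.05 for u in u_true]     # 5% positive velocity bias
h_noisy = depth_from_velocity(q, u_noisy)
rel_err = [abs(hn - ht) / ht for hn, ht in zip(h_noisy, h_true)]
```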

  12. Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study

    NASA Technical Reports Server (NTRS)

    Walter, Steven J.; Spilker, Thomas R.

    1995-01-01

A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivities of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration-induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors, because random errors can be reduced by averaging. 
Calibrating the resonator measurements by checking the refractivity of dry gases which are known to better than 0.1% provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e. conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3% and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.
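As a concrete illustration of the resonator principle (not the study's actual apparatus), the refractive index of the fill gas follows from the downward shift of a cavity's resonant frequency, n = f_vacuum / f_gas, and refractivity is conventionally reported in parts per million. A minimal sketch, with illustrative frequencies:

```python
def refractivity_ppm(f_vacuum_hz, f_gas_hz):
    """Refractivity N = (n - 1) * 1e6 from a cavity resonance shift.

    The resonant frequency of a cavity scales inversely with the
    refractive index of the gas filling it, so n = f_vacuum / f_gas.
    """
    n = f_vacuum_hz / f_gas_hz
    return (n - 1.0) * 1e6

# Illustrative numbers: a 30 GHz mode shifting down by ~9 MHz corresponds
# to a refractivity of about 300 ppm, a typical magnitude for moist
# surface air.
N = refractivity_ppm(30.000e9, 29.991e9)
```

Calibrating against a dry gas of known refractivity, as the study proposes, amounts to verifying that this ratio reproduces the accepted value to within the target 0.1%.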

  13. Methods of Fitting a Straight Line to Data: Examples in Water Resources

    USGS Publications Warehouse

    Hirsch, Robert M.; Gilroy, Edward J.

    1984-01-01

    Three methods of fitting straight lines to data are described and their purposes are discussed and contrasted in terms of their applicability in various water resources contexts. The three methods are ordinary least squares (OLS), least normal squares (LNS), and the line of organic correlation (OC). In all three methods the parameters are based on moment statistics of the data. When estimation of an individual value is the objective, OLS is the most appropriate. When estimation of many values is the objective and one wants the set of estimates to have the appropriate variance, then OC is most appropriate. When one wishes to describe the relationship between two variables and measurement error is unimportant, then OC is most appropriate. Where the error is important in descriptive problems or in calibration problems, then structural analysis techniques may be most appropriate. Finally, if the problem is one of describing some geographic trajectory, then LNS is most appropriate.
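All three fits can be written in terms of the moment statistics the abstract mentions (standard deviations and the correlation coefficient). A minimal sketch of the three slopes, where the LNS slope is the major-axis direction of the sample covariance matrix:

```python
import numpy as np

def fit_lines(x, y):
    """Slopes of the three straight-line fits: OLS, LNS, and OC.

    OLS minimizes vertical distances; LNS (least normal squares,
    major-axis regression) minimizes perpendicular distances; organic
    correlation (OC, reduced major axis) preserves the variance of the
    estimates. All three depend only on moment statistics of the data.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    b_ols = r * sy / sx
    b_oc = np.sign(r) * sy / sx
    # LNS slope: the appropriate root of the major-axis quadratic
    k = (sy**2 - sx**2) / (2.0 * r * sx * sy)
    b_lns = k + np.sign(r) * np.sqrt(k**2 + 1.0)
    return b_ols, b_lns, b_oc
```

For perfectly correlated data the three slopes coincide; as scatter grows, OLS flattens toward zero while OC keeps the ratio of standard deviations, which is why OC preserves the variance of a set of estimates.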

  14. Using inferential sensors for quality control of Everglades Depth Estimation Network water-level data

    USGS Publications Warehouse

    Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul

    2016-09-29

    The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high-quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to identify immediately with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time-consuming, and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (the water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example). 
In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potential erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
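The report does not specify ADAM's empirical model form; as a hedged sketch, a least-squares linear model of proximal stations can stand in for the inferential sensor, with a tolerance check flagging suspect real-time values (the data and the 0.05 tolerance are illustrative, not operational values):

```python
import numpy as np

def train_virtual_sensor(neighbors, target):
    """Fit a linear virtual sensor: target ~ neighbors + intercept.

    neighbors: (n_samples, n_stations) water levels at proximal stations;
    target: (n_samples,) water level at the station being modeled.
    """
    X = np.column_stack([neighbors, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

def estimate(coef, neighbors):
    """Virtual-sensor estimate of the target station's water level."""
    X = np.column_stack([neighbors, np.ones(len(neighbors))])
    return X @ coef

def flag_outliers(observed, virtual, tol=0.05):
    """Flag real-time values whose departure from the virtual signal
    exceeds the tolerance, for review by support personnel."""
    return np.abs(observed - virtual) > tol
```

Each morning's comparison in ADAM corresponds to `flag_outliers` applied to the latest real-time values against the virtual signal.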

  15. Watershed Scale Analysis of Groundwater Surface Water Interactions and Its Application to Conjunctive Management under Climatic and Anthropogenic Stresses over the US Sunbelt

    NASA Astrophysics Data System (ADS)

    Seo, Seung Beom

    Although water is one of the most essential natural resources, human activities have been exerting pressure on water resources. In order to reduce these stresses on water resources, two key issues threatening water resources sustainability - interaction between surface water and groundwater resources and groundwater withdrawal impacts of streamflow depletion - were investigated in this study. First, a systematic decomposition procedure was proposed for quantifying the errors arising from various sources in the model chain in projecting the changes in hydrologic attributes using near-term climate change projections. Apart from the unexplained changes by GCMs, the process of customizing GCM projections to watershed scale through a model chain - spatial downscaling, temporal disaggregation and hydrologic model - also introduces errors, thereby limiting the ability to explain the observed changes in hydrologic variability. Towards this, we first propose metrics for quantifying the errors arising from different steps in the model chain in explaining the observed changes in hydrologic variables (streamflow, groundwater). The proposed metrics are then evaluated using detailed retrospective analyses in projecting the changes in streamflow and groundwater attributes in four target basins that span diverse hydroclimatic regimes over the US Sunbelt. Our analyses focused on quantifying the dominant sources of errors in projecting the changes in eight hydrologic variables - mean and variability of seasonal streamflow, mean and variability of 3-day peak seasonal streamflow, mean and variability of 7-day low seasonal streamflow and mean and standard deviation of groundwater depth - over four target basins using the Penn State Integrated Hydrologic Model (PIHM) between the periods 1956-1980 and 1981-2005. Retrospective analyses show that small/humid (large/arid) basins show increased (reduced) uncertainty in projecting the changes in hydrologic attributes. 
    Further, changes in error due to GCMs primarily account for the unexplained changes in mean and variability of seasonal streamflow. On the other hand, the changes in error due to temporal disaggregation and hydrologic model account for the inability to explain the observed changes in mean and variability of seasonal extremes. Thus, the proposed metrics provide insight into how the error in explaining the observed changes propagates through the model chain under different hydroclimatic regimes. To understand the interaction between surface water and groundwater resources, transient pumping impacts on streamflow and groundwater level were analyzed by imposing short-term pumping scenarios under historic drought conditions. Since surface water and groundwater systems are fully coupled and integrated systems, increased groundwater withdrawal during drought may reduce baseflow into the stream and prolong both systems' recovery from drought. Towards this, we proposed an uncertainty framework to understand the resiliency of groundwater and surface water systems using a fully-coupled hydrologic model under transient pumping. Using this framework, we quantified the restoration time of surface water and groundwater systems and also estimated the changes in the state variables after pumping. Groundwater pumping impacts over the watershed were also analyzed under different pumping volumes and different potential climate scenarios. Our analyses show that groundwater restoration time is more sensitive to changes in pumping volumes as opposed to changes in climate. After the cessation of pumping, streamflow recovers quickly in comparison to groundwater. Pumping impacts on other state variables are also discussed. Given that surface water and groundwater are inter-connected, optimal management of both resources should be considered to improve the watershed resiliency under drought. 
    Subsequently, conjunctive use of surface water and groundwater has been considered as an effective approach to mitigate water shortage problems that are primarily caused by drought. It is found that appropriate use of groundwater withdrawal was able to reduce water scarcity in surface water resources under drought conditions. In addition, a recovery-time constraint was embedded in the management model so that the trade-off between minimizing water scarcity and maximizing groundwater sustainability was successfully addressed.

  16. Fringe Capacitance Correction for a Coaxial Soil Cell

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.

    2011-01-01

    Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of the surface area and bound water content are becoming increasingly important for providing answers to many fundamental questions, ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research, to plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher accuracy water content measurements is utilization of electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process-control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which will remove the adverse effects that high surface area soils and conductivity impart onto the measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. 
    To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance, which is then used to correct the experimental results. With this Poisson-model-derived correction factor, experimental measurements utilizing differing coaxial cell diameters and probe lengths all produce the same results, thereby lending support for an augmented technique for measurement of absolute permittivity. PMID:22346601
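The intuition behind an "effective extra length" can be illustrated (hypothetically, not with the paper's Poisson model) by noting that a fringe capacitance adds a length-independent offset to the cell capacitance, so measurements at several probe lengths recover it from a straight-line fit:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def extra_length(lengths_m, capacitances_f):
    """Estimate the effective extra length dL of a coaxial cell.

    For an ideal coaxial cell, C = k * L with k = 2*pi*eps0*eps_r / ln(b/a).
    A fringe capacitance shifts this to C = k * (L + dL), so fitting
    C against L gives dL = intercept / slope, independent of k.
    """
    slope, intercept = np.polyfit(lengths_m, capacitances_f, 1)
    return intercept / slope

# Synthetic check with an assumed b/a = 2.3 air-filled cell and a
# 4 mm effective extra length:
k = 2.0 * np.pi * EPS0 / np.log(2.3)
lengths = np.array([0.05, 0.10, 0.15])           # probe lengths, m
caps = k * (lengths + 0.004)                     # simulated capacitances
dL = extra_length(lengths, caps)                 # recovers ~0.004 m
```

Once dL is known, permittivity estimates use L + dL in place of L, which is why corrected results from different cell diameters and probe lengths collapse onto the same values.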

  17. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
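The relative water content discussed above is the standard gravimetric ratio; water adsorbed by the chamber effectively removes water from a small sample, biasing the fresh mass downward. A minimal sketch with illustrative (hypothetical) masses:

```python
def relative_water_content(fresh_mass, turgid_mass, dry_mass):
    """RWC = (FW - DW) / (TW - DW), expressed as a percentage.

    FW: fresh mass at sampling; TW: mass at full turgor after
    rehydration; DW: oven-dry mass. Chamber adsorption lowers the
    apparent FW, and the resulting error shrinks as sample mass grows
    relative to the adsorbed amount.
    """
    return 100.0 * (fresh_mass - dry_mass) / (turgid_mass - dry_mass)

# Illustrative masses in grams:
rwc = relative_water_content(0.92, 1.00, 0.20)  # 90% of full turgor
```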

  18. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  19. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated with remote sensing and in situ data where available, these techniques are illustrated in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  20. The estimation of soil water fluxes using lysimeter data

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    2009-04-01

    The validation of soil water balance models regarding soil water fluxes in the field is still a problem. This requires time series of measured model outputs. In our study, a soil water balance model, Hydrus-1D, was validated using lysimeter time series of measured model outputs. This model was tested by a comparison of simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of errors, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.
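The measured rates compared above are linked by the daily lysimeter water balance, a standard identity (not Hydrus-1D itself): evapotranspiration follows from precipitation, percolation at the lower boundary, and the weighed change in storage. A minimal sketch with illustrative values in mm/day:

```python
def lysimeter_et(precip, percolation, storage_change):
    """Daily lysimeter water balance: ET = P - D - dS.

    precip: precipitation (mm); percolation: drainage at the lower
    boundary (mm), with negative values denoting capillary rise;
    storage_change: weighed change in soil water storage (mm).
    """
    return precip - percolation - storage_change

# Illustrative day with capillary rise (negative percolation) and a
# net loss of stored water:
et = lysimeter_et(precip=3.2, percolation=0.4, storage_change=-0.6)
```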

  1. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (launch planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficients necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and simulated AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the variance error is explained by uncertainties in bathymetry and roughness. Second, we show how the errors in water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing water-surface error above 10 cm leads to a correspondingly sharper increase of errors in bathymetry and roughness. 
    Increasing slope error above 1.5 cm/km leads to a significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The two experiments above are performed for AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
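The paper's algorithm is a Bayesian MCMC scheme; purely to illustrate why unknown bathymetry dominates the error budget, Manning's equation (assumed here for a rectangular channel, with illustrative numbers, not the study's reaches) shows how a depth error is amplified in discharge:

```python
def manning_discharge(width, depth, slope, n):
    """Q = (1/n) * A * R**(2/3) * sqrt(S) for a rectangular channel.

    width, depth in m; slope dimensionless; n is Manning's roughness.
    A is cross-sectional area, R the hydraulic radius.
    """
    area = width * depth
    radius = area / (width + 2.0 * depth)
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

# Relative discharge error from a 10% error in the (unobservable)
# bathymetric depth, holding the SWOT-type observables (width, slope)
# and roughness fixed:
q_true = manning_discharge(80.0, 4.0, 2e-4, 0.035)
q_biased = manning_discharge(80.0, 4.0 * 1.10, 2e-4, 0.035)
rel_err = q_biased / q_true - 1.0   # roughly a 16-17% discharge error
```

Because discharge scales faster than linearly with depth, modest bathymetry errors dominate the budget, consistent with the 81% figure above.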

  2. Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei

    2017-05-01

    The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications is important to quantify the relative importance of the uncertainty associated with radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirement of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and 1 h accumulation over an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping-bucket rain gauges from the Environment Agency; 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. 
The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.
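A single ensemble member can be sketched by adding a spatially correlated Gaussian error field to the radar QPE field. The sketch below assumes an exponential covariance with an illustrative 20 km range and 0.25 sill on a toy grid; the operational generator fits time-variant error statistics to radar-gauge differences rather than fixing them:

```python
import numpy as np

def correlated_error_field(nx, ny, dx_km, range_km, sill, rng):
    """Draw one Gaussian error field with exponential covariance
    C(h) = sill * exp(-h / range_km), via Cholesky factorization."""
    xs, ys = np.meshgrid(np.arange(nx) * dx_km, np.arange(ny) * dx_km)
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = sill * np.exp(-d / range_km)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))  # jitter for stability
    return (L @ rng.standard_normal(len(pts))).reshape(ny, nx)

rng = np.random.default_rng(0)
radar_qpe = np.full((10, 10), 2.0)   # toy rainfall field, mm/h
member = radar_qpe + correlated_error_field(10, 10, 1.0, 20.0, 0.25, rng)
```

Repeating the draw yields the collection of alternative rainfall fields that constitutes the ensemble; propagating each member through the hydrological model gives the flow spread used in the validation above.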

  3. Apoplastic water fraction and rehydration techniques introduce significant errors in measurements of relative water content and osmotic potential in plant leaves.

    PubMed

    Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J

    2015-12-01

    Relative water content (RWC) and the osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation of plants. We estimated the magnitude of errors that are introduced by dilution of π from apoplastic water in osmometry methods and the errors that occur during rehydration of leaves for RWC and π in 14 different plant species from trees, grasses and herbs. Our data indicate that rehydration technique and length of rehydration can introduce significant errors in both RWC and π. Leaves from all species were fully turgid after 1-3 h of rehydration and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the least errors while rehydration via floating disks and submerging leaves for rehydration led to a greater underprediction of RWC. The same effect was also observed for π. The π values following standing rehydration could be corrected by applying a dilution factor from apoplastic water dilution using an osmometric method but not by using apoplastic water fraction (AWF) from pressure volume (PV) curves. The apoplastic water dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the use of the standing rehydration method because (1) the correct rehydration time can be evaluated by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected by using osmometric methods to estimate apoplastic water dilution. © 2015 Scandinavian Plant Physiology Society.

  4. Toward the Application of the Implicit Particle Filter to Real Data in a Shallow Water Model of the Nearshore Ocean

    NASA Astrophysics Data System (ADS)

    Miller, R.

    2015-12-01

    Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining and preliminary results.

  5. Systematic evaluation of NASA precipitation radar estimates using NOAA/NSSL National Mosaic QPE products

    NASA Astrophysics Data System (ADS)

    Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

    2011-12-01

    Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.

  6. Applications systems verification and transfer project. Volume 1: Operational applications of satellite snow cover observations: Executive summary. [usefulness of satellite snow-cover data for water yield prediction

    NASA Technical Reports Server (NTRS)

    Rango, A.

    1981-01-01

    Both LANDSAT and NOAA satellite data were used in improving snowmelt runoff forecasts. When the satellite snow cover data were tested in both empirical seasonal runoff estimation and short term modeling approaches, a definite potential for reducing forecast error was evident. A cost benefit analysis run in conjunction with the snow mapping indicated a $36.5 million annual benefit accruing from a one percent improvement in forecast accuracy using the snow cover data for the western United States. The annual cost of employing the system would be $505,000. The snow mapping has proven that satellite snow cover data can be used to reduce snowmelt runoff forecast error in a cost effective manner once all operational satellite data are available within 72 hours after acquisition. Executive summaries of the individual snow mapping projects are presented.

  7. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
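The Poisson sampling argument can be sketched numerically: if the count N_i in each size bin is Poisson, the integral X = sum_i N_i * c * D_i**n has variance sum_i N_i * (c * D_i**n)**2, so FSD = sqrt(sum N_i x_i^2) / sum N_i x_i with x_i = D_i**n (the constant c cancels). The exponential size distribution and bin counts below are illustrative, not the paper's curves:

```python
import numpy as np

def fsd(counts, diameters_mm, n):
    """Fractional standard deviation of X = sum_i N_i * c * D_i**n
    under independent Poisson fluctuations of the bin counts N_i."""
    x = diameters_mm ** float(n)     # c cancels in the ratio
    total = np.sum(counts * x)
    return np.sqrt(np.sum(counts * x**2)) / total

D = np.linspace(0.2, 5.0, 25)            # drop-size bin centres, mm
counts = 200.0 * np.exp(-2.0 * D)        # expected counts, exponential DSD
fsd_z = fsd(counts, D, n=6)              # radar reflectivity (n = 6)
fsd_m = fsd(counts, D, n=3)              # water content (n = 3)
# Higher moments weight the rare large drops more, so fsd_z > fsd_m:
# reflectivity is harder to sample accurately than water content.
```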

  8. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  9. Effects of vertical distribution of water vapor and temperature on total column water vapor retrieval error

    NASA Technical Reports Server (NTRS)

    Sun, Jielun

    1993-01-01

    Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.

  10. Virtual design and construction of plumbing systems

    NASA Astrophysics Data System (ADS)

    Filho, João Bosco P. Dantas; Angelim, Bruno Maciel; Guedes, Joana Pimentel; de Castro, Marcelo Augusto Farias; Neto, José de Paula Barros

    2016-12-01

    Traditionally, the design coordination process is carried out by overlaying and comparing 2D drawings made by different project participants. Detecting information errors from a composite drawing is especially challenging and error prone. This procedure usually leaves many design errors undetected until construction begins, which typically leads to rework. Correcting conflict issues that were not identified during the design and coordination phase reduces the overall productivity for everyone involved in the construction process. Construction issues identified in the field generate Requests for Information (RFIs), one of the causes of delays. Applying Virtual Design and Construction (VDC) tools to the coordination process can bring significant value to architecture, structure, and mechanical, electrical, and plumbing (MEP) designs by reducing the number of undetected errors and requests for information. This paper focuses on evaluating requests for information (RFIs) associated with the water/sanitary facilities of a BIM model. The goal is to improve water/sanitary facility designs and to help the virtual construction team notice and identify design problems. This is an exploratory and descriptive study using a qualitative methodology. RFIs are classified into six categories: correction, omission, validation of information, modification, divergence of information, and verification. The results demonstrate VDC's contribution to improving plumbing system designs. Recommendations are given for identifying and avoiding these RFI types in the plumbing system design process or during virtual construction.

  11. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K Band Range Rate (KBRR) residuals (1 μm s-1 level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate indices and in situ observations). Regional TWS shows higher spatial correlations with in situ water table measurements over the Murray-Darling drainage basin (80-90%), and offers a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions formed as a linear combination of the GFZ, CSR and JPL GRACE solutions).

  12. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 
9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
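    The 1- and 2-body correction scheme discussed above rests on a many-body decomposition of cluster energies. The sketch below applies that decomposition to a toy energy function (a stand-in for DFT cluster energies, not the actual functionals); the "beyond-2-body" residual it isolates is exactly the part that the paper finds 1- and 2-body corrections cannot remove.

    ```python
    from itertools import combinations

    # Toy cluster energy with explicit 1-, 2-, and 3-body pieces, standing in
    # for DFT energies of water clusters; the decomposition itself is generic.
    def toy_energy(cluster):
        e = sum(1.0 for _ in cluster)                                    # 1-body
        e += sum(0.1 * abs(i - j) for i, j in combinations(cluster, 2))  # 2-body
        e += 0.01 * len(list(combinations(cluster, 3)))                  # 3-body
        return e

    def n_body_terms(cluster, energy):
        """Split energy(cluster) into 1-body, 2-body, and beyond-2-body parts."""
        e1 = sum(energy((i,)) for i in cluster)
        e2 = sum(energy((i, j)) - energy((i,)) - energy((j,))
                 for i, j in combinations(cluster, 2))
        beyond2 = energy(cluster) - e1 - e2   # what 1-/2-body fits cannot capture
        return e1, e2, beyond2

    cluster = (0, 1, 2, 3)
    e1, e2, b2 = n_body_terms(cluster, toy_energy)
    print(e1, round(e2, 3), round(b2, 3))
    ```

    Correcting a DFT approximation at the 1- and 2-body level replaces e1 and e2 with reference values; whatever error lives in the beyond-2-body residual, as in the liquid simulations above, survives the correction.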

  13. Application of microwave radiometry to improving climate data records.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liljegren, J. C.; Cadeddu, M. P.; Decision and Information Sciences

    2007-01-01

    Microwave radiometers deployed by the U. S. Department of Energy's Atmospheric Radiation Measurement (ARM) Program provide crucial data for a wide range of research applications. The accuracy and stability of these instruments also make them ideal for improving climate data records: to detect and correct discontinuities in the long-term climate records, to validate and calibrate the climate data, to characterize errors in the climate records, and to plan for the future Global Climate Observing System (GCOS) Reference Upper-Air network. This paper presents an overview of these capabilities with examples from ARM data. Two-channel microwave radiometers (MWR) operating at 23.8 and 31.4 GHz are deployed at each of eleven ARM Climate Research Facility (ACRF) field sites in the U.S. Southern Great Plains (SGP), Tropical Western Pacific (TWP), North Slope of Alaska (NSA), and with the ARM Mobile Facility in Niamey, Niger for the purpose of retrieving precipitable water vapor (PWV) and liquid water path (LWP). At these locations PWV ranges from as low as 1 mm (1 kg/m²) at the NSA to 70 mm or more in the TWP; LWP can exceed 2 mm at many sites. The MWR accommodates this wide dynamic range for all non-precipitating conditions with a root-mean-square error of about 0.4 mm for PWV and 0.02 mm (20 g/m²) for LWP. The calibration of the MWR is continuously and autonomously monitored and updated to maintain accuracy. Comparisons of collocated MWRs will be presented. Site-specific linear statistical retrievals are used operationally; more sophisticated retrievals are applied in post-processing the data. Because PWV is an integral measure, derived from both the relative humidity and temperature profiles of the radiosonde, it is a particularly useful reference quantity. Comparison of PWV measured by the MWR with PWV from radiosondes reveals dry biases and diurnal trends as well as general calibration variability in the radiosondes. To correct the bias and reduce the variability, ARM scales the relative humidity measurements from the radiosondes to produce agreement with the PWV measured by the MWR. Comparisons of infrared spectral radiances calculated using these scaled radiosondes with high spectral resolution measurements exhibit dramatically reduced bias and variability. This ability to detect and correct errors in the radiosonde measurements will be critical for detecting climate change. The MWR has also been used for a variety of ground- and satellite-based remote sensor retrieval development and validation studies, including precipitable water vapor and slant water vapor retrievals using the Global Positioning System (GPS). The MWR can provide a valuable comparison for GPS-derived zenith wet delay and PWV values, e.g., for evaluating improved mapping functions and detecting errors due, for example, to multi-path contributions. For precipitable water vapor amounts less than 4 mm, which commonly occur in cold, dry Arctic conditions, the 0.4 mm root-mean-square error of the MWR precipitable water vapor measurement is problematic. To obtain increased sensitivity under these conditions, a new G-band water vapor radiometer (GVR) operating at 183.31 ± 1, ±3, ±7, and ±14 GHz is deployed at the NSA Barrow site. The GVR offers a valuable reference for radiosonde and GPS water vapor measurements at Arctic locations that are expected to be particularly sensitive to climate change.
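    A site-specific linear statistical retrieval of the kind mentioned above can be sketched as a regression of PWV on the two channel brightness temperatures. The forward-model coefficients and 0.3 K noise below are invented for illustration; the operational ARM retrievals derive their coefficients from radiosonde climatology and radiative transfer calculations.

    ```python
    import random

    random.seed(0)

    def simulate_tb(pwv, lwp):
        """Hypothetical linear forward model: Tb (K) at 23.8 and 31.4 GHz."""
        tb23 = 15.0 + 6.0 * pwv + 40.0 * lwp + random.gauss(0, 0.3)
        tb31 = 12.0 + 2.5 * pwv + 80.0 * lwp + random.gauss(0, 0.3)
        return tb23, tb31

    def lstsq3(X, y):
        """Solve the 3-parameter normal equations by Gaussian elimination."""
        A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(3)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, 3):
                f = A[r][col] / A[col][col]
                for cc in range(col, 3):
                    A[r][cc] -= f * A[col][cc]
                b[r] -= f * b[col]
        x = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):
            x[r] = (b[r] - sum(A[r][cc] * x[cc] for cc in range(r + 1, 3))) / A[r][r]
        return x

    # Synthetic training set spanning the site's PWV/LWP range
    train = [(random.uniform(0.5, 6.0), random.uniform(0.0, 0.05)) for _ in range(500)]
    rows = [(1.0,) + simulate_tb(p, l) for p, l in train]
    coef = lstsq3(rows, [p for p, _ in train])

    tb23, tb31 = simulate_tb(3.0, 0.02)   # "observation" with true PWV = 3.0 cm
    pwv_hat = coef[0] + coef[1] * tb23 + coef[2] * tb31
    print(round(pwv_hat, 2))
    ```

    Two channels suffice here because the 23.8 GHz channel is more sensitive to vapor and the 31.4 GHz channel to liquid, so the pair can be inverted for PWV and LWP simultaneously.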

  14. Application of a Line Laser Scanner for Bed Form Tracking in a Laboratory Flume

    NASA Astrophysics Data System (ADS)

    de Ruijsscher, T. V.; Hoitink, A. J. F.; Dinnissen, S.; Vermeulen, B.; Hazenberg, P.

    2018-03-01

    A new measurement method for continuous detection of bed forms in movable bed laboratory experiments is presented and tested. The device consists of a line laser coupled to a 3-D camera, which makes use of triangulation. This makes it possible to measure bed forms during morphodynamic experiments without removing the water from the flume. A correction is applied for the effect of laser refraction at the air-water interface. We conclude that the absolute measurement error increases with increasing flow velocity, its standard deviation increases with water depth and flow velocity, and the percentage of missing values increases with water depth. Although 71% of the data is lost in a pilot moving bed experiment with sand, high agreement between flowing water and dry bed measurements is still found when a robust LOcally weighted regrESSion (LOESS) procedure is applied. This is promising for bed form tracking applications in laboratory experiments, especially when lightweight sediments such as polystyrene are used, which require smaller flow velocities to achieve dynamic similarity to the prototype. This is confirmed in a moving bed experiment with polystyrene.
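    The refraction correction mentioned above can be illustrated for the simplest case of a flat air-water interface and a single viewing ray; the actual device geometry and the handling of a wavy surface are more involved, so this is only the Snell's-law step.

    ```python
    import math

    N_WATER = 1.333   # refractive index of water (air taken as 1.0)

    def corrected_depth(apparent_depth, incidence_deg):
        """True depth below a flat water surface, given the depth inferred as if
        the ray travelled straight (apparent depth) and the in-air incidence
        angle of the viewing ray, measured from the vertical."""
        th_air = math.radians(incidence_deg)
        th_w = math.asin(math.sin(th_air) / N_WATER)  # Snell: sin(a) = n*sin(w)
        # The horizontal offset where the ray meets the bed is fixed by geometry:
        # x = d_apparent*tan(th_air) = d_true*tan(th_w), hence:
        return apparent_depth * math.tan(th_air) / math.tan(th_w)

    # Near-normal incidence the ratio tends to n (the classic "apparent depth"
    # factor); at steeper angles the correction grows.
    print(round(corrected_depth(0.10, 1.0), 4))   # ~0.10 * 1.333
    print(round(corrected_depth(0.10, 20.0), 4))
    ```

    A wavy surface changes the local incidence angle, which is why the abstract's error statistics depend on the flow conditions.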

  15. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature).
The stronger the disturbances on the interface (higher amplitude, shorter wavelength), the smaller the distance from the interface at which measurements can be performed.
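    The quoted error budget follows from the relation stated in the abstract: the fractional velocity error is the frame-to-frame relative position error divided by the particle displacement between frames. In the sketch below the mean displacement (3.6% of H) is an assumed value, chosen only to show how a 0.04%-of-H position error maps to a roughly 1% velocity error; the paper does not quote the displacement itself.

    ```python
    def velocity_error_fraction(rel_pos_err, displacement):
        """Velocity error as a fraction of the velocity: the relative position
        error between two frames over the displacement travelled between them.
        Both arguments share the same length unit (here fractions of H)."""
        return rel_pos_err / displacement

    # 0.04% of H relative position error, assumed 3.6%-of-H mean displacement
    frac = velocity_error_fraction(0.0004, 0.036)
    print(round(frac * 100, 2))  # percent of the mean velocity
    ```

    The absolute position error (1.2% of H) does not enter this ratio, which is why distorted vector positions can coexist with accurate vector magnitudes.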

  16. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Each 25-year span of data is regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model are demonstrated in this paper.
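    Steps 1-2 of the procedure (the GM(1,1) trend model) can be sketched as follows. The series is illustrative, not the Yuqiao record, and the Markov correction of the relative-error series (steps 3-4) is omitted.

    ```python
    import math

    def gm11(x0, horizon=1):
        """GM(1,1): fit x0(k) = -a*z(k) + b on the background values z of the
        accumulated series, then recover trend values by differencing."""
        n = len(x0)
        x1 = [sum(x0[:k + 1]) for k in range(n)]              # step 1: AGO series
        z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
        m = n - 1
        sz, szz = sum(z), sum(v * v for v in z)
        sy = sum(x0[1:])
        szy = sum(v * y for v, y in zip(z, x0[1:]))
        det = m * szz - sz * sz
        a = (sz * sy - m * szy) / det     # development coefficient
        b = (szz * sy - sz * szy) / det   # grey input
        def x1_hat(k):
            return (x0[0] - b / a) * math.exp(-a * k) + b / a
        # step 2: trend values by differencing the fitted accumulated series
        return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]

    series = [2.87, 3.02, 3.15, 3.31, 3.40]   # hypothetical annual maxima (m)
    trend = gm11(series, horizon=1)
    print([round(v, 3) for v in trend])       # fitted values plus one forecast
    ```

    The residuals between `series` and `trend` form the relative-error series on which the Markov fluctuation model of steps 3-4 would be built.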

  17. The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier

    DTIC Science & Technology

    2013-02-14

    [Slide-extraction residue: tables of peak water level percent error at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass, covering the CD formulation comparison, baseline simulation results, and wave sensitivity studies.]

  18. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  19. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
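    The LHS propagation described above can be sketched for a one-variable logistic model: each uncertain input (regression coefficients for model error, the explanatory variable for data error) is sampled once per equal-probability stratum, the strata are shuffled across inputs, and the logistic predictions form the output distribution. All means and standard errors below are hypothetical.

    ```python
    import math, random

    random.seed(42)
    N = 1000   # number of Latin hypercube strata / samples

    def lhs_normal(mean, sd, n):
        """Latin hypercube sample of a normal: one draw per equal-probability
        stratum, then shuffled to decorrelate strata across variables."""
        def inv_cdf(p):
            # standard-normal inverse CDF by bisection on the erf-based CDF
            lo, hi = -8.0, 8.0
            for _ in range(60):
                mid = (lo + hi) / 2
                if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                    lo = mid
                else:
                    hi = mid
            return mid
        samples = [mean + sd * inv_cdf((i + random.random()) / n) for i in range(n)]
        random.shuffle(samples)
        return samples

    b0 = lhs_normal(-2.0, 0.3, N)   # intercept uncertainty (model error)
    b1 = lhs_normal(0.8, 0.1, N)    # coefficient uncertainty (model error)
    x = lhs_normal(1.5, 0.2, N)     # explanatory-variable uncertainty (data error)

    # propagate through the logistic model; the sorted values give prediction intervals
    p = sorted(1 / (1 + math.exp(-(a + b * v))) for a, b, v in zip(b0, b1, x))
    print("median %.2f, 90%% interval [%.2f, %.2f]"
          % (p[N // 2], p[int(0.05 * N)], p[int(0.95 * N)]))
    ```

    Compared with plain Monte Carlo, the stratification guarantees that the tails of every input distribution are represented even at modest sample sizes.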

  20. Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian

    2015-08-13

    Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction,more » a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. 
The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including both bulk Pourbaix diagrams and surface phase diagrams of interest for corrosion and electrocatalysis.« less

  1. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, providing a very close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
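    The validation statistics listed above are straightforward to compute; a small helper (with toy observed/predicted values, not the Yamuna data) might look like this.

    ```python
    import math

    def fit_stats(obs, pred):
        """R-squared, RMSE, MAE, maximum absolute error, MAPE and maximum
        absolute percentage error for paired observed/predicted series."""
        errs = [o - p for o, p in zip(obs, pred)]
        apes = [abs(e) / abs(o) * 100 for e, o in zip(errs, obs)]
        ss_res = sum(e * e for e in errs)
        mean_o = sum(obs) / len(obs)
        ss_tot = sum((o - mean_o) ** 2 for o in obs)
        return {
            "r_squared": 1 - ss_res / ss_tot,
            "rmse": math.sqrt(ss_res / len(obs)),
            "mae": sum(abs(e) for e in errs) / len(errs),
            "max_ae": max(abs(e) for e in errs),
            "mape": sum(apes) / len(apes),
            "max_ape": max(apes),
        }

    obs = [7.2, 7.4, 7.1, 7.6, 7.3]    # e.g. monthly pH readings (illustrative)
    pred = [7.1, 7.5, 7.2, 7.5, 7.3]
    s = fit_stats(obs, pred)
    print({k: round(v, 3) for k, v in s.items()})
    ```

    MAPE is undefined when an observed value is zero, which is one reason scale-free criteria such as the normalized BIC are reported alongside it.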

  2. Application of simple adaptive control to water hydraulic servo cylinder system

    NASA Astrophysics Data System (ADS)

    Ito, Kazuhisa; Yamada, Tsuyoshi; Ikeo, Shigeru; Takahashi, Koji

    2012-09-01

    Although conventional model reference adaptive control (MRAC) achieves good tracking performance for cylinder control, the controller structure is much more complicated and has less robustness to disturbance in real applications. This paper discusses the use of simple adaptive control (SAC) for positioning a water hydraulic servo cylinder system. Compared with MRAC, SAC has a simpler and lower order structure, i.e., higher feasibility. The control performance of SAC is examined and evaluated on a water hydraulic servo cylinder system. With the recent increased concerns over global environmental problems, the water hydraulic technique using pure tap water as a pressure medium has become a new drive source comparable to electric, oil hydraulic, and pneumatic drive systems. This technique is also preferred because of its high power density, high safety against fire hazards in production plants, and easy availability. However, the main problems for precise control in a water hydraulic system are steady state errors and overshoot due to its large friction torque and considerable leakage flow. MRAC has been already applied to compensate for these effects, and better control performances have been obtained. However, there have been no reports on the application of SAC for water hydraulics. To make clear the merits of SAC, the tracking control performance and robustness are discussed based on experimental results. SAC is confirmed to give better tracking performance compared with PI control, and a control precision comparable to MRAC (within 10 μm of the reference position) and higher robustness to parameter change, despite the simple controller. The research results ensure a wider application of simple adaptive control in real mechanical systems.

  3. Regionalization of harmonic-mean streamflows in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Ruhl, Kevin J.

    1993-01-01

    Harmonic-mean streamflow (Qh), defined as the reciprocal of the arithmetic mean of the reciprocal daily streamflow values, was determined for selected stream sites in Kentucky. Daily mean discharges for the available period of record through the 1989 water year at 230 continuous record streamflow-gaging stations located in and adjacent to Kentucky were used in the analysis. Periods of record affected by regulation were identified and analyzed separately from periods of record unaffected by regulation. Record-extension procedures were applied to short-term stations to reduce time-sampling error and, thus, improve estimates of the long-term Qh. Techniques to estimate the Qh at ungaged stream sites in Kentucky were developed. A regression model relating Qh to total drainage area and streamflow-variability index was presented with example applications. The regression model has a standard error of estimate of 76 percent and a standard error of prediction of 78 percent.
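    The definition of Qh above translates directly into code. Zero-flow days are the usual caveat: any zero discharge makes a reciprocal undefined, and Qh is conventionally taken as zero in that case (the daily values below are illustrative, not Kentucky records).

    ```python
    def harmonic_mean_flow(daily_q):
        """Qh: reciprocal of the arithmetic mean of reciprocal daily flows.
        Qh is dominated by the low flows, which is why it is favored for
        low-flow-sensitive applications such as waste-load allocation."""
        if any(q == 0 for q in daily_q):
            return 0.0
        return len(daily_q) / sum(1.0 / q for q in daily_q)

    daily = [120.0, 80.0, 45.0, 300.0, 60.0]   # daily mean discharges, cfs
    print(round(harmonic_mean_flow(daily), 1))
    ```

    Note how the 300 cfs day barely raises Qh (about 79 cfs here) compared with the arithmetic mean (121 cfs): low-flow days dominate the harmonic mean.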

  4. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.

  5. Satellite Altimetry And Radiometry for Inland Hydrology, Coastal Sea-Level And Environmental Studies

    NASA Astrophysics Data System (ADS)

    Tseng, Kuo-Hsin

    In this study, we demonstrate three environment-related applications employing altimetry and remote sensing satellites, and illustrate prospective uses enabled by recent advances in instrumentation and data-analysis technologies. Our discussion starts from the improved waveform retracking techniques needed for altimetry measurements over coastal and inland waters. We developed two novel auxiliary procedures for waveform retracking algorithms, namely the Subwaveform Filtering (SF) method and the Track Offset Correction (TOC), to operationally detect altimetry waveform anomalies and further reduce possible errors in determination of the track offset. We then present two demonstrative studies related to ionospheric and tropospheric composition, respectively, as their variations are important error sources for satellite electromagnetic signals. We first compare the total electron content (TEC) measured by multiple altimetry and GNSS sensors, and conclude that the ionospheric delay measured by Jason-2 is about 6-10 mm shorter than that from GPS models. We then use several atmospheric variables to study climate change over high elevation areas, drawing on five types of satellite data and reanalysis models to examine climate change indicators. We conclude that the spatial distribution of the temperature trend differs considerably among data products, probably owing to the choice of different time spans. Following these discussions of measuring techniques and relative biases between data products, we applied our improved altimetry techniques to three environmental science applications with the help of remote sensing imagery. We first demonstrate the detectability of hydrological events by satellite altimetry and radiometry.
    The characterization of the one-dimensional (along-track) water boundary using the previously developed Backscattering Coefficient (BC) method is assisted by a two-dimensional (horizontal) estimate of water extent from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the computed surface level variation at the identified water regions matches well with nearby in situ gauge data. In the second application, we use synergistic altimetry and remote sensing data to characterize water body variations in Poyang Lake, China. Accurate measurement of lake surface variation determines the potential habitat extent of certain Oncomelania snails, the intermediate host of the infectious disease schistosomiasis. We use Envisat and various MODIS products to obtain previously identified geophysical factors that affect the snail's ecosystem and to simulate favorable habitat for the intermediate-host snail. The simulated habitat extent matches fairly well with official estimates reported by China's health administration during 2002-2007. The third application, using multispectral reflectance data from MERIS onboard Envisat, demonstrates the potential for quantifying harmful algal blooms (HABs) along the shore of Lake Erie, with regard to certain cyanobacteria (i.e. Microcystis) that endanger water quality. Comparison with in situ laboratory analysis yields a correlation coefficient, a standard deviation of error (SDE), and a bias. Finally, we display the lake level variation using Envisat altimetry and the MODIS water temperature product to demonstrate the possibility of modelling the onset timing of toxic cyanobacteria blooms using spaceborne observations.

  6. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    NASA Astrophysics Data System (ADS)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. 
Additionally, a spatially distributed, physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecast error are variable, while baseline model results show consistent under-prediction in the most recent decade, highlighting possible compounding effects of climate and land cover change.

  7. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is presented. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The results show that the maximum absolute full-scale error is within 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and within 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and possible reasons for the errors in the onsite experiments are analyzed. The EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
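The accuracy figures above are full-scale errors: error expressed as a percentage of the instrument's full-scale range rather than of the individual reading. A minimal sketch, with illustrative numbers that are not from the experiments:

```python
# Full-scale (span) error: measurement error expressed as a percentage
# of the instrument's full-scale range, as used to state the EMF accuracy.
# All numbers below are illustrative, not from the reported experiments.

def full_scale_error_pct(measured, reference, full_scale):
    """Error as a percent of the full-scale reading."""
    return 100.0 * (measured - reference) / full_scale

# A flowmeter with a 60 m^3/d full-scale range reading 43.5 m^3/d
# against a reference measurement of 45.0 m^3/d:
err = full_scale_error_pct(43.5, 45.0, 60.0)
print(round(err, 2))  # -2.5, within a 5% full-scale specification
```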

  8. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is presented. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The results show that the maximum absolute full-scale error is within 5% when the total flowrate is 5–60 m³/d and the water-cut is higher than 60%, and within 7% when the total flowrate is 2–60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and possible reasons for the errors in the onsite experiments are analyzed. The EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  9. Rapid Response Flood Water Mapping

    NASA Technical Reports Server (NTRS)

    Policelli, Fritz; Brakenridge, G. R.; Coplin, A.; Bunnell, M.; Wu, L.; Habib, Shahid; Farah, H.

    2010-01-01

    Since the beginning of operation of the MODIS instrument on the NASA Terra satellite at the end of 1999, an exceptionally useful sensor and public data stream have been available for many applications, including the rapid and precise characterization of terrestrial surface-water changes. One practical application of this capability is the near-real-time mapping of river flood inundation. We have developed a surface-water mapping methodology based on using only bands 1 (620-672 nm) and 2 (841-890 nm). These are the two bands provided at 250 m resolution, and using only these bands maximizes the resulting map detail. Most water bodies are strong absorbers of incoming solar radiation at the band 2 wavelength, so band 2 alone, via a thresholding procedure, can separate water (dark, low-radiance or low-reflectance pixels) from land (much brighter pixels) (1, 2). Some previous water-mapping procedures have in fact used such single-band data from this and other sensors with similar wavelength channels. Adding the second channel (band 1), however, allows a band-ratio approach that still discriminates sediment-laden water, which is often relatively bright at band 2 wavelengths, and also removes some error by reducing the number of cloud-shadow pixels that would otherwise be misclassified as water.
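The two-band logic described above can be sketched as a per-pixel rule: dark band 2 (NIR) pixels are open water, and a low NIR/red ratio recovers turbid water that is bright in NIR. The threshold values here are assumptions for illustration, not the operational ones.

```python
# Sketch of a two-band MODIS water/land classification: band 2 (NIR)
# thresholding flags dark open-water pixels, and the band 2 / band 1
# ratio keeps sediment-laden water (brighter in NIR) classified as water.
# Thresholds are illustrative assumptions, not the operational values.

def classify_water(b1, b2, nir_thresh=0.10, ratio_thresh=0.7):
    """Return True for water, given band 1 (red) and band 2 (NIR) reflectance."""
    if b2 < nir_thresh:            # very dark in NIR: open water
        return True
    return b2 / b1 < ratio_thresh  # low NIR/red ratio: turbid water

pixels = [
    (0.08, 0.03),  # clear water: dark in NIR
    (0.30, 0.15),  # sediment-laden water: brighter NIR, but low NIR/red
    (0.10, 0.35),  # vegetation/land: high NIR
]
print([classify_water(b1, b2) for b1, b2 in pixels])  # [True, True, False]
```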

  10. First-principles energetics of water clusters and ice: A many-body analysis

    NASA Astrophysics Data System (ADS)

    Gillan, M. J.; Alfè, D.; Bartók, A. P.; Csányi, G.

    2013-12-01

    Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.
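The many-body separation used in the analysis above can be sketched numerically. Here `energy` is a toy stand-in for an electronic-structure total-energy call (DFT, coupled cluster, ...), and monomers are reduced to single numbers; only the decomposition logic itself reflects the method described.

```python
# Many-body separation of a cluster total energy into 1-body, 2-body,
# and beyond-2-body parts. `energy` is a hypothetical toy model standing
# in for any electronic-structure calculation.

from itertools import combinations

def energy(monomers):
    """Toy total energy: 1-body terms plus pair and triple interactions."""
    e = sum(monomers)                                        # 1-body
    e += sum(-0.1 * a * b for a, b in combinations(monomers, 2))
    e += sum(0.01 * a * b * c for a, b, c in combinations(monomers, 3))
    return e

def many_body_decomposition(monomers):
    """Split energy(monomers) into (1-body, 2-body, beyond-2-body)."""
    e1 = sum(energy([m]) for m in monomers)
    e2 = sum(energy([a, b]) - energy([a]) - energy([b])
             for a, b in combinations(monomers, 2))
    e_beyond2 = energy(monomers) - e1 - e2
    return e1, e2, e_beyond2

e1, e2, rest = many_body_decomposition([1.0, 2.0, 3.0])
```

Applying the same decomposition to both a DFT approximation and a benchmark method, term by term, isolates where the DFT error lives, which is the strategy the record describes.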

  11. Maine StreamStats: a water-resources web application

    USGS Publications Warehouse

    Lombard, Pamela J.

    2015-01-01

    Reports referenced in this fact sheet present the regression equations used to estimate the flow statistics, describe the errors associated with the estimates, and describe the methods used to develop the equations and to measure the basin characteristics used in the equations. Limitations of the methods are also described in the reports; for example, all of the equations are appropriate only for ungaged, unregulated, rural streams in Maine.

  12. Application of Terrestrial Microwave Remote Sensing to Agricultural Drought Monitoring

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Bolten, J. D.

    2014-12-01

    Root-zone soil moisture information is a valuable diagnostic for detecting the onset and severity of agricultural drought. Current attempts to globally monitor root-zone soil moisture are generally based on the application of soil water balance models driven by observed meteorological variables. Such systems, however, are prone to random error associated with incorrect process model physics, poor parameter choices, and noisy meteorological inputs. The presentation will describe attempts to remediate these sources of error via the assimilation of remotely sensed surface soil moisture retrievals from satellite-based passive microwave sensors into a global soil water balance model. Results demonstrate the ability of satellite-based soil moisture retrieval products to significantly improve the global characterization of root-zone soil moisture, particularly in data-poor regions lacking adequate ground-based rain gage instrumentation. This success has led to an ongoing effort to implement an operational land data assimilation system at the United States Department of Agriculture's Foreign Agricultural Service (USDA FAS) to globally monitor variations in root-zone soil moisture availability via the integration of satellite-based precipitation and soil moisture information. Prospects for improving the performance of the USDA FAS system via the simultaneous assimilation of both passive- and active-based soil moisture retrievals derived from the upcoming NASA Soil Moisture Active/Passive mission will also be discussed.

  13. Application of remote sensing in the study of vegetation and soils in Idaho

    NASA Technical Reports Server (NTRS)

    Tisdale, E. W. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Comparison of ERTS-1 imagery and USGS 1:250,000 scale maps of study areas with known ground points revealed significant map errors. These errors were sufficient to render impractical the projection of ERTS-1 imagery directly onto maps of the area. Marked differences were found in the delineation of ground features by different MSS bands. Generally, Band 4 was least useful, while Band 5 proved valuable for indicating patterns of native vegetation, cultivated areas - both dry and irrigated, lava fields, drainage basins, and deep bodies of water. Band 6 was better for landforms and drainages and for shallow bodies of water than Band 5 but inferior for indicating patterns in native vegetation and most types of cultivated land. Band 7 was best of all for indicating lava flows, water bodies, and landform features. Use of an additive color viewer-projector aided greatly in separation of images. A combination of Bands 5 and 7 with appropriate color filters proved best for separating most types of native vegetation and cultivated crops. Landform features and water bodies also showed well with this combination. The addition of Band 4 imagery to these further enhanced the identification of semi-dormant vegetation.

  14. Simulating water and nitrogen loss from an irrigated paddy field under continuously flooded condition with Hydrus-1D model.

    PubMed

    Yang, Rui; Tong, Juxiu; Hu, Bill X; Li, Jiayun; Wei, Wenshuo

    2017-06-01

    Agricultural non-point source pollution is a major factor in surface water and groundwater pollution, especially for nitrogen (N) pollution. In this paper, an experiment was conducted in a direct-seeded paddy field under traditional continuously flooded irrigation (CFI). The water movement and N transport and transformation were simulated via the Hydrus-1D model, and the model was calibrated using field measurements. The model had a total water balance error of 0.236 cm and a relative error (error/input total water) of 0.23%. For the solute transport model, the N balance error and relative error (error/input total N) were 0.36 kg ha⁻¹ and 0.40%, respectively. The study results indicate that the plow pan plays a crucial role in vertical water movement in paddy fields. Water flow was mainly lost through surface runoff and underground drainage, with proportions to total input water of 32.33 and 42.58%, respectively. The water productivity in the study was 0.36 kg m⁻³. The simulated N concentration results revealed that ammonia was the main form in rice uptake (95% of total N uptake), and its concentration was much larger than for nitrate under CFI. Denitrification and volatilization were the main losses, with proportions to total consumption of 23.18 and 14.49%, respectively. Leaching (10.28%) and surface runoff loss (2.05%) were the main losses of N pushed out of the system by water. Hydrus-1D simulation was an effective method to predict water flow and N concentrations in the three different forms. The study provides results that could be used to guide water and fertilization management and field results for numerical studies of water flow and N transport and transformation in the future.
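The balance-closure figures above are relative errors: absolute balance error divided by total input. A minimal sketch; the totals used here are illustrative assumptions, not values from the paper.

```python
# Water/nitrogen balance closure check: balance error as a percentage of
# total input, as reported for the Hydrus-1D simulations. Totals below
# are assumed for illustration only.

def relative_error_pct(balance_error, total_input):
    """Balance error as a percent of total input."""
    return 100.0 * balance_error / total_input

water = relative_error_pct(0.236, 100.0)   # cm of water, assumed total input
nitrogen = relative_error_pct(0.36, 90.0)  # kg/ha of N, assumed total input
print(round(water, 2), round(nitrogen, 2))  # 0.24 0.4
```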

  15. Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.

    PubMed

    Rawlins, S L

    1964-10-30

    To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.

  16. Stochastic estimation of plant-available soil water under fluctuating water table depths

    NASA Astrophysics Data System (ADS)

    Or, Dani; Groeneveld, David P.

    1994-12-01

    Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
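A minimal scalar Kalman filter of the kind underlying the proposed scheme, assuming a simple additive water-balance forcing; the state, forcing, and noise values are all illustrative, not from the Owens Valley data.

```python
# One predict/update cycle of a scalar Kalman filter applied to soil
# water storage: the water balance propagates the estimate, and a noisy
# soil water measurement updates it in proportion to the uncertainties.

def kf_step(x, P, u, Q, z, R):
    """x, P : storage estimate (mm) and its error variance
    u, Q : net water-balance forcing (recharge - plant use) and model noise
    z, R : soil water measurement and measurement noise variance
    """
    # Predict: propagate storage with the water balance.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend prediction and measurement by their uncertainties.
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred          # uncertainty shrinks after update
    return x_new, P_new

x, P = 120.0, 25.0                      # prior storage (mm) and variance
x, P = kf_step(x, P, u=-3.0, Q=4.0, z=115.0, R=9.0)
```

The updated variance `P` is the model's own error bound on the storage estimate, which is what makes the predictions usable for decisions such as curtailing well-field operation.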

  17. Modeling of temperature-induced near-infrared and low-field time-domain nuclear magnetic resonance spectral variation: chemometric prediction of limonene and water content in spray-dried delivery systems.

    PubMed

    Andrade, Letícia; Farhat, Imad A; Aeberhardt, Kasia; Bro, Rasmus; Engelsen, Søren Balling

    2009-02-01

    The influence of temperature on near-infrared (NIR) and nuclear magnetic resonance (NMR) spectroscopy complicates the industrial applications of both spectroscopic methods. The focus of this study is to analyze and model the effect of temperature variation on NIR spectra and NMR relaxation data. Different multivariate methods were tested for constructing robust prediction models based on NIR and NMR data acquired at various temperatures. Data were acquired on model spray-dried limonene systems at five temperatures in the range from 20 °C to 60 °C and partial least squares (PLS) regression models were computed for limonene and water predictions. The predictive ability of the models computed on the NIR spectra (acquired at various temperatures) improved significantly when data were preprocessed using extended inverted signal correction (EISC). The average PLS regression prediction error was reduced to 0.2%, corresponding to 1.9% and 3.4% of the full range of limonene and water reference values, respectively. The removal of variation induced by temperature prior to calibration, by direct orthogonalization (DO), slightly enhanced the predictive ability of the models based on NMR data. Bilinear PLS models, with implicit inclusion of the temperature, enabled limonene and water predictions by NMR with an error of 0.3% (corresponding to 2.8% and 7.0% of the full range of limonene and water). For NMR, and in contrast to the NIR results, modeling the data using multi-way N-PLS improved the models' performance. N-PLS models, in which temperature was included as an extra variable, enabled more accurate prediction, especially for limonene (prediction error was reduced to 0.2%). Overall, this study proved that it is possible to develop models for limonene and water content prediction based on NIR and NMR data, independent of the measurement temperature.

  18. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, Leonard F.; Neuzil, Christopher E.

    2007-01-01

    Although depletion of storage in low‐permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  19. [Near infrared spectroscopy study on water content in turbine oil].

    PubMed

    Chen, Bin; Liu, Ge; Zhang, Xian-Ming

    2013-11-01

    Near infrared (NIR) spectroscopy combined with the successive projections algorithm (SPA) was investigated for determination of water content in turbine oil. Fifty-seven turbine oil samples with water contents of 0-0.156% were scanned by NIR spectroscopy. Pretreatment methods including the original spectra, first-derivative spectra, and Savitzky-Golay (SG) polynomial least-squares smoothing were compared, SPA was applied to extract effective wavelengths, and the correlation coefficient (R) and root mean square error (RMSE) were used as model evaluation indices. The spectra of samples with different water contents were pretreated by first derivative + SG, and the selected effective wavelengths were then used as inputs to a least squares support vector machine (LS-SVM). A total of 16 variables selected by SPA were employed to construct the SPA-LS-SVM model. The correlation coefficient of the model was 0.9759 and the RMSE of the validation set was 2.6558 × 10⁻³, showing that it is feasible to determine the water content in oil using near infrared spectroscopy and SPA-LS-SVM with excellent prediction precision. This study supplies a new, alternative approach for the further application of near infrared spectroscopy to on-line monitoring of contamination such as water content in oil.

  20. The effect of tropospheric fluctuations on the accuracy of water vapor radiometry

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1992-01-01

    Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3×10⁻¹³ sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of WVRs in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for water vapor radiometer (WVR) horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm² of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm² of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error, which cannot be calibrated but can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10⁴ over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.

  1. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter, and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition.

  2. Automatic Coregistration Algorithm to Remove Canopy Shaded Pixels in UAV-Borne Thermal Images to Improve the Estimation of Crop Water Stress Index of a Drip-Irrigated Cabernet Sauvignon Vineyard.

    PubMed

    Poblete, Tomas; Ortega-Farías, Samuel; Ryu, Dongryeol

    2018-01-30

    Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.
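For reference, one common formulation of the CWSI (not necessarily the exact variant used in the study) normalizes canopy temperature between a well-watered (wet) and a non-transpiring (dry) reference; the reference temperatures below are illustrative assumptions.

```python
# Crop water stress index in a widely used form: canopy temperature
# scaled between wet and dry reference temperatures. 0 = unstressed,
# 1 = fully stressed. Temperatures below are illustrative only.

def cwsi(t_canopy, t_wet, t_dry):
    """Normalized canopy temperature; returns 0 for wet, 1 for dry reference."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Canopy at 32 C between references of 28 C (wet) and 38 C (dry):
print(round(cwsi(32.0, 28.0, 38.0), 2))  # 0.4
```

Because shaded canopy pixels are cooler than sunlit ones, leaving them in the thermal mosaic biases `t_canopy` low, which is why their removal improves the CWSI-SWP relationship reported above.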

  3. A Graphical Method for Estimation of Barometric Efficiency from Continuous Data - Concepts and Application to a Site in the Piedmont, Air Force Plant 6, Marietta, Georgia

    USGS Publications Warehouse

    Gonthier, Gerard

    2007-01-01

    A graphical method that uses continuous water-level and barometric-pressure data was developed to estimate barometric efficiency. A plot of nearly continuous water level (on the y-axis), as a function of nearly continuous barometric pressure (on the x-axis), will plot as a line curved into a series of connected elliptical loops. Each loop represents a barometric-pressure fluctuation. The negative of the slope of the major axis of an elliptical loop will be the ratio of water-level change to barometric-pressure change, which is the sum of the barometric efficiency plus the error. The negative of the slope of the preferred orientation of many elliptical loops is an estimate of the barometric efficiency. The slope of the preferred orientation of many elliptical loops is approximately the median of the slopes of the major axes of the elliptical loops. If water-level change that is not caused by barometric-pressure change does not correlate with barometric-pressure change, the probability that the error will be greater than zero will be the same as the probability that it will be less than zero. As a result, the negative of the median of the slopes for many loops will be close to the barometric efficiency. The graphical method provided a rapid assessment of whether a well was affected by barometric-pressure change and also provided a rapid estimate of barometric efficiency. The graphical method was used to assess which wells at Air Force Plant 6, Marietta, Georgia, had water levels affected by barometric-pressure changes during a 2003 constant-discharge aquifer test. The graphical method was also used to estimate barometric efficiency. Barometric-efficiency estimates from the graphical method were compared to those of four other methods: average of ratios, median of ratios, Clark, and slope. 
The two methods (the graphical and median-of-ratios methods) that used the median values of water-level change divided by barometric-pressure change appeared to be most resistant to error caused by barometric-pressure-independent water-level change. The graphical method was particularly resistant to large amounts of barometric-pressure-independent water-level change, having an average and standard deviation of error for control wells that was less than one-quarter that of the other four methods. When using the graphical method, it is advisable that more than one person select the slope or that the same person fits the same data several times to minimize the effect of subjectivity. Also, a long study period should be used (at least 60 days) to ensure that loops affected by large amounts of barometric-pressure-independent water-level change do not significantly contribute to error in the barometric-efficiency estimate.
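The median-of-ratios estimator that the graphical method is compared against can be sketched directly: barometric efficiency as the negative median of water-level change divided by barometric-pressure change. The data values below are illustrative, not from the Air Force Plant 6 site.

```python
# Median-of-ratios barometric-efficiency estimate. The median resists
# pairs corrupted by barometric-pressure-independent water-level change,
# which is the robustness property discussed above. Data are illustrative.

from statistics import median

def barometric_efficiency(dwl, dbp):
    """Negative median ratio of water-level to barometric-pressure change."""
    ratios = [w / b for w, b in zip(dwl, dbp) if b != 0]
    return -median(ratios)

# Paired changes in consistent units; the fourth pair is corrupted by
# pressure-independent water-level change and is ignored by the median.
dwl = [-0.50, -0.46, -0.52, 0.30, -0.48]   # water-level changes
dbp = [1.0, 0.9, 1.1, 0.8, 1.0]            # barometric-pressure changes
print(round(barometric_efficiency(dwl, dbp), 2))  # 0.48
```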

  4. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.

  5. Mapping river bathymetry with a small footprint green LiDAR: Applications and challenges

    USGS Publications Warehouse

    Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.

    2013-01-01

    that environmental conditions and postprocessing algorithms can influence the accuracy and utility of these surveys and must be given consideration. These factors can lead to mapping errors that can have a direct bearing on derivative analyses such as hydraulic modeling and habitat assessment. We discuss the water and substrate characteristics of the sites, compare the conventional and remotely sensed river-bed topographies, and investigate the laser waveforms reflected from submerged targets to provide an evaluation as to the suitability and accuracy of the EAARL system and associated processing algorithms for riverine mapping applications.

  6. Applicability of common stomatal conductance models in maize under varying soil moisture conditions.

    PubMed

    Wang, Qiuling; He, Qijin; Zhou, Guangsheng

    2018-07-01

In the context of climate warming, varying soil moisture caused by changing precipitation patterns will affect the applicability of stomatal conductance models, thereby affecting the simulation accuracy of carbon-nitrogen-water cycles in ecosystems. We studied the applicability of four common stomatal conductance models, the Jarvis, Ball-Woodrow-Berry (BWB), Ball-Berry-Leuning (BBL) and unified stomatal optimization (USO) models, based on summer maize leaf gas exchange data from a manipulation experiment with consecutively decreasing soil moisture. The results showed that the USO model performed best, followed by the BBL and BWB models, while the Jarvis model performed worst under varying soil moisture conditions. Soil moisture affected the relative performance of the models. Introducing a water response function improved the performance of the Jarvis, BWB and USO models, decreasing the normalized root mean square error (NRMSE) by 15.7%, 16.6% and 3.9%, respectively; for the BBL model, however, the effect was negative, increasing the NRMSE by 5.3%. The Jarvis, BWB, BBL and USO models were applicable within different ranges of soil relative water content (55%-65%, 56%-67%, 37%-79% and 37%-95%, respectively) based on the 95% confidence limits. Moreover, introducing a water response function improved the applicability of the Jarvis and BWB models. The USO model performed best with or without the water response function and was applicable under varying soil moisture conditions. Our results provide a basis for selecting appropriate stomatal conductance models under drought conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
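The BWB and USO (Medlyn-type) model forms referenced in this entry are commonly written as below. Parameter values and units are purely illustrative, and the NRMSE normalization (by the observation mean) is one common choice among several:

```python
import math

def g_s_bwb(A, rh, cs, g0=0.01, g1=9.0):
    """Ball-Woodrow-Berry form: gs = g0 + g1 * A * h_s / c_s.
    A: net assimilation, rh: relative humidity at leaf surface (0-1),
    cs: CO2 at leaf surface. Parameter values here are illustrative."""
    return g0 + g1 * A * rh / cs

def g_s_uso(A, vpd, ca, g0=0.01, g1=4.0):
    """Unified stomatal optimization (Medlyn et al. form):
    gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / c_a, with D (vpd) in kPa."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(vpd)) * A / ca

def nrmse(observed, predicted):
    """RMSE normalized by the mean of the observations, as one way to
    compare model performance across conditions."""
    n = len(observed)
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
    return math.sqrt(mse) / (sum(observed) / n)
```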

  7. Time Series Forecasting of Daily Reference Evapotranspiration by Neural Network Ensemble Learning for Irrigation System

    NASA Astrophysics Data System (ADS)

    Manikumari, N.; Murugappan, A.; Vinodhini, G.

    2017-07-01

Time series forecasting has gained remarkable interest among researchers in the last few decades, and neural-network-based time series forecasting has been employed in various application areas. Reference evapotranspiration (ETO) is one of the most important components of the hydrologic cycle, and its precise assessment is vital in water balance and crop yield estimation and in water resources system design and management. This work aimed at achieving an accurate time series forecast of ETO using a combination of neural network approaches, carried out using data collected in the command area of VEERANAM Tank in India during the period 2004-2014. The neural network (NN) models were combined by ensemble learning in order to improve the accuracy of forecasting daily ETO (for the year 2015); Bagged Neural Network (Bagged-NN) and Boosted Neural Network (Boosted-NN) ensemble learning were employed. The Bagged-NN and Boosted-NN ensemble models proved more accurate than the individual NN models, and among the ensemble models, Boosted-NN reduced the forecasting errors compared to Bagged-NN and the individual NNs. The regression coefficient, mean absolute deviation, mean absolute percentage error and root mean square error also confirm that Boosted-NN leads to improved ETO forecasting performance.
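Bagging as used for the Bagged-NN ensemble can be sketched generically: fit each base model on a bootstrap resample and average the predictions. A simple linear base learner stands in for the neural networks here purely to keep the example dependency-free; this is not the paper's model:

```python
import random

def bagged_predict(xs, ys, x_new, n_models=25, seed=0):
    """Bagging sketch: average predictions of base models, each fit on
    a bootstrap resample of (xs, ys). The base learner is ordinary
    least squares y = a + b*x (a Bagged-NN would use a neural net)."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap indices
        bx = [xs[i] for i in idx]
        by = [ys[i] for i in idx]
        mx = sum(bx) / n
        sxx = sum((x - mx) ** 2 for x in bx)
        if sxx == 0:  # degenerate resample (all x identical): skip it
            continue
        my = sum(by) / n
        b = sum((x - mx) * (y - my) for x, y in zip(bx, by)) / sxx
        a = my - b * mx
        preds.append(a + b * x_new)
    return sum(preds) / len(preds)
```

Averaging over resamples reduces the variance of an unstable base learner, which is the rationale for bagging NNs in the entry.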

  8. Comparison of three-dimensional fluorescence analysis methods for predicting formation of trihalomethanes and haloacetic acids.

    PubMed

    Peleato, Nicolás M; Andrews, Robert C

    2015-01-01

This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error of cross-folded test sets (THMs: 43.7 (μg/L)², HAAs: 233.3 (μg/L)²). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
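The cross-folded mean squared prediction error used above to compare NOM indicators can be sketched as a generic k-fold routine; the interface (`fit`/`predict` callbacks) is illustrative, not the study's code:

```python
def kfold_mse(xs, ys, fit, predict, k=5):
    """Cross-folded mean squared prediction error: hold out each fold,
    fit on the remaining data, and average squared errors over all
    held-out points."""
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]  # interleaved folds
    sq_errors = []
    for fold in folds:
        train = [i for i in range(n) if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        for i in fold:
            sq_errors.append((ys[i] - predict(model, xs[i])) ** 2)
    return sum(sq_errors) / n
```

Any regression (here, the multi-linear models on PC scores, peaks, or PARAFAC components) can be plugged in via the two callbacks.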

  9. A post audit of a model-designed ground water extraction system.

    PubMed

    Andersen, Peter F; Lu, Silong

    2003-01-01

    Model post audits test the predictive capabilities of ground water models and shed light on their practical limitations. In the work presented here, ground water model predictions were used to design an extraction/treatment/injection system at a military ammunition facility and then were re-evaluated using site-specific water-level data collected approximately one year after system startup. The water-level data indicated that performance specifications for the design, i.e., containment, had been achieved over the required area, but that predicted water-level changes were greater than observed, particularly in the deeper zones of the aquifer. Probable model error was investigated by determining the changes that were required to obtain an improved match to observed water-level changes. This analysis suggests that the originally estimated hydraulic properties were in error by a factor of two to five. These errors may have resulted from attributing less importance to data from deeper zones of the aquifer and from applying pumping test results to a volume of material that was larger than the volume affected by the pumping test. To determine the importance of these errors to the predictions of interest, the models were used to simulate the capture zones resulting from the originally estimated and updated parameter values. The study suggests that, despite the model error, the ground water model contributed positively to the design of the remediation system.

  10. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND

    PubMed Central

    Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.

    2010-01-01

Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159
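Conventional linear derating, the baseline this entry contrasts with, and the source-amplitude-scaling idea can be sketched as follows. This is a hedged illustration of the two concepts only, not the authors' KZK-based procedure; names and units are assumptions:

```python
import math

def derate_linear(p_water_focal, alpha_np_per_cm, depth_cm):
    """Conventional linear derating: scale the focal pressure measured
    in water by tissue attenuation over the propagation path,
    p_tissue ~ p_water * exp(-alpha * z)."""
    return p_water_focal * math.exp(-alpha_np_per_cm * depth_cm)

def derated_source_amplitude(p0_water, alpha_np_per_cm, depth_cm):
    """Source-scaling idea (sketch): instead of derating the focal
    value, reduce the *source* amplitude by the path attenuation and
    re-evaluate the nonlinear water field at that lower drive, so that
    nonlinear focal gain is not treated as if it scaled linearly."""
    return p0_water * math.exp(-alpha_np_per_cm * depth_cm)
```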

  11. A derating method for therapeutic applications of high intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-05-01

Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  12. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND.

    PubMed

    Bessonova, O V; Khokhlova, V A; Canney, M S; Bailey, M R; Crum, L A

    2010-01-01

Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  13. Determination of mean droplet sizes of water-in-oil emulsions using an Earth's field NMR instrument.

    PubMed

    Fridjonsson, Einar O; Flux, Louise S; Johns, Michael L

    2012-08-01

The use of the Earth's magnetic field (EF) to conduct nuclear magnetic resonance (NMR) experiments has a long history with a growing list of applications (e.g. ground water detection, diffusion measurements of Antarctic sea ice). In this paper we explore whether EFNMR can be used to accurately and practically measure the mean droplet size of water-in-oil emulsions (paraffin and crude oil). We use both pulsed field gradient (PFG) measurements of restricted self-diffusion and T₂ relaxometry, as appropriate. T₂ relaxometry allows the extension of droplet sizing ability below the limits set by the available magnetic field gradient strength of the EFNMR apparatus. A commercially available bench-top NMR spectrometer is used to verify the results obtained using the EFNMR instrument, with good agreement, within experimental error, between the two instruments. These results open the potential for further investigation of the application of EFNMR for emulsion droplet sizing. Copyright © 2012 Elsevier Inc. All rights reserved.
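One simple route from T₂ relaxometry to a droplet radius is the fast-exchange surface-relaxation relation; whether this matches the authors' exact treatment is an assumption, and the relaxivity value used below is illustrative:

```python
def droplet_radius_from_t2(t2_obs, t2_bulk, rho2):
    """Fast-exchange surface-relaxation sketch:
    1/T2_obs = 1/T2_bulk + rho2 * (S/V), with S/V = 3/r for a sphere,
    so r = 3 * rho2 / (1/T2_obs - 1/T2_bulk).
    rho2 is the surface relaxivity (m/s); times in seconds; radius in
    metres. All values here are illustrative."""
    rate_excess = 1.0 / t2_obs - 1.0 / t2_bulk
    return 3.0 * rho2 / rate_excess
```

This illustrates why T₂ relaxometry extends sizing below the PFG limit: the relaxation rate, unlike a diffusion measurement, needs no field gradient.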

  14. Influence of material surface on the scanning error of a powder-free 3D measuring system.

    PubMed

    Kurz, Michael; Attin, Thomas; Mehl, Albert

    2015-11-01

    This study aims to evaluate the accuracy of a powder-free three-dimensional (3D) measuring system (CEREC Omnicam, Sirona), when scanning the surface of a material at different angles. Additionally, the influence of water was investigated. Nine different materials were combined with human tooth surface (enamel) to create n = 27 specimens. These materials were: Controls (InCoris TZI and Cerec Guide Bloc), ceramics (Vitablocs® Mark II and IPS Empress CAD), metals (gold and amalgam) and composites (Tetric Ceram, Filtek Supreme A2B and A2E). The highly polished samples were scanned at different angles with and without water. The 216 scans were then analyzed and descriptive statistics were obtained. The height difference between the tooth and material surfaces, as measured with the 3D scans, ranged from 0.83 μm (±2.58 μm) to -14.79 μm (±3.45 μm), while the scan noise on the materials was between 3.23 μm (±0.79 μm) and 14.24 μm (±6.79 μm) without considering the control groups. Depending on the thickness of the water film, measurement errors in the order of 300-1,600 μm could be observed. The inaccuracies between the tooth and material surfaces, as well as the scan noise for the materials, were within the range of error for measurements used for conventional impressions and are therefore negligible. The presence of water, however, greatly affects the scan. The tested powder-free 3D measuring system can safely be used to scan different material surfaces without the prior application of a powder, although drying of the surface prior to scanning is highly advisable.

  15. Line shape parameters of the 22-GHz water line for accurate modeling in atmospheric applications

    NASA Astrophysics Data System (ADS)

    Koshelev, M. A.; Golubiatnikov, G. Yu.; Vilkov, I. N.; Tretyakov, M. Yu.

    2018-01-01

The paper concerns refining the parameters of one of the major atmospheric diagnostic lines of water vapor at 22 GHz. Two high-resolution microwave spectrometers based on different principles of operation, together covering the pressure range from a few milliTorr up to a few Torr, were used. Special efforts were made to minimize possible sources of systematic measurement errors. Satisfactory self-consistency of the obtained data was achieved, ensuring the reliability of the obtained parameters. Collisional broadening and shifting parameters of the line in pure water vapor and in its mixture with air were determined at room temperature. A comparative analysis of the obtained parameters against previous data is given. The impact of the speed-dependence effect on the line shape was also evaluated.
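Collisional (pressure) broadening of the kind measured in this entry is additive in the partial pressures of the broadening gases. A minimal sketch with illustrative units and coefficient values (not the paper's measured parameters):

```python
import math

def lorentz_hwhm(gamma_self, p_self, gamma_air, p_air):
    """Pressure-broadened half width at half maximum:
    delta_nu = gamma_self * p_self + gamma_air * p_air.
    gamma in MHz/Torr, p in Torr -> delta_nu in MHz (illustrative)."""
    return gamma_self * p_self + gamma_air * p_air

def lorentzian(nu, nu0, hwhm):
    """Area-normalized Lorentzian profile with half width `hwhm`;
    the simplest line shape before speed-dependence corrections."""
    return (hwhm / math.pi) / ((nu - nu0) ** 2 + hwhm ** 2)
```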

  16. Assimilation of CryoSat-2 altimetry to a hydrodynamic model of the Brahmaputra river

    NASA Astrophysics Data System (ADS)

    Schneider, Raphael; Nygaard Godiksen, Peter; Ridler, Marc-Etienne; Madsen, Henrik; Bauer-Gottwein, Peter

    2016-04-01

Remote sensing provides valuable data for parameterization and updating of hydrological models, for example water level measurements of inland water bodies from satellite radar altimeters. Satellite altimetry data from repeat-orbit missions such as Envisat, ERS or Jason has been used in many studies, as has synthetic wide-swath altimetry data of the kind expected from the SWOT mission. This study is one of the first hydrologic applications of altimetry data from a drifting-orbit satellite mission, namely CryoSat-2. CryoSat-2 is equipped with the SIRAL instrument, a new type of radar altimeter similar to SRAL on Sentinel-3. CryoSat-2 SARIn level 2 data is used to improve a 1D hydrodynamic model of the Brahmaputra river basin in South Asia set up in the DHI MIKE 11 software. CryoSat-2 water levels were extracted over river masks derived from Landsat imagery. After discharge calibration, simulated water levels were fitted to the CryoSat-2 data along the Assam valley by adapting cross section shapes and datums. The resulting hydrodynamic model shows accurate spatio-temporal representation of water levels, which is a prerequisite for real-time model updating by assimilation of CryoSat-2 altimetry or multi-mission data in general. For this task, a data assimilation framework has been developed and linked with the MIKE 11 model. It is a flexible framework that can assimilate water level data which are arbitrarily distributed in time and space. Different types of error models, data assimilation methods, etc. can easily be used and tested. Furthermore, it is possible to update not only the water level of the hydrodynamic model, but also the states of the rainfall-runoff models providing the forcing of the hydrodynamic model. The setup has been used to assimilate CryoSat-2 observations over the Assam valley for the years 2010 to 2013. Different data assimilation methods and localizations were tested, together with different model error representations.
Furthermore, the impact of different filtering and clustering methods and error descriptions of the CryoSat-2 observations was evaluated. Performance improvement in terms of discharge and water level forecast due to the assimilation of satellite altimetry data was then evaluated. The model forecasts were also compared to climatology and persistence forecasts. Using ensemble based filters, the evaluation was done not only based on performance criteria for the central forecast such as root-mean-square error (RMSE) and Nash-Sutcliffe model efficiency (NSE), but also based on sharpness, reliability and continuous ranked probability score (CRPS) of the ensemble of probabilistic forecasts.
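The continuous ranked probability score (CRPS) used above to evaluate the ensemble forecasts has a standard sample form that can be computed directly from the ensemble members and the verifying observation:

```python
def crps_ensemble(members, obs):
    """Sample CRPS for one ensemble forecast and one observation:
    CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|
    (the standard kernel form for an empirical ensemble). For a
    one-member ensemble this reduces to the absolute error."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (m * m)
    return term1 - 0.5 * term2
```

Lower is better; the second term rewards ensemble spread, which is why CRPS complements central-forecast criteria such as RMSE and NSE.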

  17. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH

We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the clusters.

  18. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability from model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  19. Uncertainty in predicting soil hydraulic properties at the hillslope scale with indirect methods

    NASA Astrophysics Data System (ADS)

    Chirico, G. B.; Medina, H.; Romano, N.

    2007-02-01

Several hydrological applications require the characterisation of soil hydraulic properties at large spatial scales. Pedotransfer functions (PTFs) are being developed as simplified methods to estimate soil hydraulic properties as an alternative to direct measurements, which are unfeasible in most practical circumstances. The objective of this study is to quantify the uncertainty in PTF spatial predictions at the hillslope scale as related to the sampling density, due to: (i) the error in estimated soil physico-chemical properties and (ii) PTF model error. The analysis is carried out on a 2-km-long experimental hillslope in South Italy. The method adopted is based on a stochastic generation of patterns of soil variables using sequential Gaussian simulation, conditioned to the observed sample data. The following PTFs are applied: Vereecken's PTF [Vereecken, H., Diels, J., van Orshoven, J., Feyen, J., Bouma, J., 1992. Functional evaluation of pedotransfer functions for the estimation of soil hydraulic properties. Soil Sci. Soc. Am. J. 56, 1371-1378] and the HYPRES PTF [Wösten, J.H.M., Lilly, A., Nemes, A., Le Bas, C., 1999. Development and use of a database of hydraulic properties of European soils. Geoderma 90, 169-185]. Both PTFs reliably estimate the soil water retention characteristic even for a relatively coarse sampling resolution, with prediction uncertainties comparable to the uncertainties in direct laboratory or field measurements. The uncertainty in predicted soil water retention due to model error is as significant as, or more significant than, the uncertainty associated with the estimated input, even for a relatively coarse sampling resolution. Prediction uncertainties are much more important when PTFs are applied to estimate the saturated hydraulic conductivity; in this case model error dominates the overall prediction uncertainty, making the effect of input error negligible.
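PTFs such as HYPRES estimate parameters of a retention model rather than the curve itself; the widely used van Genuchten form they parameterize can be written directly (parameter values in the test are illustrative):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve:
    theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)^n)^m,
    with m = 1 - 1/n. h is the pressure head (same length unit as
    1/alpha); theta_r and theta_s are residual and saturated water
    contents. Returns volumetric water content."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m
```

Errors in the PTF-estimated parameters (theta_r, theta_s, alpha, n) propagate through this curve, which is what the study's stochastic simulation quantifies.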

  20. Accounting for uncertainty in pedotransfer functions in vulnerability assessments of pesticide leaching to groundwater.

    PubMed

    Stenemo, Fredrik; Jarvis, Nicholas

    2007-09-01

    A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
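The uncertainty factor f_u defined in this entry can be estimated by straightforward Monte Carlo sampling. The `simulate` callback below is a hypothetical stand-in for a MACRO run with sampled pedotransfer regression errors, not the actual model interface:

```python
import random

def uncertainty_factor(c_sim, simulate, n_draws=500, seed=1):
    """Monte Carlo sketch of f_u = c80 / c_sim: `simulate(rng)` returns
    one leaching concentration with parameter errors sampled; f_u is
    the ratio of the 80th-percentile concentration to the error-free
    simulated concentration c_sim."""
    rng = random.Random(seed)
    draws = sorted(simulate(rng) for _ in range(n_draws))
    c80 = draws[int(0.8 * n_draws) - 1]  # simple empirical percentile
    return c80 / c_sim
```

Applying the factor in reverse, c_sim * f_u gives the corrected (80th-percentile) concentration described in the abstract.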

  1. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank

    2017-04-01

By operating for more than one decade now, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in using GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and of their uncertainties. Through years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal resolution, there is no global ground-truth data available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct the bias and leakage. Time-variant rescaling factors, both monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied for planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J.
Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang L, Dobslaw H, Thomas M (2016) Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International 206(1):368-378, DOI 10.1093/gji/ggw153.
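A common way to estimate a rescaling (gain) factor that corrects filter-induced bias and leakage is a least-squares fit of the filtered series to the model "truth"; whether this matches the entry's exact estimator is an assumption:

```python
def rescaling_factor(truth, filtered):
    """Least-squares gain k minimizing sum (truth - k*filtered)^2:
    k = sum(truth * filtered) / sum(filtered^2).
    `truth` is the (model) TWS series, `filtered` the smoothed series;
    multiplying the filtered series by k restores damped amplitudes."""
    num = sum(t * f for t, f in zip(truth, filtered))
    den = sum(f * f for f in filtered)
    return num / den
```

In the simulation framework described above, `truth` is known exactly from the Earth System Model, which is what makes the assessment of such factors possible.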

  2. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik

    2016-04-01

Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods from Kusche (2007) and Swenson and Wahr (2006) are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.

  3. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    PubMed

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. 
By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
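
    Dead-reckoning itself is straightforward to sketch: speed and heading are integrated forward from a known fix, which is also why its positional error grows without bound until the next Fastloc-GPS or visual fix arrives. A minimal 2-D illustration (hypothetical inputs; not the authors' on-animal sensor processing):

```python
import math

def dead_reckon(start, steps):
    """Propagate a 2-D position from (speed m/s, heading rad, dt s) samples.

    Without external fixes, any bias in speed or heading accumulates into
    a drift that grows with time; this is the motivation for fusing in
    GPS and visual fixes.
    """
    x, y = start
    track = [(x, y)]
    for speed, heading, dt in steps:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        track.append((x, y))
    return track

# 1 m/s due east (heading 0) for ten one-second steps
track = dead_reckon((0.0, 0.0), [(1.0, 0.0, 1.0)] * 10)
```

    In the authors' state-space model this propagation step plays the role of the process model, with the accumulated drift absorbed into a combined error term that the GPS and visual observation models then constrain.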

  4. Machine learning models for lipophilicity and their domain of applicability.

    PubMed

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-01-01

Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity usually have been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm, a Gaussian process model, this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from recent months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based error bars, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.
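
    The model-based error bars mentioned above fall naturally out of Gaussian process regression: the posterior standard deviation grows for query compounds far from the training data, which is what makes it usable as a domain-of-applicability indicator. A minimal NumPy-only sketch with an RBF kernel on a one-dimensional descriptor (length scale and noise level are illustrative placeholders, not the study's hyperparameters):

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=0.1):
    """Gaussian process regression with an RBF kernel; returns the posterior
    mean and standard deviation at the query points Xs. The std is the
    'model-based error bar' used to judge the domain of applicability."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                      # posterior mean Ks' K^-1 y
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

X = np.array([0.0, 1.0, 2.0])           # 1-D stand-in for a descriptor
y = np.array([0.0, 1.0, 0.0])           # measured property values
mean, std = gp_predict(X, y, np.array([1.0, 10.0]))
# std is small near the training data and approaches the prior (~1) far away
```

    A query point far outside the training distribution is flagged by an error bar near the prior standard deviation, i.e., as lying outside the model's domain of applicability.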

  5. Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets.

    PubMed

    Valenzuela, Loreto M; Knight, Doyle D; Kohn, Joachim

    2016-01-01

Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using artificial neural networks with all water uptake data as individual inputs. These models give very good correlations (R² > 0.78 for the test set) but very low accuracy on cross-validation sets (fewer than 19% of experimental points within experimental error). Using consolidated parameters such as equilibrium water uptake instead, a good model is obtained (R² = 0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.

  6. Evaluating Snow Data Assimilation Framework for Streamflow Forecasting Applications Using Hindcast Verification

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2012-12-01

Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow-dominated areas. However, measuring or predicting SWE carries significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather Service River Forecast System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm, and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set, which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow.
Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and low flow categories.
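
    Two of the verification measures listed above, RMSE and the Nash-Sutcliffe efficiency (NSE), are simple to state: RMSE is the root of the mean squared simulation error, and NSE compares that squared error to the variance of the observations (1 is a perfect fit, 0 means no better than predicting the observed mean). A small sketch with made-up flows:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than always predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [10.0, 20.0, 30.0]   # observed flows (illustrative units)
sim = [12.0, 18.0, 33.0]   # simulated flows
# rmse(obs, sim) ~ 2.38, nse(obs, sim) = 0.915
```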

  7. Temperature measurement reliability and validity with thermocouple extension leads or changing lead temperature.

    PubMed

    Jutte, Lisa S; Long, Blaine C; Knight, Kenneth L

    2010-01-01

Thermocouple leads are often too short, necessitating the use of an extension lead. To determine if temperature measures were influenced by extension-lead use or lead temperature changes. Descriptive laboratory study. Laboratory. Experiment 1: 10 IT-21 thermocouples and 5 extension leads. Experiment 2: 5 IT-21 and PT-6 thermocouples. In experiment 1, temperature data were collected on 10 IT-21 thermocouples in a stable water bath with and without extension leads. In experiment 2, temperature data were collected on 5 IT-21 and PT-6 thermocouples in a stable water bath before, during, and after ice-pack application to extension leads. In experiment 1, extension leads did not influence IT-21 validity (P = .45) or reliability (P = .10). In experiment 2, postapplication IT-21 temperatures were greater than preapplication and application measures (P < .05). Extension leads had no influence on temperature measures. Ice application to leads may increase measurement error.

  8. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different from, and complementary to, that of classical precipitation products and could provide a valid alternative perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. 
The use of SREM2D for characterizing the error in precipitation by SM2RAIN would be highly useful for the merging and the integration steps in its algorithm, i.e., the merging of multiple soil moisture derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil moisture derived and state of the art satellite precipitation products (e.g., GPM IMERG).
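
    The bottom-up idea can be sketched as an inverted soil water balance: rainfall over the interval between two satellite overpasses is inferred from the change in soil moisture storage plus a loss term. The constants below are illustrative placeholders, not calibrated SM2RAIN parameters:

```python
def sm2rain_estimate(s1, s2, dt_hours, Z=80.0, a=2.0, b=10.0):
    """Rainfall (mm) over the interval between two relative soil moisture
    readings s1, s2 (0-1), from a simplified inverted water balance:
        P ~= Z * dS/dt + drainage,   drainage = a * S**b
    Z (mm) and the drainage constants a, b are illustrative placeholders,
    not calibrated SM2RAIN parameters."""
    ds_dt = (s2 - s1) / dt_hours            # storage change rate (1/h)
    s_mean = 0.5 * (s1 + s2)
    drainage = a * s_mean ** b              # loss term (mm/h)
    rate = Z * ds_dt + drainage             # rainfall rate (mm/h)
    return max(rate, 0.0) * dt_hours

# a wetting event between two overpasses 12 h apart
rain = sm2rain_estimate(0.3, 0.5, 12.0)     # ~16 mm
```

    A drying interval yields no inferred rainfall, which is one reason the error structure of such a product (missed events, hit bias) differs from that of top-down cloud-sensing retrievals.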

  9. Wind wave prediction in shallow water: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaleri, L.; Rizzoli, P.M.

    1981-11-20

A wind wave forecasting model is described, based upon the ray technique, which is specifically designed for shallow water areas. The model explicitly includes wave generation, refraction, and shoaling, while nonlinear dissipative processes (breaking and bottom friction) are introduced through a suitable parametrization. The forecast is provided at a specified time and target position, in terms of a directional spectrum, from which the one-dimensional spectrum and the significant wave height are derived. The model has been used to hindcast storms both in shallow water (Northern Adriatic Sea) and in deep water conditions (Tyrrhenian Sea). The results have been compared with local measurements, and the rms error for the significant wave height is between 10 and 20%. A major problem has been found in the correct evaluation of the wind field.

  10. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    PubMed

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

    The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed.

  11. The Surface Water and Ocean Topography Satellite Mission - An Assessment of Swath Altimetry Measurements of River Hydrodynamics

    NASA Technical Reports Server (NTRS)

    Wilson, Matthew D.; Durand, Michael; Alsdorf, Douglas; Chul-Jung, Hahn; Andreadis, Konstantinos M.; Lee, Hyongki

    2012-01-01

The Surface Water and Ocean Topography (SWOT) satellite mission, scheduled for launch in 2020 with development commencing in 2015, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water surface elevations, which will allow for the estimation of river and floodplain flows via the water surface slope. In this paper, we characterize the measurements which may be obtained from SWOT and illustrate how they may be used to derive estimates of river discharge. In particular, we show (i) the spatio-temporal sampling scheme of SWOT, (ii) the errors which may be expected in swath altimetry measurements of the terrestrial surface water, and (iii) the impacts such errors may have on estimates of water surface slope and river discharge. We illustrate this through a "virtual mission" study for an approximately 300 km reach of the central Amazon River, using a hydraulic model to provide water surface elevations according to the SWOT spatio-temporal sampling scheme (orbit with 78-degree inclination, 22-day repeat, and 140 km swath width), to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. Water surface elevation measurements for the Amazon mainstem as may be observed by SWOT were thereby obtained. Using these measurements, estimates of river slope and discharge were derived and compared to those which may be obtained without error, and to those obtained directly from the hydraulic model. It was found that discharge can be reproduced highly accurately from the water height, without knowledge of the detailed channel bathymetry, using a modified Manning's equation, if friction, depth, width, and slope are known. Increasing reach length was found to be an effective method to reduce systematic height error in SWOT measurements.
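
    The modified Manning's equation underlying such discharge estimates follows the standard Manning relation Q = (1/n) A R^(2/3) S^(1/2); for a wide rectangular channel the hydraulic radius R is approximately the depth. A sketch under those assumptions (the roughness n is an assumed value, and this illustrates the general relation rather than the paper's exact formulation):

```python
def manning_discharge(width, depth, slope, n=0.03):
    """Discharge (m^3/s) from Manning's equation, Q = (1/n)*A*R**(2/3)*sqrt(S),
    for a wide rectangular channel where the hydraulic radius R ~ depth.
    The roughness n here is an assumed, uncalibrated value."""
    area = width * depth          # flow area A (m^2)
    radius = depth                # wide-channel approximation of R (m)
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

# an Amazon-scale reach: 1 km wide, 10 m deep, 2 cm/km surface slope
q = manning_discharge(1000.0, 10.0, 2e-5)
```

    The square-root dependence on slope is why small SWOT height errors matter: slope is computed as a height difference over a reach, so averaging over longer reaches suppresses the systematic height error before it propagates into discharge.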

  12. The effect of the dynamic wet troposphere on VLBI measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1986-01-01

Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10^-13 s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
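
    The Allan standard deviation quoted above is the square root of the two-sample variance of a fractional-rate series; at the basic sampling interval it reduces to the rms of successive differences divided by sqrt(2). A minimal sketch:

```python
import math

def allan_deviation(y):
    """Allan standard deviation of a fractional-rate series y at its basic
    sampling interval: sqrt( mean( (y[i+1] - y[i])**2 ) / 2 )."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# an alternating series gives the largest deviation for a bounded signal
sigma = allan_deviation([0.0, 1.0, 0.0, 1.0, 0.0])   # sqrt(1/2) ~ 0.707
```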

  13. Estimation of the optical errors on the luminescence imaging of water for proton beam

    NASA Astrophysics Data System (ADS)

    Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi

    2018-04-01

Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to two optical phenomena: parallax errors of the optical system and reflection of the luminescence from the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error on the luminescence images, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled device (CCD) camera while changing the height of the optical axis of the camera relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.

  14. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. 
This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
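
    The introspection loop described above (error notification, error object with an assigned severity, knowledge-base recommendation) can be sketched as a simple escalation policy; the class, names, and response ordering here are hypothetical illustrations, not the actual FTM interfaces:

```python
# Hypothetical sketch, not the actual FTM interfaces: an error notification
# becomes an error object with a severity, and the knowledge base escalates
# the recommended response each time the same error recurs.
RESPONSES = ["ignore", "log", "rollback", "swap_node", "restart"]

class Introspector:
    def __init__(self):
        self.history = {}      # error id -> number of times seen before

    def recommend(self, error_id, severity):
        """Map severity to a response, escalating one level per recurrence."""
        seen = self.history.get(error_id, 0)
        self.history[error_id] = seen + 1
        level = min(severity + seen, len(RESPONSES) - 1)
        return RESPONSES[level]

ftm = Introspector()
first = ftm.recommend("seu-core3", severity=2)    # "rollback"
second = ftm.recommend("seu-core3", severity=2)   # recurs -> "swap_node"
```

    Escalation on recurrence mirrors the behaviour described above, where a rollback that fails to stop the errors leads the introspection module to recommend a restart instead.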

  15. Determination of boron in produced water using the carminic acid assay.

    PubMed

    Floquet, Cedric F A; Sieben, Vincent J; MacKay, Bruce A; Mostowfi, Farshid

    2016-04-01

Using the carminic acid assay, we determined the concentration of boron in oilfield waters. We investigated the effect of high concentrations of salts and dissolved metals on the assay performance. The influence of temperature, development time, reagent concentration, and water volume was studied. Ten produced and flowback water samples of different origins were measured, and the method was successfully validated against ICP-MS measurements. In water-stressed regions, produced water is a potential source of fresh water for irrigation, industrial applications, or consumption. Therefore, boron concentration must be determined and controlled to match the envisaged waste water reuse. Fast, precise, and onsite measurements are needed to minimize errors introduced by sample transportation to laboratories. We found that the optimum conditions for our application were a 5:1 mixing volume ratio (reagent to sample), a 1 g L⁻¹ carminic acid concentration in 99.99% sulfuric acid, and a 30 min reaction time at ambient temperature (20 °C to 23 °C). Absorption values were best measured at 610 nm and 630 nm and baseline corrected at 865 nm. Under these conditions, the sensitivity of the assay to boron was maximized while its cross-sensitivity to dissolved titanium, iron, barium, and zirconium was minimized, alleviating the need for masking agents and extraction methods.

  16. Electrical and magnetic properties of rock and soil

    USGS Publications Warehouse

    Scott, J.H.

    1983-01-01

Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest in studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps), and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values which are assumed to be applicable at 10^2 cps, an assumption which is considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps. 
Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps, but again, this method is less accurate because of variation of ion concentration in pore water. Special laboratory procedures are used to measure conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in pore water.

  17. Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.

A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on NCEP's ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is the degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to decorrelation length will be examined. The prior error covariances are modelled using the compactly supported, space-limited correlations developed at NASA DAO.

  18. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. 
We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).

  19. Mechanism of ion adsorption to aqueous interfaces: Graphene/water vs. air/water.

    PubMed

    McCaffrey, Debra L; Nguyen, Son C; Cox, Stephen J; Weller, Horst; Alivisatos, A Paul; Geissler, Phillip L; Saykally, Richard J

    2017-12-19

The adsorption of ions to aqueous interfaces is a phenomenon that profoundly influences vital processes in many areas of science, including biology, atmospheric chemistry, electrical energy storage, and water process engineering. Although classical electrostatics theory predicts that ions are repelled from water/hydrophobe (e.g., air/water) interfaces, both computer simulations and experiments have shown that chaotropic ions actually exhibit enhanced concentrations at the air/water interface. Although mechanistic pictures have been developed to explain this counterintuitive observation, their general applicability, particularly in the presence of material substrates, remains unclear. Here we investigate ion adsorption to the model interface formed by water and graphene. Deep UV second harmonic generation measurements of the SCN⁻ ion, a prototypical chaotrope, determined a free energy of adsorption within error of that for air/water. Unlike for the air/water interface, wherein repartitioning of the solvent energy drives ion adsorption, our computer simulations reveal that direct ion/graphene interactions dominate the favorable enthalpy change. Moreover, the graphene sheets dampen capillary waves such that rotational anisotropy of the solute, if present, is the dominant entropy contribution, in contrast to the air/water interface.

  20. Soil moisture assimilation using a modified ensemble transform Kalman filter with water balance constraint

    NASA Astrophysics Data System (ADS)

    Wu, Guocan; Zheng, Xiaogu; Dan, Bo

    2016-04-01

Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
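
    Multiplicative forecast-error inflation, the first ingredient above, scales the ensemble perturbations about their mean so that the sample variance grows by a chosen factor while the ensemble mean is unchanged. A minimal sketch (illustrative numbers; not the CoLM/ETKF implementation):

```python
import numpy as np

def inflate_ensemble(ens, lam):
    """Multiplicative covariance inflation: scale perturbations about the
    ensemble mean by sqrt(lam) so the sample variance grows by lam while
    the mean is unchanged."""
    m = ens.mean()
    return m + np.sqrt(lam) * (ens - m)

# a toy 4-member soil moisture ensemble
ens = np.array([0.18, 0.22, 0.20, 0.24])
infl = inflate_ensemble(ens, 2.0)   # variance doubles, mean stays ~0.21
```

    In adaptive schemes such as the one described above, the factor lam is not fixed but estimated, e.g., by maximizing the likelihood of the innovations (hence the -2 log-likelihood criterion for choosing the inflation layer).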

  1. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is a better choice.
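
    The contrast between the two models is easiest to see numerically: under the multiplicative model y = x·exp(e) the residual spread scales with the measured amount, so a log transform yields residuals of constant variance, whereas fitting the same data additively leaves a variance that grows with rain rate. A small synthetic sketch (made-up distributions, not the satellite data used in the letter):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.gamma(0.5, 10.0, size=5000)          # skewed, rain-like amounts

# additive model:        y = x + e,       e ~ N(0, s^2)  (spread independent of x)
# multiplicative model:  y = x * exp(e),  e ~ N(0, s^2)  (spread scales with x)
mult = truth * np.exp(rng.normal(0.0, 0.3, truth.size))

# a log transform turns the multiplicative model into an additive one,
# leaving residuals with roughly constant standard deviation (~0.3 here)
log_resid = np.log(mult) - np.log(truth)
```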

  2. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  3. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. Transient errors occur at a significantly higher rate than permanent errors. Transient errors, or soft errors, appear in two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and their focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application-level components. Results show that soft errors in operating system components affect both operating system and application-level components. Therefore, by hardening operating system components against soft errors, both operating system and application-level components gain tolerance.

  4. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
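The Bowen-ratio energy-budget method underlying these measurements partitions the available energy (Rn − G) between sensible and latent heat using the Bowen ratio β = γ·ΔT/Δe from gradient measurements at two heights. A minimal sketch; the psychrometric constant, gradients, and fluxes below are illustrative assumptions, not the study's data.

```python
# Bowen-ratio energy-budget (BREB) estimate of latent heat flux
gamma = 0.066          # psychrometric constant (kPa/K), illustrative
dT = 1.2               # air-temperature difference between two heights (K)
de = 0.35              # vapor-pressure difference between the same heights (kPa)
Rn = 450.0             # net radiation (W/m^2)
G = 30.0               # heat flux into the water/soil (W/m^2)

bowen = gamma * dT / de            # Bowen ratio beta = H/LE
LE = (Rn - G) / (1.0 + bowen)      # latent heat flux (W/m^2)
H = Rn - G - LE                    # sensible heat flux by energy-budget closure

lam = 2.45e6                       # latent heat of vaporization (J/kg)
ET_mm_per_hour = LE / lam * 3600.0 # evapotranspiration rate (mm of water per hour)
print(round(LE, 1), round(ET_mm_per_hour, 2))
```

Because β multiplies the temperature and vapor gradients measured over the fetch, contamination of those gradients by drier upwind surfaces biases β, and hence LE, which is exactly the fetch-induced error the study quantifies.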

  5. Application of current and future satellite missions to hydrologic prediction in transboundary rivers

    NASA Astrophysics Data System (ADS)

    Biancamaria, S.; Clark, E.; Lettenmaier, D. P.

    2010-12-01

    More than 256 major global river basins, which cover about 45% of the continental land surface, are shared among two or more countries. The flow of such a large part of the global runoff across international boundaries has led to tension in many cases between upstream and downstream riparian countries. Among many examples, this is the case of the Ganges and the Brahmaputra Rivers, which cross the boundary between India and Bangladesh. Hydrological data (river discharge, reservoir storage) are viewed as sensitive by India (the upstream country) and are therefore not shared with Bangladesh, which can only monitor river discharge and water depth at the international border crossing. These measurements only allow forecasting of floods in the interior and southern portions of the country two to three days in advance. These lead times are too short both for agricultural water management purposes (for which knowledge of upstream reservoir storage is essential) and for disaster preparedness. Satellite observations of river spatial extent, surface slope, reservoir area and surface elevation have the potential to transform management of water within the basins. In this study, we examine the use of currently available satellite measurements (in India) and in-situ measurements in Bangladesh to increase forecast lead time on the Ganges and Brahmaputra Rivers. Using nadir altimeters, we find that it is possible to forecast the discharge of the Ganges River at the Bangladesh border with a lead time of 3 days and a mean absolute error of around 25%; 2-day forecasts are possible with a mean absolute error of around 20%. When combined with optical/infra-red MODIS images, it is possible to map water elevations along the river and its floodplain upstream of the boundary, and to compute water storage. However, the high frequency of clouds in this region results in relatively large errors in the water mask.
Due to the nadir altimeters' temporal repeat (10 days for current satellites) and to gaps in the water mask, water volume estimates are meaningful only at the monthly scale. Furthermore, this information is limited to channels wider than 250-500 m. The future Surface Water and Ocean Topography (SWOT) mission, which is intended to be launched in 2020, will provide global maps of water elevations with a spatial resolution of 100 m and errors on the water elevation equal to or below 10 cm. The SWOT Ka-band interferometric synthetic aperture radar (SAR) will not be affected by cloud cover (aside from infrequent heavy rain); therefore, estimates of water volume change on the Ganges and the Brahmaputra upstream of Bangladesh provided by SWOT should be much more accurate in space and time than can currently be achieved. We discuss the implications of future SWOT observations in the context of our preliminary work on the Ganges-Brahmaputra Rivers using current generation satellite data.
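The kind of lagged relationship exploited here, where upstream water levels lead downstream discharge by roughly the travel time, can be sketched as a simple lagged regression. All series and coefficients below are synthetic illustrations, not Ganges-Brahmaputra data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic upstream water level (e.g. from a nadir altimeter) and a
# downstream discharge that follows it `lag` days later, plus noise
n, lag = 365, 3
upstream = 5 + 2 * np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 0.3, n)
discharge = 1000 * np.roll(upstream, lag) + rng.normal(0, 400, n)

# Fit discharge(t) ~ a * upstream(t - lag) + b on the first half of the record
x = upstream[: n // 2 - lag]
y = discharge[lag : n // 2]
a, b = np.polyfit(x, y, 1)

# Forecast the second half with a `lag`-day lead time and score it
x_test = upstream[n // 2 - lag : n - lag]
y_test = discharge[n // 2 :]
pred = a * x_test + b
mare = np.mean(np.abs(pred - y_test) / np.abs(y_test)) * 100
print("mean absolute relative error (%):", round(mare, 1))
```

The lead time is fixed by the physical travel time between the sensed reach and the border gauge, which is why longer leads (more distant upstream reaches, or reservoir storage) trade accuracy for warning time.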

  6. Development and Evaluation of a Spectral Analysis Method to Eliminate Organic Interference with Cavity Ring-Down Measurements of Water Isotope Ratios.

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Kim-Hak, D.; Popp, B. N.; Wallsgrove, N.; Kagawa-Viviani, A.; Johnson, J.

    2017-12-01

    Cavity ring-down spectroscopy (CRDS) is a technology based on the spectral absorption of gas molecules of interest at specific spectral regions. The CRDS technique enables the analysis of hydrogen and oxygen stable isotope ratios of water by directly measuring individual isotopologue absorption peaks such as H16OH, H18OH, and D16OH. Early work demonstrated that the accuracy of isotope analysis by CRDS and other laser-based absorption techniques could be compromised by spectral interference from organic compounds, in particular methanol and ethanol, which can be prevalent in ecologically-derived waters. There have been several methods developed by various research groups including Picarro to address the organic interference challenge. Here, we describe an organic fitter and a post-processing algorithm designed to improve the accuracy of the isotopic analysis of the "organic contaminated" water specifically for Picarro models L2130-i and L2140-i. To create the organic fitter, the absorption features of methanol around 7200 cm-1 were characterized and incorporated into spectral analysis. Since there was residual interference remaining after applying the organic fitter, a statistical model was also developed for post-processing correction. To evaluate the performance of the organic fitter and the postprocessing correction, we conducted controlled experiments on the L2130-i for two water samples with different isotope ratios blended with varying amounts of methanol (0-0.5%) and ethanol (0-5%). When the original fitter was not used for spectral analysis, the addition of 0.5% methanol changed the apparent isotopic composition of the water samples by +62‰ for δ18O values and +97‰ for δ2H values, and the addition of 5% ethanol changed the apparent isotopic composition by -0.5‰ for δ18O values and -3‰ for δ2H values. 
When the organic fitter was used for spectral analysis, the maximum methanol-induced errors were reduced to +4‰ for δ18O values and +5‰ for δ2H values, and the maximum ethanol-induced errors were unchanged. When the organic fitter was combined with the post-processing correction, up to 99.8% of the total methanol-induced errors and 96% of the total ethanol-induced errors could be corrected. The applicability of the algorithm to natural samples such as plant and soil waters will be investigated.
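The post-processing step can be pictured as a regression of the residual isotope bias against a spectroscopic contaminant indicator, with the predicted bias subtracted from subsequent measurements. The numbers and the linear form below are invented for illustration; they are not Picarro's algorithm or the study's calibration data.

```python
import numpy as np

# Hypothetical contaminant indicator (instrument units) measured on standards
# spiked with known methanol amounts, and the d18O bias observed on each
methanol_indicator = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
d18O_bias = np.array([0.0, 0.9, 1.7, 2.5, 3.3, 4.1])   # permil

# Fit the bias model once on spiked standards
slope, intercept = np.polyfit(methanol_indicator, d18O_bias, 1)

def correct(d18O_measured, indicator):
    """Subtract the predicted contamination-induced bias from a measurement."""
    return d18O_measured - (slope * indicator + intercept)

# Residual bias after correction on the calibration standards themselves
residual = d18O_bias - (slope * methanol_indicator + intercept)
print(round(slope, 2), round(float(np.max(np.abs(residual))), 3))
```

In this toy case a several-permil bias collapses to hundredths of a permil, which mirrors the order-of-magnitude improvement the abstract reports for the combined fitter-plus-correction scheme.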

  7. A multiphysical ensemble system of numerical snow modelling

    NASA Astrophysics Data System (ADS)

    Lafaysse, Matthieu; Cluzet, Bertrand; Dumont, Marie; Lejeune, Yves; Vionnet, Vincent; Morin, Samuel

    2017-05-01

    Physically based multilayer snowpack models suffer from various modelling errors. To represent these errors, we built the new multiphysical ensemble system ESCROC (Ensemble System Crocus) by implementing new representations of different physical processes in the deterministic coupled multilayer ground/snowpack model SURFEX/ISBA/Crocus. This ensemble was driven and evaluated at Col de Porte (1325 m a.s.l., French Alps) over 18 years with a high-quality meteorological and snow data set. A total of 7776 simulations were evaluated separately, accounting for the uncertainties of the evaluation data. The ability of the ensemble to capture the uncertainty associated with modelling errors is assessed for snow depth, snow water equivalent, bulk density, albedo and surface temperature. Different sub-ensembles of the ESCROC system were studied with probabilistic tools to compare their performance. Results show that optimal members of the ESCROC system are able to explain more than half of the total simulation errors. Integrating members with biases exceeding the range corresponding to observational uncertainty is necessary to obtain an optimal dispersion, but this may also be a consequence of the fact that meteorological forcing uncertainties were not accounted for. The ESCROC system opens the way to integrating numerical snow-modelling errors into ensemble forecasting and ensemble assimilation systems, in support of avalanche hazard forecasting and other snowpack-modelling applications.
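A basic check behind the "optimal dispersion" discussed above is the spread-skill relation: a reliable ensemble's spread should match the error of its mean, up to a finite-size factor sqrt((m+1)/m). A minimal synthetic sketch; the member count, noise levels, and snow-depth series are illustrative assumptions, not ESCROC output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "signal" (a smooth snow-depth-anomaly-like random walk, cm),
# an m-member ensemble scattered around it, and a truth that behaves
# statistically like one more ensemble member
n_days, n_members = 300, 35
center = np.cumsum(rng.normal(0, 2, n_days))
ensemble = center + rng.normal(0, 5, (n_members, n_days))
truth = center + rng.normal(0, 5, n_days)

spread = ensemble.std(axis=0, ddof=1).mean()            # mean ensemble spread
mean = ensemble.mean(axis=0)
rmse = np.sqrt(np.mean((mean - truth) ** 2))            # error of the ensemble mean

# For a reliable ensemble, rmse ~ spread * sqrt((m+1)/m), so ratio ~ 1;
# ratio < 1 indicates underdispersion, ratio > 1 overdispersion
ratio = spread * np.sqrt((n_members + 1) / n_members) / rmse
print(round(ratio, 2))
```

An underdispersive sub-ensemble (ratio well below 1) is exactly the situation the abstract describes as requiring biased members, or unrepresented forcing uncertainty, to widen the spread.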

  8. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

    Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, the difference between the actual and reconstructed positions of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and its errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
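The core of refraction-corrected ray tracing is the repeated application of Snell's law at each interface along the ray. A minimal sketch: the refractive indices are textbook values, and the angular-error comparison with a straight-ray (DLT-style) assumption is illustrative, not the RCRT implementation.

```python
import math

# Approximate refractive indices: air, glass (tank wall), water
n_air, n_glass, n_water = 1.000, 1.52, 1.333

def refract(theta_incident, n_from, n_to):
    """Snell's law: n_from * sin(theta_from) = n_to * sin(theta_to)."""
    return math.asin(n_from * math.sin(theta_incident) / n_to)

theta_air = math.radians(30.0)                   # ray leaving the camera
theta_glass = refract(theta_air, n_air, n_glass) # bent at the air-glass wall
theta_water = refract(theta_glass, n_glass, n_water)

# For a flat wall the glass layer drops out of the air-to-water relation:
# n_air*sin(theta_air) == n_water*sin(theta_water) regardless of n_glass
print("angle in water (deg):", round(math.degrees(theta_water), 2))

# A straight-ray model that ignores refraction carries this angular error:
angular_error_deg = math.degrees(theta_air - theta_water)
print("straight-ray angular error (deg):", round(angular_error_deg, 2))
```

At a 30° angle of incidence the straight-ray assumption is off by several degrees, which is why errors grow with the camera's angle of incidence unless refraction is modelled explicitly.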

  9. Quality evaluation of frozen guava and yellow passion fruit pulps by NIR spectroscopy and chemometrics.

    PubMed

    Alamar, Priscila D; Caramês, Elem T S; Poppi, Ronei J; Pallone, Juliana A L

    2016-07-01

    The present study investigated the application of near infrared (NIR) spectroscopy as a green, quick, and efficient alternative to the analytical methods currently used to evaluate the quality (moisture, total sugars, acidity, soluble solids, pH and ascorbic acid) of frozen guava and passion fruit pulps. Fifty samples were analyzed by NIR spectroscopy and by reference methods. Partial least squares regression (PLSR) was used to develop calibration models relating the NIR spectra to the reference values. Reference methods indicated adulteration by water addition in 58% of guava pulp samples and 44% of yellow passion fruit pulp samples. The PLSR models produced low values of root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP), with coefficients of determination above 0.7. Moisture and total sugars presented the best calibration models (RMSEP of 0.240 and 0.269, respectively, for guava pulp; RMSEP of 0.401 and 0.413, respectively, for passion fruit pulp), which enables the application of these models to detect adulteration of guava and yellow passion fruit pulp by water or sugar addition. The models constructed for calibration of quality parameters of frozen fruit pulps in this study indicate that NIR spectroscopy coupled with multivariate calibration could be applied to determine the quality of guava and yellow passion fruit pulp. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Quantitative analysis of urea in human urine and serum by 1H nuclear magnetic resonance

    PubMed Central

    Liu, Lingyan; Mo, Huaping; Wei, Siwei

    2016-01-01

    A convenient and fast method for quantifying urea in biofluids is demonstrated using NMR analysis with the solvent water signal as a concentration reference. The urea concentration can be accurately determined, with errors of less than 3% between 1 mM and 50 mM and less than 2% above 50 mM, in urine and serum. The method is promising for various applications, with the advantages of simplicity, high accuracy, and fast, non-destructive detection. With the ability to measure other metabolites simultaneously, this NMR method is also likely to find applications in metabolic profiling and systems biology. PMID:22179722
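The principle, ratioing the urea peak integral against the water signal of known concentration with a proton-count correction, can be sketched as below. The integrals and the assumption that receiver attenuation has already been accounted for are hypothetical, not the paper's calibration.

```python
# Quantify urea from 1H NMR peak integrals using the ~55.5 M solvent water
# resonance as an internal concentration reference (illustrative numbers)
water_conc_mM = 55500.0   # water concentration in dilute aqueous solution
protons_water = 2         # 1H per water molecule
protons_urea = 4          # NH protons per urea molecule

# Hypothetical integrals (arbitrary units) from the same spectrum
integral_water = 1.0e6
integral_urea = 1.44e2

# Concentration scales as (integral ratio) x (proton-count ratio) x reference
urea_conc_mM = (integral_urea / integral_water) * (protons_water / protons_urea) * water_conc_mM
print(round(urea_conc_mM, 2))
```

Because the reference (water) is in vast excess and always present, no added standard is needed, which is what makes the method non-destructive and fast.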

  11. Efficient wetland surface water detection and monitoring via Landsat: Comparison with in situ data from the Everglades Depth Estimation Network

    USGS Publications Warehouse

    Jones, John W.

    2015-01-01

    The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments. 
Although no other sites have such an extensive in situ network or long-term records, the broader applicability of this and other candidate DSWE algorithms is being evaluated in other wetlands using this work as a guide. Continued interaction among DSWE producers and potential users will help determine whether the measured accuracies are adequate for practical utility in resource management.
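Agreement scores of the kind used here reduce to a confusion matrix between satellite-detected and in situ inundation, from which omission and commission error rates fall out directly. The arrays below are toy data, not EDEN comparisons.

```python
import numpy as np

# 1 = water detected (satellite) / inundated (gage), 0 = dry; toy data
satellite = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
in_situ   = np.array([1, 0, 0, 1, 1, 0, 1, 0, 1, 1])

hits = np.sum((satellite == 1) & (in_situ == 1))
misses = np.sum((satellite == 0) & (in_situ == 1))        # omission errors
false_alarms = np.sum((satellite == 1) & (in_situ == 0))  # commission errors
correct_neg = np.sum((satellite == 0) & (in_situ == 0))

agreement = (hits + correct_neg) / satellite.size
omission_rate = misses / (hits + misses)                  # fraction of true water missed
commission_rate = false_alarms / (hits + false_alarms)    # fraction of detections that are false
print(agreement, omission_rate, commission_rate)
```

Separating omission from commission matters for the finding above: the errors at sparsely vegetated sites were dominated by omissions (missed water), not false detections.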

  12. An integrated model for the assessment of global water resources Part 2: Applications and assessments

    NASA Astrophysics Data System (ADS)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    To assess global water resources from the perspective of subannual variation in water availability and water use, an integrated water resources model was developed. In a companion report, we presented the global meteorological forcing input used to drive the model and its six modules, namely, the land surface hydrology module, the river routing module, the crop growth module, the reservoir operation module, the environmental flow requirement module, and the anthropogenic withdrawal module. Here, we present the results of the model application and global water resources assessments. First, the timing and volume of simulated agricultural water use were examined, because agricultural use accounts for approximately 85% of total consumptive water withdrawal worldwide. The estimated crop calendar showed good agreement with earlier reports for wheat, maize, and rice in major countries of production. In major countries, the error in the planting date was ±1 month, although there were some exceptional cases. The estimated irrigation water withdrawal also showed fair agreement with country statistics, but tended to be underestimated in countries in the Asian monsoon region. The results indicate the validity of the model and the input meteorological forcing, because site-specific parameter tuning was not used in the series of simulations. Finally, global water resources were assessed on a subannual basis using a newly devised index. This index located water-stressed regions that were undetected in earlier studies. These regions, which are indicated by a gap in the subannual distribution of water availability and water use, include the Sahel, the Asian monsoon region, and southern Africa. The simulation results show that the reservoir operations of major reservoirs (>1 km3) and the allocation of environmental flow requirements can alter the population under high water stress by approximately -11% to +5% globally.
The integrated model is applicable to assessments of various global environmental projections such as climate change.
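The advantage of a subannual index is easy to see in a toy comparison: a region can look unstressed on an annual withdrawal-to-availability ratio while its worst month is fully stressed. The monthly series below are invented for illustration, not the model's output or its actual index definition.

```python
import numpy as np

# Illustrative monthly water availability and withdrawal for one region
# (km^3/month); availability dips in the dry season while irrigation peaks
availability = np.array([9, 8, 6, 4, 3, 2, 2, 3, 4, 6, 8, 9], dtype=float)
withdrawal   = np.array([2, 2, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2], dtype=float)

annual_ratio = withdrawal.sum() / availability.sum()   # classic annual index
monthly_ratio = withdrawal / availability
subannual_index = monthly_ratio.max()                  # worst-month stress
print(round(annual_ratio, 3), round(subannual_index, 3))
```

Here the annual ratio (~0.42) would not flag high stress, but in the dry season withdrawal equals availability, the kind of seasonal gap the abstract reports for the Sahel, the Asian monsoon region, and southern Africa.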

  13. Detailed validation of the bidirectional effect in various Case 1 waters for application to Ocean Color imagery

    NASA Astrophysics Data System (ADS)

    Voss, K. J.; Morel, A.; Antoine, D.

    2007-06-01

    The radiance viewed from the ocean depends on the illumination and viewing geometry along with the water properties; this variation is called the bidirectional effect, or BRDF, of the water. This BRDF depends on the inherent optical properties of the water, including the volume scattering function, and is important when comparing data from different satellite sensors. The current model by Morel et al. (2002) depends on modeled water parameters and thus must be carefully validated. In this paper we combined upwelling radiance distribution data from several cruises, in varied water types and with a wide range of solar zenith angles. We found that the average error of the model, when compared to the data, was less than 1%, while the RMS difference between the model and data was on the order of 0.02-0.03. This is well within the statistical noise of the data, which was on the order of 0.04-0.05, due to environmental noise sources such as wave focusing.

  14. Developing Automatic Water Table Control System for Reducing Greenhouse Gas Emissions from Paddy Fields

    NASA Astrophysics Data System (ADS)

    Arif, C.; Fauzan, M. I.; Satyanto, K. S.; Budi, I. S.; Masaru, M.

    2018-05-01

    Water table management in rice fields plays an important role in mitigating greenhouse gas (GHG) emissions from paddy fields. Continuous flooding, which maintains the water table 2-5 cm above the soil surface, is not effective and releases more GHG emissions. The System of Rice Intensification (SRI), an alternative rice farming method, applies intermittent irrigation that maintains a lower water table and has been shown to reduce GHG emissions without significantly reducing productivity. The objectives of this study were to develop an automatic water table control system for SRI application and to evaluate its performance. The control system was developed based on fuzzy logic algorithms running on a Raspberry Pi mini PC. Based on laboratory and field tests, the developed system worked well, as indicated by low MAPE (mean absolute percentage error) values: 16.88% for the simulation tests and 15.80% for the field tests. The system can save up to 42.54% of irrigation water without significantly reducing productivity when compared to manual irrigation systems.
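The MAPE criterion used to judge the controller is straightforward to compute. A minimal sketch with hypothetical target and sensed water-table depths; these are not the study's measurements.

```python
# Mean absolute percentage error (MAPE) between target and achieved values
def mape(target, measured):
    assert len(target) == len(measured) and len(target) > 0
    return 100.0 * sum(abs(t - m) / abs(t) for t, m in zip(target, measured)) / len(target)

# Hypothetical setpoints vs sensed water-table depths (cm below soil surface)
target   = [5.0, 5.0, 10.0, 10.0, 15.0]
measured = [5.8, 4.5, 11.2, 9.1, 13.0]

print(round(mape(target, measured), 2))
```

Because MAPE normalizes each deviation by its setpoint, a controller tracking both shallow and deep targets is scored on relative, not absolute, tracking error.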

  15. Gradient, contact-free volume transfers minimize compound loss in dose-response experiments.

    PubMed

    Harris, David; Olechno, Joe; Datwani, Sammy; Ellson, Richard

    2010-01-01

    More accurate dose-response curves can be constructed by eliminating aqueous serial dilution of compounds. Traditional serial dilutions that use aqueous diluents can result in errors in dose-response values of up to 4 orders of magnitude for a significant percentage of a compound library. When DMSO is used as the diluent, the errors are reduced but not eliminated. The authors use acoustic drop ejection (ADE) to transfer different volumes of model library compounds, directly creating a concentration gradient series in the receiver assay plate. Sample losses and contamination associated with compound handling are therefore avoided or minimized, particularly in the case of less water-soluble compounds. ADE is particularly well suited for assay miniaturization, but gradient volume dispensing is not limited to miniaturized applications.

  16. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. 
They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.

  17. Viterbi equalization for long-distance, high-speed underwater laser communication

    NASA Astrophysics Data System (ADS)

    Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao

    2017-07-01

    In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.
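A minimal Viterbi equalizer for a two-tap intersymbol-interference channel illustrates the idea of selecting the minimum-error path through the trellis. The channel taps, noise level, and on-off signaling below are illustrative assumptions, not the simulated 532 nm Jerlov-water channels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-tap ISI channel modeling pulse stretching: r[k] = h0*s[k] + h1*s[k-1] + n
h0, h1 = 1.0, 0.6
bits = rng.integers(0, 2, size=200)
prev = np.concatenate(([0], bits[:-1]))
r = h0 * bits + h1 * prev + rng.normal(0, 0.2, size=bits.size)

def viterbi_equalize(r, h0, h1):
    """ML sequence estimate; trellis state = previous symbol (0 or 1)."""
    INF = float("inf")
    cost = [0.0, INF]          # start assuming the previous symbol was 0
    paths = [[], []]
    for rk in r:
        new_cost, new_paths = [INF, INF], [None, None]
        for s in (0, 1):       # current symbol -> next state
            for p in (0, 1):   # previous state
                # Branch metric: squared error vs the expected channel output
                c = cost[p] + (rk - (h0 * s + h1 * p)) ** 2
                if c < new_cost[s]:
                    new_cost[s] = c
                    new_paths[s] = paths[p] + [s]
        cost, paths = new_cost, new_paths
    return np.array(paths[int(np.argmin(cost))])

ve_bits = viterbi_equalize(r, h0, h1)
hard_bits = (r > (h0 + h1) / 2).astype(int)   # symbol-by-symbol threshold, ignores ISI

print("VE errors:  ", int(np.sum(ve_bits != bits)))
print("hard errors:", int(np.sum(hard_bits != bits)))
```

The hard decision sees reduced noise margins on the levels that ISI pushes toward the threshold, while the Viterbi search exploits the channel memory, which is the trade-off the abstract describes.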

  18. Estimation of Koc values for deuterated benzene, toluene, and ethylbenzene, and application to ground water contamination studies.

    PubMed

    Poulson, S R; Drever, J I; Colberg, P J

    1997-11-01

    Sorption partition coefficients between water and organic carbon (Koc) for deuterated benzene, toluene, and ethylbenzene have been estimated by measuring values of the octanol-water partition coefficient (Kow) and HPLC retention factors (k1), which correlate closely with values of Koc. Measured values of log Kow for non-deuterated and deuterated toluene are 2.77 (+/- 0.02) and 2.78 (+/- 0.04), respectively, indicating that, within experimental error, log Koc for deuterated and non-deuterated toluene are the same. The HPLC method provides greater precision, and yields values of delta log Koc (= log Koc [deuterated] - log Koc [non-deuterated]) of -0.021 (+/- 0.001) for benzene, -0.028 (+/- 0.002) for toluene, and -0.035 (+/- 0.003) for ethylbenzene. The small values of delta log Koc demonstrate that deuterated compounds are excellent tracers for the hydrologic behavior of ground water contaminants.
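The arithmetic behind the HPLC route is a log-ratio of retention factors, assuming the reported close correlation between log Koc and the log retention factor with unit slope. The retention factors below are illustrative values chosen so the ratios reproduce the reported delta log Koc; they are not the measured chromatographic data.

```python
import math

# Hypothetical retention factors for non-deuterated and deuterated compounds
k_nondeut = {"benzene": 2.400, "toluene": 4.100, "ethylbenzene": 7.000}
k_deut    = {"benzene": 2.290, "toluene": 3.850, "ethylbenzene": 6.460}

# delta log Koc ~ log10(k_deuterated) - log10(k_non-deuterated)
delta_log_koc = {
    c: math.log10(k_deut[c]) - math.log10(k_nondeut[c]) for c in k_nondeut
}
for compound, delta in delta_log_koc.items():
    print(compound, round(delta, 3))
```

The deltas are all slightly negative (deuterated compounds sorb marginally less), yet so small that the isotopically labeled tracers transport essentially like their non-labeled counterparts.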

  19. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for different sources of error, such as those in the inputs and in the structure of the model.
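The first-order autoregressive output-error description mentioned above can be sketched directly: errors with memory phi produce a stationary band whose width follows from the AR(1) variance formula. The phi, sigma, and hydrograph below are illustrative, not fitted values from these applications.

```python
import numpy as np

rng = np.random.default_rng(42)

# AR(1) output errors: e[t] = phi * e[t-1] + w[t], capturing the persistence
# ("downstream memory") of errors from inaccurate inputs and model structure
phi, sigma_w = 0.8, 0.3
n = 500
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal(0, sigma_w)

# Stationary standard deviation of an AR(1) process
sigma_stat = sigma_w / np.sqrt(1 - phi**2)

# 95% prediction band around a hypothetical simulated hydrograph
sim = 10.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, n))
lower, upper = sim - 1.96 * sigma_stat, sim + 1.96 * sigma_stat

obs = sim + e   # "observations" = simulation plus autocorrelated error
coverage = np.mean((obs >= lower) & (obs <= upper))
print(round(coverage, 3))   # should sit near the nominal 0.95
```

Checking that empirical coverage matches the nominal level while keeping the band narrow is exactly the reliability/precision trade-off the abstract sets as the goal.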

  20. Modelling hourly dissolved oxygen concentration (DO) using dynamic evolving neural-fuzzy inference system (DENFIS)-based approach: case study of Klamath River at Miller Island Boat Ramp, OR, USA.

    PubMed

    Heddam, Salim

    2014-01-01

    In this study, we present the application of an artificial intelligence (AI) model called the dynamic evolving neural-fuzzy inference system (DENFIS), based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, one year (1 January 2009 to 30 December 2009) of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) at Klamath River at Miller Island Boat Ramp, OR, USA, was used for model development. Two DENFIS-based models are presented and compared: (1) an offline-based system, named DENFIS-OF, and (2) an online-based system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performance of the models is evaluated using root mean square error (RMSE), mean absolute error (MAE), the Willmott index of agreement (d) and the correlation coefficient (CC). The lowest RMSE and highest CC values were obtained with the DENFIS-ON method. The results obtained with the DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural network, MLPNN) methods. This study demonstrates that DENFIS-ON outperforms all the other techniques investigated for DO modelling.
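The four skill statistics used in the study can be computed in a few lines of numpy. The observed and simulated DO values below are hypothetical, and the Willmott formula is the standard index of agreement, assumed to match the study's usage.

```python
import numpy as np

def scores(obs, sim):
    """RMSE, MAE, Willmott index of agreement (d), correlation coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mae = np.mean(np.abs(sim - obs))
    om = obs.mean()
    # Willmott's d is bounded by [0, 1], 1 = perfect agreement
    d = 1 - np.sum((sim - obs) ** 2) / np.sum((np.abs(sim - om) + np.abs(obs - om)) ** 2)
    cc = np.corrcoef(obs, sim)[0, 1]
    return rmse, mae, d, cc

# Hypothetical hourly DO concentrations (mg/L): observed vs modelled
obs = [8.1, 7.9, 7.5, 7.2, 7.8, 8.4, 8.9, 9.1]
sim = [8.0, 7.7, 7.6, 7.4, 7.9, 8.2, 8.7, 9.2]

rmse, mae, d, cc = scores(obs, sim)
print(round(rmse, 3), round(mae, 3), round(d, 3), round(cc, 3))
```

Reporting d and CC alongside RMSE and MAE separates pattern agreement from magnitude error, which is why the study ranks models on all four.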

  1. Calculation of distribution coefficients in the SAMPL5 challenge from atomic solvation parameters and surface areas.

    PubMed

    Santos-Martins, Diogo; Fernandes, Pedro Alexandrino; Ramos, Maria João

    2016-11-01

    In the context of SAMPL5, we submitted blind predictions of the cyclohexane/water distribution coefficient (D) for a series of 53 drug-like molecules. Our method is purely empirical and based on the additive contribution of each solute atom to the free energy of solvation in water and in cyclohexane. The contribution of each atom depends on the atom type and on the exposed surface area. Compared with similar methods in the literature, we used a very small set of atomic parameters: only 10 for solvation in water and 1 for solvation in cyclohexane. As a result, the method is protected from overfitting and the error in the blind predictions could be reasonably estimated. Moreover, this approach is fast: it takes only 0.5 s to predict the distribution coefficient for all 53 SAMPL5 compounds, allowing its application in virtual screening campaigns. The performance of our approach (submission 49) is modest but satisfactory in view of its efficiency: the root mean square error (RMSE) was 3.3 log D units for the 53 compounds, while the RMSE of the best performing method (using COSMO-RS) was 2.1 (submission 16). Our method is implemented as a Python script available at https://github.com/diogomart/SAMPL5-DC-surface-empirical .
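    The additive surface-area idea can be sketched as follows. The atom-type parameters below are illustrative placeholders, NOT the fitted values from the paper, and the atom typing is simplified to a plain label; only the structure of the calculation matches the description above.

```python
import math

# Illustrative parameters in kcal/(mol*A^2) -- NOT the paper's fitted values.
WATER_PARAMS = {"C": 0.01, "N": -0.05, "O": -0.06, "polar_H": -0.03}
CYCLOHEXANE_SIGMA = -0.02  # a single parameter for every atom type

RT = 0.593  # kcal/mol at 298 K

def dg_solv_water(atoms):
    """atoms: list of (atom_type, exposed_area_A2) pairs; additive model."""
    return sum(WATER_PARAMS[t] * area for t, area in atoms)

def dg_solv_cyclohexane(atoms):
    return sum(CYCLOHEXANE_SIGMA * area for _, area in atoms)

def log_d(atoms):
    """log10 of the cyclohexane/water distribution coefficient, from the
    transfer free energy water -> cyclohexane."""
    dg_transfer = dg_solv_cyclohexane(atoms) - dg_solv_water(atoms)
    return -dg_transfer / (RT * math.log(10))
```

    With these toy parameters a nonpolar solute gets a positive log D (it prefers cyclohexane) and a polar one a negative log D, which is the qualitative behavior such a model encodes.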

  2. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
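    The kind of mass imbalance described here can be illustrated with a self-contained toy (this is not EPANET's routing algorithm): a short concentration pulse passes a pipe outlet, and a time-driven scheme that samples the concentration only once per water-quality time step under-counts, or entirely misses, the transported mass when the step is too long.

```python
def outlet_mass_time_driven(dt, flow, conc, pulse, horizon):
    """Integrate constituent mass through an outlet by sampling the
    concentration every dt seconds (time-driven water-quality stepping).

    pulse = (t_start, t_end): interval during which conc (mg/L) arrives
    at the outlet; flow in L/s. Returns mass in mg.
    """
    t, mass = 0.0, 0.0
    while t < horizon:
        c = conc if pulse[0] <= t < pulse[1] else 0.0
        mass += c * flow * dt  # rectangle rule with the sampled concentration
        t += dt
    return mass

# A 10-s pulse (5 s .. 15 s) of 10 mg/L at 2 L/s carries exactly 200 mg.
EXACT_MASS = 10.0 * 2.0 * (15.0 - 5.0)
```

    With dt = 1 s the sampled mass matches the exact 200 mg; with dt = 30 s the samples fall at 0 s and 30 s, both outside the pulse, so the scheme reports zero mass — the imbalance appears without any warning from the integrator.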

  3. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
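    The "determine locality, then fit a local regression" step can be sketched in miniature. This stand-in uses a single-feature threshold in place of the clustering/classification step and plain least squares in place of random forests or LASSO; it is a structural illustration, not the authors' framework.

```python
def fit_local_error_models(features, errors, split):
    """Partition training instances by a feature threshold (a stand-in for
    the locality-determination step), then fit a local linear least-squares
    model  error ~ a + b * feature  within each region."""
    models = {}
    for region in (0, 1):
        xs = [f for f in features if (f >= split) == bool(region)]
        ys = [e for f, e in zip(features, errors) if (f >= split) == bool(region)]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        models[region] = (my - b * mx, b)  # (intercept, slope)
    return models

def predict_error(models, split, feature):
    """Route a new instance to its region and apply the local model."""
    a, b = models[int(feature >= split)]
    return a + b * feature
```

    The predicted error can then be added back to the surrogate QoI as a correction (use 1 in the abstract) or aggregated over time (use 2).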

  5. Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.

    PubMed

    Larsson, Elisabeth; Abrahamsson, Leif

    2003-05-01

    The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.

  6. Calculations of atmospheric transmittance in the 11 micrometer window for estimating skin temperature from VISSR infrared brightness temperatures

    NASA Technical Reports Server (NTRS)

    Chesters, D.

    1984-01-01

    An algorithm is presented for calculating the atmospheric transmittance in the 10 to 20 μm spectral band from a known temperature and dewpoint profile, and then using this transmittance to estimate the surface (skin) temperature from a VISSR observation in the 11 μm window. Parameterizations are drawn from the literature for computing the molecular absorption due to the water vapor continuum, water vapor lines, and carbon dioxide lines. The FORTRAN code is documented for this application, and the sensitivity of the derived skin temperature to variations in the model's parameters is calculated. The VISSR calibration uncertainties are identified as the largest potential source of error.
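    The inversion step can be illustrated with a deliberately simplified single-layer window model (ignoring surface emissivity and the reflected downwelling term, and using a single effective atmospheric temperature — assumptions of this sketch, not of the paper): the measured radiance is B(Tb) = τ·B(Ts) + (1−τ)·B(Ta), which is solved for the skin temperature Ts by inverting the Planck function.

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(temp, lam=11e-6):
    """Planck spectral radiance at wavelength lam (m), temperature temp (K)."""
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp)) - 1.0)

def inv_planck(radiance, lam=11e-6):
    """Invert the Planck function: brightness temperature for a radiance."""
    return (H * C / (lam * K)) / math.log(1.0 + 2.0 * H * C**2 / (lam**5 * radiance))

def skin_temperature(tb, tau, t_atm, lam=11e-6):
    """Single-layer window inversion: B(tb) = tau*B(Ts) + (1-tau)*B(t_atm)."""
    b_surface = (planck(tb, lam) - (1.0 - tau) * planck(t_atm, lam)) / tau
    return inv_planck(b_surface, lam)
```

    A round trip (build a brightness temperature from a known skin temperature, then invert it) recovers the input, which is a convenient sanity check before adding real transmittance parameterizations.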

  7. Correction of stream quality trends for the effects of laboratory measurement bias

    USGS Publications Warehouse

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  8. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. When m is small, the dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  9. A Web Application for Validating and Disseminating Surface Energy Balance Evapotranspiration Estimates for Hydrologic Modeling Applications

    NASA Astrophysics Data System (ADS)

    Schneider, C. A.; Aggett, G. R.; Nevo, A.; Babel, N. C.; Hattendorf, M. J.

    2008-12-01

    The western United States faces an increasing threat from drought, and the social, economic, and environmental impacts that come with it. The combination of diminished water supplies and increasing demand for urban and other uses is rapidly depleting surface and ground water reserves traditionally allocated for agricultural use. Quantification of consumptive water use is increasingly important as water resources come under growing pressure from more users and competing interests. Scarce water supplies can be managed more efficiently through use of information and prediction tools accessible via the internet. METRIC (Mapping ET at high Resolution with Internalized Calibration) represents a maturing technology for deriving a remote sensing-based surface energy balance for estimating ET from the earth's surface. This technology has the potential to become widely adopted by water resources communities, providing critical support to a host of water decision support tools. ET images created using METRIC or similar remote-sensing based processing systems could be routinely used as input to operational and planning models for water demand forecasting, reservoir operations, ground-water management, irrigation water supply planning, water rights regulation, and for the improvement, validation, and use of hydrological models. The ET modeling and subsequent validation and distribution of results via the web presented here provides a vehicle through which METRIC ET parameters can be made more accessible to hydrologic modelers. It will enable users of the data to assess the results of the spatially distributed ET modeling and compare them with results from conventional ET estimation methods prior to assimilation in surface and ground water models. 
In addition, this ET-Server application will provide rapid and transparent access to the data enabling quantification of uncertainties due to errors in temporal sampling and METRIC modeling, while the GIS-based analytical tools will facilitate quality assessments associated with the selected spatio-temporal scale of interest.

  10. Localized landslide risk assessment with multi pass L band DInSAR analysis

    NASA Astrophysics Data System (ADS)

    Yun, HyeWon; Rack Kim, Jung; Lin, Shih-Yuan; Choi, YunSoo

    2014-05-01

    In terms of data availability and error correction, landslide forecasting by Differential Interferometric SAR (DInSAR) analysis is not an easy task. In particular, landslides caused by anthropogenic construction activities frequently occur on localized cut slopes in mountainous areas. In such circumstances, it is difficult to attain sufficient accuracy because of external factors that induce error components in electromagnetic wave propagation. For instance, local climate characteristics such as the orographic effect and proximity to a water source can produce significant anomalies in the water vapor distribution and consequently introduce error components into InSAR phase measurements. Moreover, the high-altitude parts of the target area cause stratified tropospheric delay errors in DInSAR measurements. The other obstacle to DInSAR observation over a potential landslide site is the vegetation canopy, which causes decorrelation of the InSAR phase. Thus, rather than C band sensors such as ENVISAT, ERS, and RADARSAT, DInSAR analysis with L band ALOS PALSAR is more advisable. Together with the introduction of L band DInSAR analysis, an improved DInSAR technique to cope with all the above obstacles is necessary. We therefore employed two approaches in this study: StaMPS/MTI (Stanford Method for Persistent Scatterers/Multi-Temporal InSAR; Hooper et al., 2007), which was developed for extracting reliable deformation values through time series analysis, and two pass DInSAR with error term compensation based on external weather information. Since water vapor observation from a spaceborne radiometer is not feasible in this case because of the temporal gap, quantities from the Weather Research and Forecasting (WRF) model with 1 km spatial resolution were used to address the atmospheric phase error in the two pass DInSAR analysis. 
It was also observed that the base DEM offset, combined with the time-dependent perpendicular baselines of the InSAR time series, produces significant errors even in advanced time series techniques such as StaMPS/MTI. We compensated for this algorithmically, together with the use of a high-resolution LIDAR DEM. The target area of this study is the eastern part of the Korean peninsula, where landslides caused by geomorphic factors such as steep topography and localized torrential downpours are a critical issue. The surface deformations from error-corrected two pass DInSAR and StaMPS/MTI are cross-compared and validated against landslide triggering factors such as vegetation, slope, and geological properties. The study will be further extended to the application of future SAR sensors by incorporating dynamic analysis of topography to implement a practical landslide forecasting scheme.

  11. Can weighing lysimeter ET represent surrounding field ET well enough to test flux station measurements of daily and sub-daily ET?

    USDA-ARS?s Scientific Manuscript database

    Weighing lysimeters and neutron probes are two tools used to determine the change in soil water storage that is needed to solve for evapotranspiration (ET) using the soil water balance equation. Errors in the soil water balance due to errors in determination of precipitation and irrigation are commo...

  12. THE DETERMINATION OF BORON IN METHYL ALCOHOL DISTILLATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spicer, G.S.

    1959-07-01

    The distillate is mixed with water, sodium hydroxide, and glycerol and evaporated to dryness. After ignition, the boron is determined colorimetrically by its reaction with curcumin. This method is applicable to pure distillates of methanol of less than 150 ml in volume. About 2 to 25 μg is a suitable quantity of boron for determination, and the limit of detection is 0.05 μg. In the above range the error should not exceed plus or minus 2%. (auth)

  13. The Snowmelt-Runoff Model (SRM) user's manual

    NASA Technical Reports Server (NTRS)

    Martinec, J.; Rango, A.; Major, E.

    1983-01-01

    A manual is presented to provide a means by which a user may apply the snowmelt runoff model (SRM) unaided. Model structure, conditions of application, and data requirements, including remote sensing, are described. Guidance is given for determining various model variables and parameters. Possible sources of error are discussed, and conversion of SRM from the simulation mode to the operational forecasting mode is explained. A computer program for running SRM is presented that is easily adaptable to most systems used by water resources agencies.
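    The core of SRM is a single daily recursion, commonly written as Q_{n+1} = [c_S·a·(T+ΔT)·S + c_R·P]·(A·10000/86400)·(1−k) + Q_n·k, with melt and precipitation depths in cm, basin area in km², and discharge in m³/s. The sketch below transcribes that formula; the numbers in the test are arbitrary illustrative inputs, not values from the manual.

```python
def srm_discharge(q_n, c_s, c_r, a, t_dd, dt_dd, s, p, area_km2, k):
    """One step of the SRM runoff recursion.

    q_n      -- discharge on day n (m^3/s)
    c_s, c_r -- runoff coefficients for snowmelt and rain
    a        -- degree-day factor (cm / (degC * day))
    t_dd     -- degree-days for the day (degC * day); dt_dd is its lapse
                adjustment to the mean hypsometric elevation
    s        -- snow-covered fraction of the basin (0..1)
    p        -- precipitation contributing to runoff (cm)
    area_km2 -- basin (or zone) area in km^2
    k        -- recession coefficient (0..1)
    """
    input_cm = c_s * a * (t_dd + dt_dd) * s + c_r * p
    # 1 cm over 1 km^2 = 10000 m^3 per day; /86400 converts to m^3/s.
    return input_cm * (area_km2 * 10000.0 / 86400.0) * (1.0 - k) + q_n * k
```

    The (1−k)/k split is what gives SRM its one-day memory: today's melt and rain input is blended with yesterday's discharge through the recession coefficient.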

  14. High-Frequency Observation of Water Spectrum and Its Application in Monitoring of Dynamic Variation of Suspended Materials in the Hangzhou Bay.

    PubMed

    Dai, Qian; Pan, De-lu; He, Xian-qiang; Zhu, Qian-kun; Gong, Fang; Huang, Hai-qing

    2015-11-01

    In situ measurement of the water spectrum is the basis of the validation of ocean color remote sensing. The traditional method of obtaining the water spectrum is shipboard measurement at a limited number of stations, which makes it difficult to meet the validation requirements of ocean color remote sensing in highly dynamic coastal waters. To overcome this shortcoming, continuous observing systems for the water spectrum have been developed around the world. However, so far, there are still few high-frequency observation systems for the water spectrum in coastal waters, especially in highly turbid and highly dynamic waters. Here, we established a high-frequency water-spectrum observing system based on a tower in the Hangzhou Bay. The system measures the water spectrum at a step of 3 minutes, which can fully match satellite observation. In this paper, we primarily developed a data processing method for the tower-based high-frequency water spectrum data, to realize automatic judgment of clear sky, sun glint, platform shadow, and weak illumination, etc., and verified the processing results. The results show that the normalized water-leaving radiance spectra obtained through tower observation have relatively high consistency with the shipboard measurement results, with a correlation coefficient of more than 0.99 and an average relative error of 9.96%. In addition, the long-term observation capability of the tower-based high-frequency water-spectrum observing system was evaluated, and the results show that although the system has run for one year, the normalized water-leaving radiance obtained by this system has good consistency with synchronous measurements by a portable ASD spectrometer in terms of spectral shape and value, with a correlation coefficient of more than 0.90 and an average relative error of 6.48%. 
Moreover, the water spectra from high-frequency observation by the system can be used to effectively monitor the rapid dynamic variation in the concentration of suspended materials with the tide. The tower-based high-frequency water-spectrum observing system provides rich in situ spectral data for the validation of ocean color remote sensing in turbid waters, especially for validation of high temporal-resolution geostationary satellite ocean color remote sensing.

  15. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

    A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
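    The two-condition inversion can be illustrated with a deliberately simplified linear inflow model, q = α·T·(h − w), where w is the open-borehole water level; this is a stand-in for the numerical model described above, good only for showing why two flow conditions make the inversion unique.

```python
def invert_zone(q_ambient, q_pumped, w_ambient, w_pumped):
    """Invert one zone's effective transmissivity (alpha*T) and hydraulic
    head from inflows measured under two flow conditions.

    Simplified model: q = alpha * T * (h - w).  Subtracting the two
    conditions eliminates the unknown head:
        q_ambient - q_pumped = alpha*T * (w_pumped - w_ambient)
    """
    t_eff = (q_ambient - q_pumped) / (w_pumped - w_ambient)  # alpha * T
    head = w_ambient + q_ambient / t_eff
    return t_eff, head
```

    With a single flow condition, q alone cannot separate a high-transmissivity zone with a small head difference from a low-transmissivity zone with a large one; the second condition (and the measured water-level change) resolves the ambiguity, exactly as the abstract states.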

  16. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule, which starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random walk step. It approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size, and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper, a sensitivity test is first carried out to study how accurate the snow emission models and how informative the snow priors need to be to keep the SWE error within a certain bound. Synthetic TB simulated from the measured snow properties plus a 2-K observation error will be used for this purpose. The test aims to provide guidance on MCMC application under different circumstances. The method is then applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using TB measured by ground-based radiometers at different bands. Building on the previous work, the errors in these practical cases are studied, and the error sources are separated and quantified.
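    The random-walk Metropolis scheme described above can be sketched for a single parameter (SWE). The forward model below is a hypothetical linear stand-in for a real snow emission model, and the prior and step sizes are arbitrary; only the accept/reject structure matches the method.

```python
import math
import random

def tb_model(swe_mm):
    """Hypothetical linear stand-in for a snow emission model (NOT a real
    one): brightness temperature (K) falls as SWE (mm) grows."""
    return 260.0 - 0.5 * swe_mm

def log_posterior(swe, tb_obs, obs_sigma=2.0, prior_mu=150.0, prior_sigma=50.0):
    """Gaussian likelihood with the 2-K observation error, Gaussian prior."""
    log_lik = -0.5 * ((tb_obs - tb_model(swe)) / obs_sigma) ** 2
    log_prior = -0.5 * ((swe - prior_mu) / prior_sigma) ** 2
    return log_lik + log_prior

def metropolis_swe(tb_obs, n_iter=5000, step=5.0, seed=1):
    """Random-walk Metropolis: propose a perturbed SWE, accept with
    probability min(1, posterior ratio), record the chain."""
    rng = random.Random(seed)
    swe = 100.0
    chain = []
    for _ in range(n_iter):
        prop = swe + rng.gauss(0.0, step)
        delta = log_posterior(prop, tb_obs) - log_posterior(swe, tb_obs)
        if delta >= 0.0 or rng.random() < math.exp(delta):
            swe = prop
        chain.append(swe)
    return chain
```

    After discarding a burn-in, the chain's spread directly expresses how the 2-K observation error and the prior width trade off in the retrieved SWE, which is the sensitivity question the abstract poses.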

  17. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11, below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  18. Cluster-Continuum Calculations of Hydration Free Energies of Anions and Group 12 Divalent Cations.

    PubMed

    Riccardi, Demian; Guo, Hao-Bo; Parks, Jerry M; Gu, Baohua; Liang, Liyuan; Smith, Jeremy C

    2013-01-08

    Understanding aqueous phase processes involving group 12 metal cations is relevant to both environmental and biological sciences. Here, quantum chemical methods and polarizable continuum models are used to compute the hydration free energies of a series of divalent group 12 metal cations (Zn²⁺, Cd²⁺, and Hg²⁺) together with Cu²⁺ and the anions OH⁻, SH⁻, Cl⁻, and F⁻. A cluster-continuum method is employed, in which gas-phase clusters of the ion and explicit solvent molecules are immersed in a dielectric continuum. Two approaches to define the size of the solute-water cluster are compared, in which the number of explicit waters used is either held constant or determined variationally as that of the most favorable hydration free energy. Results obtained with various polarizable continuum models are also presented. Each leg of the relevant thermodynamic cycle is analyzed in detail to determine how different terms contribute to the observed mean signed error (MSE) and the standard deviation of the error (STDEV) between theory and experiment. The use of a constant number of water molecules for each set of ions is found to lead to predicted relative trends that benefit from error cancellation. Overall, the best results are obtained with MP2 and the Solvent Model D polarizable continuum model (SMD), with eight explicit water molecules for anions and 10 for the metal cations, yielding a STDEV of 2.3 kcal mol⁻¹ and MSE of 0.9 kcal mol⁻¹ between theoretical and experimental hydration free energies, which range from −72.4 kcal mol⁻¹ for SH⁻ to −505.9 kcal mol⁻¹ for Cu²⁺. Using B3PW91 with DFT-D3 dispersion corrections (B3PW91-D) and SMD yields a STDEV of 3.3 kcal mol⁻¹ and MSE of 1.6 kcal mol⁻¹, to which adding MP2 corrections from smaller divalent metal cation-water clusters yields very good agreement with the full MP2 results.
    Using B3PW91-D and SMD, with two explicit water molecules for anions and six for divalent metal cations, also yields reasonable agreement with experimental values, due in part to fortuitous error cancellation associated with the metal cations. Overall, the results indicate that the careful application of quantum chemical cluster-continuum methods provides valuable insight into aqueous ionic processes that depend on both local and long-range electrostatic interactions with the solvent.
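
    The thermodynamic cycle described above can be sketched numerically. The function below combines the legs of a simplified cluster-continuum cycle (the full method also carries standard-state and water-concentration corrections, omitted here), together with the MSE and STDEV statistics used to compare theory with experiment; all numeric inputs in the example are hypothetical.

```python
import math

def hydration_free_energy(dG_gas_cluster, dG_solv_cluster, dG_solv_water, n):
    """Simplified cluster-continuum cycle for an ion X:
    gas-phase clustering X + n H2O -> X(H2O)n, continuum solvation of
    the cluster, minus the solvation of the n displaced waters."""
    return dG_gas_cluster + dG_solv_cluster - n * dG_solv_water

def error_stats(theory, experiment):
    """Mean signed error and sample standard deviation of the error."""
    errs = [t - e for t, e in zip(theory, experiment)]
    mse = sum(errs) / len(errs)
    stdev = math.sqrt(sum((x - mse) ** 2 for x in errs) / (len(errs) - 1))
    return mse, stdev

# Hypothetical cycle legs in kcal/mol for an 8-water anion cluster:
dG = hydration_free_energy(-50.0, -60.0, -6.0, n=8)   # -> -62.0
```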

  19. Application of the two-film model to the volatilization of acetone and t-butyl alcohol from water as a function of temperature

    USGS Publications Warehouse

    Rathbun, R.E.; Tai, D.Y.

    1988-01-01

    The two-film model is often used to describe the volatilization of organic substances from water. This model assumes uniformly mixed water and air phases separated by thin films of water and air in which mass transfer is by molecular diffusion. Mass-transfer coefficients for the films, commonly called film coefficients, are related through the Henry's law constant and the model equation to the overall mass-transfer coefficient for volatilization. The films are modeled as two resistances in series, resulting in additive resistances. The two-film model and the concept of additivity of resistances were applied to experimental data for acetone and t-butyl alcohol. Overall mass-transfer coefficients for the volatilization of acetone and t-butyl alcohol from water were measured in the laboratory in a stirred constant-temperature bath. Measurements were completed for six water temperatures, each at three water mixing conditions. Wind-speed was constant at about 0.1 meter per second for all experiments. Oxygen absorption coefficients were measured simultaneously with the measurement of the acetone and t-butyl alcohol mass-transfer coefficients. Gas-film coefficients for acetone, t-butyl alcohol, and water were determined by measuring the volatilization fluxes of the pure substances over a range of temperatures. Henry's law constants were estimated from data from the literature. The combination of high resistance in the gas film for solutes with low values of the Henry's law constants has not been studied previously. Calculation of the liquid-film coefficients for acetone and t-butyl alcohol from measured overall mass-transfer and gas-film coefficients, estimated Henry's law constants, and the two-film model equation resulted in physically unrealistic, negative liquid-film coefficients for most of the experiments at the medium and high water mixing conditions. 
An analysis of the two-film model equation showed that when the percentage resistance in the gas film is large and the gas-film resistance approaches the overall resistance in value, the calculated liquid-film coefficient becomes extremely sensitive to errors in the Henry's law constant. The negative coefficients were attributed to this sensitivity and to errors in the estimated Henry's law constants. Liquid-film coefficients for the absorption of oxygen were correlated with the stirrer Reynolds number and the Schmidt number. Application of this correlation with the experimental conditions and a molecular-diffusion coefficient adjustment resulted in values of the liquid-film coefficients for both acetone and t-butyl alcohol within the range expected for all three mixing conditions. Comparison of Henry's law constants calculated from these film coefficients and the experimental data with the constants calculated from literature data showed that the differences were small relative to the errors reported in the literature as typical for the measurement or estimation of Henry's law constants for hydrophilic compounds such as ketones and alcohols. Temperature dependence of the mass-transfer coefficients was expressed in two forms. The first, based on thermodynamics, assumed the coefficients varied as the exponential of the reciprocal absolute temperature. The second empirical approach assumed the coefficients varied as the exponential of the absolute temperature. Both of these forms predicted the temperature dependence of the experimental mass-transfer coefficients with little error for most of the water temperature range likely to be found in streams and rivers. Liquid-film and gas-film coefficients for acetone and t-butyl alcohol were similar in value. However, depending on water mixing conditions, overall mass-transfer coefficients for acetone were from two to four times larger than the coefficients for t-butyl alcohol. 
This difference in behavior of the coefficients resulted because the Henry's law constant for acetone was about three times larger than that of
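
    The sensitivity described above follows directly from the additivity of resistances, 1/K_OL = 1/k_l + 1/(H_c·k_g), with H_c the dimensionless Henry's law constant. A minimal sketch (with hypothetical coefficient values) shows how the back-calculated liquid-film coefficient turns unphysical once a small error in H_c pushes the estimated gas-film resistance past the measured overall resistance:

```python
def liquid_film_coeff(K_OL, k_g, H_c):
    """Back out the liquid-film coefficient from the two-film model,
        1/K_OL = 1/k_l + 1/(H_c * k_g),
    where H_c is the dimensionless Henry's law constant. Returns None
    (physically meaningless, i.e. negative) when the estimated gas-film
    resistance already exceeds the measured overall resistance."""
    r_liquid = 1.0 / K_OL - 1.0 / (H_c * k_g)
    return 1.0 / r_liquid if r_liquid > 0 else None

# Hypothetical values: a ~10% error in H_c flips the result from a
# plausible coefficient to a negative (here: None) one.
k_l_ok  = liquid_film_coeff(K_OL=1e-6, k_g=1e-2, H_c=1.05e-4)
k_l_bad = liquid_film_coeff(K_OL=1e-6, k_g=1e-2, H_c=0.95e-4)
```

This is exactly the failure mode reported above for the medium and high water mixing conditions.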

  20. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
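
    The idea of approximating the truncation terms from the discrete residual of a reconstructed field can be illustrated on a 1D central-difference stand-in (an assumption for illustration; the paper works with meshless Finite Point approximations on unstructured discretizations). Applying the discrete operator to a smooth reconstruction of the solution, here taken as the exact solution itself, and subtracting the right-hand side yields a nodal residual that tracks the leading truncation error (h²/12)·u'''':

```python
import math

h = 0.01
xs = [i * h for i in range(1, 200)]          # interior nodes on (0, 2)

def d2(u, x):
    """Second-order central difference, the discrete operator L_h."""
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h ** 2

# Model problem u'' = f with u(x) = sin(x), f(x) = -sin(x): the nodal
# differential residual L_h(u_rec) - f of a smooth reconstruction
# (here the exact u, the ideal reconstruction) estimates the
# truncation error, whose leading term is (h**2 / 12) * u''''.
residuals = [abs(d2(math.sin, x) + math.sin(x)) for x in xs]
tau_pred = h ** 2 / 12.0                      # since max|u''''| = 1
```

The maximum nodal residual matches the predicted truncation magnitude to well under a percent, which is what makes the residual a usable surrogate for driving adaptive refinement.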

  1. Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David J; Mueller, Frank; Engelmann, Christian

    Faults have become the norm rather than the exception for high-end computing on clusters with tens or hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself as differing MPI message data between replicas, we study the protocols best suited to detecting and correcting corrupted MPI data. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption, with runtime overheads between 0% and 30% compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
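
    The replica-comparison idea can be sketched in a few lines (a conceptual illustration only, not the actual RedMPI protocol or its MPI profiling-layer implementation): with triple redundancy, divergent message payloads are detected by comparison and corrected by majority vote, whereas double redundancy can only detect.

```python
from collections import Counter

def vote(payloads):
    """Compare the same logical MPI message as produced by redundant
    replicas: any disagreement flags corruption; with 3 replicas a
    majority vote also recovers the correct payload."""
    counts = Counter(payloads)
    winner, n = counts.most_common(1)[0]
    detected = len(counts) > 1
    corrected = detected and n > len(payloads) // 2
    return winner, detected, corrected

# One replica delivers a corrupted buffer; the vote repairs it.
msg, detected, corrected = vote([b"0042", b"0042", b"0d42"])
```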

  2. Determination of immersion factors for radiance sensors in marine and inland waters: a semi-analytical approach using refractive index approximation

    NASA Astrophysics Data System (ADS)

    Dev, Pravin J.; Shanmugam, P.

    2016-05-01

    Underwater radiometers are generally calibrated in air using a standard source. Immersion factors are required for these radiometers to account for the change in the in-water measurements with respect to in-air measurements due to the different refractive index of the medium. The immersion factors previously determined for the RAMSES series of commercial radiometers manufactured by TriOS are applicable to clear oceanic waters. In typical inland and turbid productive coastal waters, these experimentally determined immersion factors yield significantly large errors in water-leaving radiances (Lw) and hence remote sensing reflectances (Rrs). To overcome this limitation, a semi-analytical method based on refractive index approximation is proposed in this study, with the aim of obtaining reliable Lw and Rrs from RAMSES radiometers for turbid and productive waters within coastal and inland water environments. We also briefly show the effect of pure-water immersion factors (Ifw) and the newly derived If on Lw and Rrs for clear and turbid waters. The remaining problems other than the immersion factor coefficients, such as transmission and the air-water and water-air Fresnel reflectances, are also discussed.
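
    For orientation only, a first-order sketch of how an immersion factor arises. This is a simplified assumption, not the paper's semi-analytical method: real RAMSES immersion factors also depend on window geometry and internal reflections. Radiance crossing a flat interface scales as the Fresnel transmittance times the squared refractive-index ratio, so comparing the in-water path with the in-air calibration path gives:

```python
def fresnel_t(n1, n2):
    """Normal-incidence Fresnel transmittance from medium 1 into medium 2."""
    r = ((n1 - n2) / (n1 + n2)) ** 2
    return 1.0 - r

def immersion_factor(n_w, n_g=1.5):
    """First-order immersion factor for a flat-window radiance sensor
    (simplified assumption): radiance crossing an interface scales as
    t * (n2/n1)**2, so comparing the water-glass path with the
    air-glass calibration path gives If = n_w**2 * t_ag / t_wg."""
    t_ag = fresnel_t(1.0, n_g)   # air -> window, during calibration
    t_wg = fresnel_t(n_w, n_g)   # water -> window, during deployment
    return n_w ** 2 * t_ag / t_wg
```

For n_w ≈ 1.34 this yields a factor of about 1.7, the right order of magnitude for radiance sensors; the semi-analytical approach of the abstract adjusts for the actual refractive index of the water type rather than assuming clear-ocean values.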

  3. Fault Tolerance for VLSI Multicomputers

    DTIC Science & Technology

    1985-08-01

    that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error

  4. Observations of cloud liquid water path over oceans: Optical and microwave remote sensing methods

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Rossow, William B.

    1994-01-01

    Published estimates of cloud liquid water path (LWP) from satellite-measured microwave radiation show little agreement, even about the relative magnitudes of LWP in the tropics and midlatitudes. To understand these differences and to obtain more reliable estimates, optical and microwave LWP retrieval methods are compared using the International Satellite Cloud Climatology Project (ISCCP) and special sensor microwave/imager (SSM/I) data. Errors in microwave LWP retrieval associated with uncertainties in surface, atmosphere, and cloud properties are assessed. Sea surface temperature may not produce great LWP errors, if accurate contemporaneous measurements are used in the retrieval. An uncertainty in estimated near-surface wind speed as high as 2 m/s produces uncertainty in LWP of about 5 mg/sq cm. Cloud liquid water temperature has only a small effect on LWP retrievals (rms errors less than 2 mg/sq cm), if errors in the temperature are less than 5 C; however, such errors can produce spurious variations of LWP with latitude and season. Errors in atmospheric column water vapor (CWV) are strongly coupled with errors in LWP (for some retrieval methods), causing errors as large as 30 mg/sq cm. Because microwave radiation is much less sensitive to clouds with small LWP (less than 7 mg/sq cm) than visible wavelength radiation, the microwave results are very sensitive to the process used to separate clear and cloudy conditions. Different cloud detection sensitivities in different microwave retrieval methods bias estimated LWP values. Comparing ISCCP and SSM/I LWPs, we find that the two estimated values are consistent in global, zonal, and regional means for warm, nonprecipitating clouds, which have average LWP values of about 5 mg/sq cm and occur much more frequently than precipitating clouds. Ice water path (IWP) can be roughly estimated from the differences between ISCCP total water path and SSM/I LWP for cold, nonprecipitating clouds.
IWP in the winter hemisphere is about 3 times the LWP but only half the LWP in the summer hemisphere. Precipitating clouds contribute significantly to monthly, zonal mean LWP values determined from microwave, especially in the intertropical convergence zone (ITCZ), because they have almost 10 times the liquid water (cloud plus precipitation) of nonprecipitating clouds on average. There are significant differences among microwave LWP estimates associated with the treatment of precipitating clouds.

  5. High bandwidth underwater optical communication.

    PubMed

    Hanson, Frank; Radic, Stojan

    2008-01-10

    We report error-free underwater optical transmission measurements at 1 Gbit/s (10⁹ bits/s) over a 2 m path in a laboratory water pipe with up to 36 dB of extinction. The source at 532 nm was derived from a 1064 nm continuous-wave laser diode that was intensity modulated, amplified, and frequency doubled in periodically poled lithium niobate. Measurements were made over a range of extinction by the addition of a Mg(OH)₂ and Al(OH)₃ suspension to the water path, and we were not able to observe any evidence of temporal pulse broadening. Results of Monte Carlo simulations over ocean water paths of several tens of meters indicate that optical communication data rates >1 Gbit/s can be supported and are compatible with high-capacity data transfer applications that require no physical contact.
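
    The reported 36 dB of extinction over the 2 m path can be converted to a beam-attenuation coefficient and a transmitted power fraction with Beer-Lambert arithmetic:

```python
import math

def attenuation_coeff(extinction_db, path_m):
    """Beam attenuation c (1/m) for P = P0*exp(-c*L), given an
    extinction of 10*log10(P0/P) decibels over a path of L metres."""
    return extinction_db * math.log(10.0) / (10.0 * path_m)

def transmitted_fraction(extinction_db):
    return 10.0 ** (-extinction_db / 10.0)

c = attenuation_coeff(36.0, 2.0)    # ~4.14 1/m, i.e. ~8.3 attenuation lengths
frac = transmitted_fraction(36.0)   # ~2.5e-4 of the launched power arrives
```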

  6. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Han, Byunghyun; Mostafid, M. Erfan

    2012-02-15

    Highlights: • Photoacoustic infrared spectroscopy tested for measuring tracer gas in landfills. • Measurement errors for tracer gases were 1-3% in landfill gas. • Background signals from landfill gas result in elevated limits of detection. • Technique is much less expensive and easier to use than GC. - Abstract: Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has been traditionally used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF₆), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated for by injecting larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences.

  7. Merging gauge and satellite rainfall with specification of associated uncertainty across Australia

    NASA Astrophysics Data System (ADS)

    Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish

    2013-08-01

    Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
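
    The linearised weighting in the third step is inverse-variance weighting: each dataset is weighted by the reciprocal of its error variance, which minimises the variance of the merged value and also supplies the standard error reported with the retrospective dataset. A minimal sketch for one grid cell (hypothetical numbers):

```python
def merge(obs_a, var_a, obs_b, var_b):
    """Inverse-variance weighted merge of two rainfall estimates for one
    grid cell; the merged variance is never larger than either input."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    merged = w_a * obs_a + (1.0 - w_a) * obs_b
    merged_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return merged, merged_var

# Gauge analysis (dense network, low variance) vs satellite at one cell (mm):
est, est_var = merge(obs_a=80.0, var_a=4.0, obs_b=95.0, var_b=36.0)
```

Where the gauge network is sparse, var_a grows, the weight shifts toward the satellite estimate, and the merged standard error grows accordingly — matching the behaviour reported above.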

  8. Application of Consider Covariance to the Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lundberg, John B.

    1996-01-01

    The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
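
    The effect of an unestimated ("consider") parameter can be shown in the linear least-squares setting that underlies each EKF update (a simplified batch illustration with hypothetical numbers, not the recursive consider-filter equations): the honest error covariance is the computed covariance plus a term mapping the neglected parameter's variance through the estimator's sensitivity to it.

```python
import numpy as np

# Linear model y = H x + G p + v: the state x is estimated, the bias
# parameter p is NOT estimated (only "considered"), v ~ N(0, sigma2*I).
H = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
G = np.array([[1.0], [1.0], [1.0], [1.0]])    # common unmodelled bias
sigma2, P_pp = 0.1, 0.5                       # noise / parameter variances

A = np.linalg.inv(H.T @ H) @ H.T              # least-squares gain
P_computed = sigma2 * np.linalg.inv(H.T @ H)  # covariance the filter believes
S = A @ G                                     # sensitivity of x-hat to p
P_consider = P_computed + P_pp * (S @ S.T)    # honest error covariance
```

Here the neglected bias maps entirely onto the first (intercept-like) state, whose error variance grows from 0.07 to 0.57 while the second state is unaffected: an error of omission that the filter's own covariance never reports.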

  9. Stream-flow forecasting using extreme learning machines: A case study in a semi-arid region in Iraq

    NASA Astrophysics Data System (ADS)

    Yaseen, Zaher Mundher; Jaafar, Othman; Deo, Ravinesh C.; Kisi, Ozgur; Adamowski, Jan; Quilty, John; El-Shafie, Ahmed

    2016-11-01

    Monthly stream-flow forecasting can yield important information for hydrological applications including sustainable design of rural and urban water management systems, optimization of water resource allocations, water use, pricing and water quality assessment, and agriculture and irrigation operations. The motivation for exploring and developing expert predictive models is an ongoing endeavor for hydrological applications. In this study, the potential of a relatively new data-driven method, namely the extreme learning machine (ELM) method, was explored for forecasting monthly stream-flow discharge rates in the Tigris River, Iraq. The ELM algorithm is a single-layer feedforward neural network (SLFNs) which randomly selects the input weights, hidden layer biases and analytically determines the output weights of the SLFNs. Based on the partial autocorrelation functions of historical stream-flow data, a set of five input combinations with lagged stream-flow values are employed to establish the best forecasting model. A comparative investigation is conducted to evaluate the performance of the ELM compared to other data-driven models: support vector regression (SVR) and generalized regression neural network (GRNN). The forecasting metrics defined as the correlation coefficient (r), Nash-Sutcliffe efficiency (ENS), Willmott's Index (WI), root-mean-square error (RMSE) and mean absolute error (MAE) computed between the observed and forecasted stream-flow data are employed to assess the ELM model's effectiveness. The results revealed that the ELM model outperformed the SVR and the GRNN models across a number of statistical measures. In quantitative terms, superiority of ELM over SVR and GRNN models was exhibited by ENS = 0.578, 0.378 and 0.144, r = 0.799, 0.761 and 0.468 and WI = 0.853, 0.802 and 0.689, respectively and the ELM model attained lower RMSE value by approximately 21.3% (relative to SVR) and by approximately 44.7% (relative to GRNN). 
Based on the findings of this study, several recommendations were suggested for further exploration of the ELM model in hydrological forecasting problems.
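
    A minimal ELM sketch (synthetic data standing in for the lagged stream-flow inputs; not the authors' configuration): input weights and hidden biases are drawn at random, and only the output weights are solved for, analytically, with the Moore-Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random input weights and hidden biases;
    output weights solved analytically by least squares (pseudoinverse)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical monthly series: forecast x_t from the lags x_{t-1}, x_{t-2}.
t = np.linspace(0.0, 12.0, 240)
flow = np.sin(t) + 1.5              # synthetic stand-in for stream-flow
X = np.column_stack([flow[1:-1], flow[:-2]])
y = flow[2:]
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

Because training is a single linear solve rather than iterative weight updates, fitting is very fast, which is the method's main appeal for hydrological forecasting.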

  10. Experimental findings on the underwater measurements uncertainty of speed of sound and the alignment system

    NASA Astrophysics Data System (ADS)

    Santos, T. Q.; Alvarenga, A. V.; Oliveira, D. P.; Mayworm, R. C.; Souza, R. M.; Costa-Félix, R. P. B.

    2016-07-01

    Speed of sound is an important quantity for characterizing reference materials for ultrasonic applications. Alignment between the transducer and the test body is a key requirement for reliable and consistent measurement. The aim of this work is to evaluate the influence of the alignment system on the expanded uncertainty of such measurements. A stainless steel cylinder was first calibrated on an out-of-water system typically used for the calibration of non-destructive test blocks. Afterwards, the cylinder was calibrated underwater with two distinct alignment systems: fixed and mobile. The values were statistically compared to the out-of-water measurement, considered the gold standard for this application. For both alignment systems, the normalized error was less than 0.8, leading to the conclusion that the underwater and out-of-water measurement systems do not diverge significantly. The gold-standard uncertainty was 2.7 m·s⁻¹, whilst the fixed underwater system resulted in 13 m·s⁻¹ and the mobile alignment system achieved 6.6 m·s⁻¹. Following validation of the underwater speed-of-sound measurement system, it will be applied to certify Encapsulated Tissue Mimicking Material as a reference material for biotechnology applications.
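
    The comparison criterion used above is the normalized error, E_n = |x_a − x_b| / sqrt(U_a² + U_b²), with |E_n| ≤ 1 indicating agreement within the expanded uncertainties. A sketch with hypothetical speed values and the uncertainties quoted in the abstract:

```python
import math

def normalized_error(x_a, u_a, x_b, u_b):
    """E_n = |x_a - x_b| / sqrt(U_a**2 + U_b**2); agreement within the
    expanded uncertainties requires E_n <= 1."""
    return abs(x_a - x_b) / math.sqrt(u_a ** 2 + u_b ** 2)

# Hypothetical speeds of sound (m/s) with the expanded uncertainties
# quoted above: 2.7 m/s out of water, 13 m/s for the fixed underwater system.
en = normalized_error(5790.0, 2.7, 5800.0, 13.0)   # ~0.75, consistent
```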

  11. Estimating irrigation water demand in the Moroccan Drâa Valley using contingent valuation.

    PubMed

    Storm, Hugo; Heckelei, Thomas; Heidecke, Claudia

    2011-10-01

    Irrigation water management is crucial for agricultural production and livelihood security in Morocco as in many other parts of the world. For the implementation of effective water management, knowledge about farmers' demand for irrigation water is crucial to assess reactions to water pricing policy, to establish a cost-benefit analysis of water supply investments or to determine the optimal water allocation between different users. Previously used econometric methods providing this information often have prohibitive data requirements. In this paper, the Contingent Valuation Method (CVM) is adjusted to derive a demand function for irrigation water from farmers' willingness to pay for one additional unit of surface water or groundwater. An application in the Middle Drâa Valley in Morocco shows that the method provides reasonable results in an environment with limited data availability. For analysing the censored survey data, the Least Absolute Deviation estimator was found to be a more suitable alternative to the Tobit model as errors are heteroscedastic and non-normally distributed. The adjusted CVM to derive demand functions is especially attractive for water-scarce countries under limited data availability. Copyright © 2011 Elsevier Ltd. All rights reserved.
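
    The estimator choice can be sketched as follows (hypothetical data, not the survey's): the Least Absolute Deviation fit minimises the sum of absolute residuals, so a heavy-tailed outlier pulls it far less than ordinary least squares.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical willingness-to-pay data with one heavy outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.0, 6.1, 7.9, 10.0, 30.0])   # last point is an outlier
X = np.column_stack([np.ones_like(x), x])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares

def sad(beta):
    """Sum of absolute deviations, the LAD objective."""
    return float(np.sum(np.abs(y - X @ beta)))

# LAD (median regression): far less sensitive to the outlier than OLS.
beta_lad = minimize(sad, beta_ols, method="Nelder-Mead").x
```

In practice LAD is often solved by linear programming or quantile regression; the derivative-free minimiser above is just the shortest self-contained route to the same fit.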

  12. Simulating the fate of water in field soil crop environment

    NASA Astrophysics Data System (ADS)

    Cameira, M. R.; Fernando, R. M.; Ahuja, L.; Pereira, L.

    2005-12-01

    This paper presents an evaluation of the Root Zone Water Quality Model (RZWQM) for assessing the fate of water in the soil-crop environment at the field scale under the particular conditions of a Mediterranean region. The RZWQM model is a one-dimensional dual porosity model that allows flow in macropores. It integrates the physical, biological and chemical processes occurring in the root zone, allowing the simulation of a wide spectrum of agricultural management practices. This study involved the evaluation of the soil, hydrologic and crop development sub-models within the RZWQM for two distinct agricultural systems, one consisting of a grain corn planted in a silty loam soil, irrigated by level basins and the other a forage corn planted in a sandy soil, irrigated by sprinklers. Evaluation was performed at two distinct levels. At the first level the model capability to fit the measured data was analyzed (calibration). At the second level the model's capability to extrapolate and predict the system behavior for conditions different than those used when fitting the model was assessed (validation). In a subsequent paper the same type of evaluation is presented for the nitrogen transformation and transport model. At the first level a change in the crop evapotranspiration (ETc) formulation was introduced, based upon the definition of the effective leaf area, resulting in a 51% decrease in the root mean square error of the ETc simulations. As a result the simulation of the root water uptake was greatly improved. A new bottom boundary condition was implemented to account for the presence of a shallow water table. This improved the simulation of the water table depths and consequently the soil water evolution within the root zone. The soil hydraulic parameters and the crop variety specific parameters were calibrated in order to minimize the simulation errors of soil water and crop development.
    At the second level crop yield was predicted with an error of 1.1 and 2.8% for grain and forage corn, respectively. Soil water was predicted with an efficiency ranging from 50 to 95% for the silty loam soil and between 56 and 72% for the sandy soil. The proposed calibration procedure allowed the model to predict crop development, yield and the water balance terms with accuracy that is acceptable in practical applications for complex and spatially variable field conditions. An iterative method, based upon detailed experimental data on soils and crops, was required to account for the strong interaction between the different model components.
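
    The two accuracy measures quoted above, root mean square error and prediction efficiency (taken here to be the Nash-Sutcliffe form, the usual choice in this context, as an assumption), can be computed as:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Model efficiency: 1 is a perfect fit, 0 means no better than
    predicting the observed mean (the 50-95% efficiencies quoted above
    correspond to values of 0.50-0.95)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [0.30, 0.28, 0.25, 0.22, 0.20]   # hypothetical soil water contents
sim = [0.31, 0.27, 0.26, 0.21, 0.20]
```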

  13. Thermal effects on electronic properties of CO/Pt(111) in water.

    PubMed

    Duan, Sai; Xu, Xin; Luo, Yi; Hermansson, Kersti; Tian, Zhong-Qun

    2013-08-28

    Structure and adsorption energy of carbon monoxide molecules adsorbed on Pt(111) surfaces with various CO coverages in water, as well as the work function of the whole systems at a room temperature of 298 K, were studied by means of a hybrid method that combines classical molecular dynamics and density functional theory. We found that when the coverage of CO is around half a monolayer, i.e. 50%, there is no obvious peak of the oxygen density profile appearing in the first water layer. This result reveals that, in this case, the external force applied to water molecules from the CO/Pt(111) surface almost vanishes as a result of the competitive adsorption between CO and water molecules on the Pt(111) surface. This coverage is also the critical point of the wetting/non-wetting conditions for the CO/Pt(111) surface. Averaged work function and adsorption energy from the current simulations are consistent with those of previous studies, which shows that thermal averaging is required for direct comparisons between theoretical predictions and experimental measurements. Meanwhile, the statistical behaviors of work function and adsorption energy at room temperature have also been calculated. The standard errors of the calculated work function for the water-CO/Pt(111) interfaces are around 0.6 eV at all CO coverages, while for the calculated adsorption energy the standard error decreases from 1.29 to 0.05 eV as the CO coverage increases from 4% to 100%. Moreover, the critical points for these electronic properties are the same as those for the wetting/non-wetting conditions. These findings provide a better understanding of the interfacial structure under specific adsorption conditions, which has important implications for the structure of electric double layers and therefore offers a useful perspective for the design of electrochemical catalysts.

  14. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) ¹⁹²Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that the 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  15. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information; with a section on theory and application of generalized least squares

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1987-01-01

    This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. 
A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
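
    The generalized least squares step described above can be sketched in a few lines. This is a minimal illustration, not the USGS network-analysis code: the design matrix, responses, and error covariance below are hypothetical, with the covariance standing in for the combined model and time-sampling error.

```python
import numpy as np

def gls(X, y, omega):
    """Generalized least squares: beta = (X' W X)^-1 X' W y, with W = omega^-1.

    omega is the error covariance matrix; in a network analysis it would
    combine model error with time-sampling error that is correlated across
    concurrently operated gaging stations."""
    W = np.linalg.inv(omega)
    A = X.T @ W @ X
    beta = np.linalg.solve(A, X.T @ W @ y)
    cov_beta = np.linalg.inv(A)  # sampling covariance of the coefficients
    return beta, cov_beta

# Toy regional regression: log flow characteristic vs. log drainage area
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])
omega = np.diag([0.1, 0.2, 0.1, 0.3])  # short records -> larger sampling variance
beta, cov_beta = gls(X, y, omega)
```

    Discontinuing or adding a station changes omega (and the rows of X), which is how the analysis traded stations against sampling mean-square error at a fixed cost level.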

  16. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. Detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 frames per second to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to test the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm when the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  17. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  18. Application of digital profile modeling techniques to ground-water solute transport at Barstow, California

    USGS Publications Warehouse

    Robson, Stanley G.

    1978-01-01

    This study investigated the use of a two-dimensional profile-oriented water-quality model for the simulation of head and water-quality changes through the saturated thickness of an aquifer. The profile model is able to simulate confined or unconfined aquifers with nonhomogeneous anisotropic hydraulic conductivity, nonhomogeneous specific storage and porosity, and nonuniform saturated thickness. An aquifer may be simulated under either steady or nonsteady flow conditions provided that the ground-water flow path along which the longitudinal axis of the model is oriented does not move in the aquifer during the simulation time period. The profile model parameters are more difficult to quantify than are the corresponding parameters for an areal-oriented water-quality model. However, the sensitivity of the profile model to the parameters may be such that the normal error of parameter estimation will not preclude obtaining acceptable model results. Although the profile model has the advantage of being able to simulate vertical flow and water-quality changes in a single- or multiple-aquifer system, the types of problems to which it can be applied are limited by the requirements that (1) the ground-water flow path remain oriented along the longitudinal axis of the model and (2) any subsequent hydrologic factors to be evaluated using the model must be located along the land-surface trace of the model. Simulation of hypothetical ground-water management practices indicates that the profile model is applicable to problem-oriented studies and can provide quantitative results applicable to a variety of management practices. In particular, simulations of the movement and dissolved-solids concentration of a zone of degraded ground-water quality near Barstow, Calif., indicate that halting subsurface disposal of treated sewage effluent in conjunction with pumping a line of fully penetrating wells would be an effective means of controlling the movement of degraded ground water.

  19. The effect of the dynamic wet troposphere on radio interferometric measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1987-01-01

    A statistical model of water vapor fluctuations is used to describe the effect of the dynamic wet troposphere on radio interferometric measurements. It is assumed that the spatial structure of refractivity is approximated by Kolmogorov turbulence theory and that temporal fluctuations arise from spatial patterns advected over a site by the wind; these assumptions are examined for the VLBI delay and delay-rate observables. The results suggest that the delay-rate measurement error is usually dominated by water vapor fluctuations, and water-vapor-induced VLBI parameter errors and correlations are determined as a function of the delay observable errors. A method is proposed for including the water vapor fluctuations in the parameter estimation method to obtain improved parameter estimates and parameter covariances.
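
    The frozen-flow Kolmogorov picture can be illustrated numerically. This is a sketch under simplified assumptions (a pure 2/3-power spatial structure function read temporally via Taylor's hypothesis); the structure constant and wind speed below are made up for illustration, not values from the paper.

```python
import numpy as np

def delay_structure_function(tau, wind_speed, c2):
    """Mean-square wet-delay difference at time lag tau under frozen flow:
    the spatial Kolmogorov structure function D(r) = c2 * r**(2/3) is read
    temporally via r = wind_speed * tau (Taylor's hypothesis)."""
    r = wind_speed * np.asarray(tau, dtype=float)
    return c2 * r ** (2.0 / 3.0)

# Illustrative rms delay-rate error over a 100 s span (constants are hypothetical)
tau = 100.0      # s, averaging interval
v = 8.0          # m/s, wind advecting the wet-refractivity pattern
c2 = 1.0e-14     # structure constant, hypothetical value
rms_rate = np.sqrt(delay_structure_function(tau, v, c2)) / tau
```

    The 2/3 exponent means the mean-square delay difference grows by a factor of four when the lag grows by a factor of eight, which is why short-timescale delay-rate measurements are especially sensitive to water vapor fluctuations.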

  20. Inversion of In Situ Light Absorption and Attenuation Measurements to Estimate Constituent Concentrations in Optically Complex Shelf Seas

    NASA Astrophysics Data System (ADS)

    Ramírez-Pérez, M.; Twardowski, M.; Trees, C.; Piera, J.; McKee, D.

    2018-01-01

    A deconvolution approach is presented to use spectral light absorption and attenuation data to estimate the concentration of the major nonwater compounds in complex shelf sea waters. The inversion procedure requires knowledge of local material-specific inherent optical properties (SIOPs) which are determined from natural samples using a bio-optical model that differentiates between Case I and Case II waters and uses least squares linear regression analysis to provide optimal SIOP values. A synthetic data set is used to demonstrate that the approach is fundamentally consistent and to test the sensitivity to injection of controlled levels of artificial noise into the input data. Self-consistency of the approach is further demonstrated by application to field data collected in the Ligurian Sea, with chlorophyll (Chl), the nonbiogenic component of total suspended solids (TSSnd), and colored dissolved organic material (CDOM) retrieved with RMSE of 0.61 mg m-3, 0.35 g m-3, and 0.02 m-1, respectively. The utility of the approach is finally demonstrated by application to depth profiles of in situ absorption and attenuation data resulting in profiles of optically significant constituents with associated error bar estimates. The advantages of this procedure lie in the simple input requirements, the avoidance of error amplification, full exploitation of the available spectral information from both absorption and attenuation channels, and the reasonably successful retrieval of constituent concentrations in an optically complex shelf sea.
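
    The core of such an inversion is a linear unmixing of the measured non-water absorption spectrum against the SIOP basis. The sketch below uses invented basis spectra and concentrations purely to show the mechanics; the study's actual SIOPs are derived from field samples, and its model also exploits the attenuation channel.

```python
import numpy as np

# Hypothetical SIOP basis (rows: wavelengths; columns: Chl, TSSnd, CDOM).
wavelengths = np.array([412.0, 443.0, 490.0, 532.0, 555.0, 650.0])
siops = np.column_stack([
    [0.05, 0.06, 0.04, 0.02, 0.015, 0.01],    # a*_Chl, m2 (mg Chl)-1
    [0.04, 0.035, 0.03, 0.025, 0.02, 0.015],  # a*_TSSnd, m2 g-1
    np.exp(-0.014 * (wavelengths - 443.0)),   # CDOM shape, normalized at 443 nm
])

def invert_absorption(a_nw, siops):
    """Least-squares unmixing of a non-water absorption spectrum:
    a_nw(lambda) = siops @ c, solved for the concentration vector c."""
    c, *_ = np.linalg.lstsq(siops, a_nw, rcond=None)
    return c

# Forward-model a spectrum for known concentrations, then invert it back
true_c = np.array([1.2, 0.8, 0.05])  # Chl mg m-3, TSSnd g m-3, aCDOM(443) m-1
a_nw = siops @ true_c
est_c = invert_absorption(a_nw, siops)
```

    In the noise-free case the overdetermined system is recovered exactly; adding controlled noise to a_nw, as the paper does with its synthetic data set, reveals how measurement error propagates into the retrieved concentrations.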

  1. Measures of rowing performance.

    PubMed

    Smith, T Brett; Hopkins, Will G

    2012-04-01

    Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance.
Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.
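
    The "typical error" statistic used throughout this review has a simple form: the standard deviation of the trial-to-trial change scores divided by √2. A small sketch with hypothetical ergometer times:

```python
import numpy as np

def typical_error(trial1, trial2):
    """Typical (standard) error of measurement from a pair of repeated
    trials: SD of the change scores divided by sqrt(2)."""
    diff = np.asarray(trial2, dtype=float) - np.asarray(trial1, dtype=float)
    return np.std(diff, ddof=1) / np.sqrt(2.0)

# Hypothetical 2000 m ergometer times (s) for six rowers, two trials
t1 = np.array([378.2, 391.5, 402.8, 385.0, 410.3, 396.7])
t2 = np.array([377.5, 392.8, 401.9, 386.2, 409.1, 395.9])
te = typical_error(t1, t2)                                   # seconds
te_percent = 100.0 * te / np.mean(np.concatenate([t1, t2]))  # as a CV, %
```

    Expressing the typical error as a coefficient of variation allows direct comparison with the ∼0.5% ergometer and ∼1.0% race-to-race values quoted above.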

  2. Upstream water resource management to address downstream pollution concerns: A policy framework with application to the Nakdong River basin in South Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Taeyeon; Rhodes, Charles; Shah, Farhed A.

    2015-02-01

    An empirical framework for assisting with water quality management is proposed that relies on open-source hydrologic data. Such data are measured periodically at fixed water stations and commonly available in time-series form. To fully exploit the data, we suggest that observations from multiple stations should be combined into a single long-panel data set, and an econometric model developed to estimate upstream management effects on downstream water quality. Selection of the model's functional form and explanatory variables would be informed by rating curves, and idiosyncrasies across and within stations handled in an error term by testing contemporary correlation, serial correlation, and heteroskedasticity. Our proposed approach is illustrated with an application to the Nakdong River basin in South Korea. Three alternative policies to achieve downstream BOD level targets are evaluated: upstream water treatment, greater dam discharge, and development of a new water source. Upstream water treatment directly cuts off incoming pollutants, thereby presenting the smallest variation in its downstream effects on BOD levels. Treatment is advantageous when reliability of water quality is a primary concern. Dam discharge is a flexible tool, and may be used strategically during a low-flow season. We consider development of a new water corridor from an extant dam as our third policy option. This turns out to be the most cost-effective way for securing lower BOD levels in the downstream target city. Even though we consider a relatively simple watershed to illustrate the usefulness of our approach, it can be adapted easily to analyze more complex upstream-downstream issues.

  3. Impact of errors in short wave radiation and its attenuation on modeled upper ocean heat content

    DTIC Science & Technology

    Photosynthetically available radiation (PAR) and its attenuation with depth represent a forcing (source) term in the governing equation for the … and vertical attenuation of PAR have on the upper ocean model heat content. In the Monterey Bay area, we show that with a decrease in water clarity … attenuation coefficient. For Jerlov's type IA water (attenuation coefficient 0.049 m-1), the relative error in surface PAR introduces an error

  4. Calibration and temperature correction of heat dissipation matric potential sensors

    USGS Publications Warehouse

    Flint, A.L.; Campbell, G.S.; Ellett, K.M.; Calissendorff, C.

    2002-01-01

    This paper describes how heat dissipation sensors, used to measure soil water matric potential, were analyzed to develop a normalized calibration equation and a temperature correction method. Inference of soil matric potential depends on a correlation between the variable thermal conductance of the sensor's porous ceramic and matric potential. Although this correlation varies among sensors, we demonstrate a normalizing procedure that produces a single calibration relationship. Using sensors from three sources and different calibration methods, the normalized calibration resulted in a mean absolute error of 23% over a matric potential range of -0.01 to -35 MPa. Because the thermal conductivity of variably saturated porous media is temperature dependent, a temperature correction is required for application of heat dissipation sensors in field soils. A temperature correction procedure is outlined that reduces temperature-dependent errors roughly tenfold, which reduces the matric potential measurement errors by more than 30%. The temperature dependence is well described by a thermal conductivity model that allows for the correction of measurements at any temperature to measurements at the calibration temperature.

  5. Comparison of TRMM 2A25 Products Version 6 and Version 7 with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Schwaller, M.; Petersen, W; Zhang, J.

    2012-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem was addressed in a previous paper by comparison of the 2A25 version 6 (V6) product with reference values derived from NOAA/NSSL's ground radar-based National Mosaic and QPE system (NMQ/Q2). The primary contribution of this study is to compare the new 2A25 version 7 (V7) product that was recently released as a replacement for V6. This new version is considered superior over land areas. Several aspects of the two versions are compared and quantified, including rainfall rate distributions, systematic biases, and random errors. All analyses indicate V7 is an improvement over V6.

  6. Geospatial distribution modeling and determining suitability of groundwater quality for irrigation purpose using geospatial methods and water quality index (WQI) in Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Gidey, Amanuel

    2018-06-01

    Determining the suitability and vulnerability of groundwater quality for irrigation use is an essential early warning and first step toward careful management of groundwater resources to diminish impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps in the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration and calculation methods were used for laboratory groundwater quality analysis. ArcGIS geospatial analysis tools, semivariogram model types and interpolation methods were used to generate geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean-square error, absolute square error, mean error, root-mean-square standardized error and measured-versus-predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 of the area is highly suitable, 135 km2 moderately suitable and 60 km2 unsuitable for irrigation use. The irrigation water quality index results classify 10.26% of samples with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are effective methods for irrigation water resources management: they support full-yield irrigation production, improve food security and help sustain it over the long term, and reduce the risk of growing environmental problems for future generations.
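
    A weighted overlay model amounts to a weighted sum of reclassified suitability rasters. The sketch below is a generic illustration with hypothetical layers and weights, not the twelve-variable model of the study:

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Weighted overlay of reclassified suitability rasters: each cell gets
    the weight-averaged score of its input layers.
    layers: (n_layers, rows, cols); weights: per-layer influence, summing to 1."""
    layers = np.asarray(layers, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if not np.isclose(weights.sum(), 1.0):
        raise ValueError("weights must sum to 1")
    return np.tensordot(weights, layers, axes=1)

# Three hypothetical 2x2 reclassified rasters (scores 1 = unsuitable .. 4 = high)
layers = [
    [[4, 3], [2, 1]],
    [[4, 4], [1, 1]],
    [[3, 3], [2, 2]],
]
suitability = weighted_overlay(layers, [0.5, 0.3, 0.2])
```

    Thresholding the resulting score raster into classes (e.g. highly suitable, moderately suitable, unsuitable) yields area totals of the kind reported above.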

  7. In-cell measurements of smoke backscattering coefficients using a CO2 laser system for application to lidar-dial forest fire detection

    NASA Astrophysics Data System (ADS)

    Bellecci, Carlo; Gaudio, Pasquale; Gelfusa, Michela; Lo Feudo, Teresa; Murari, Andrea; Richetta, Maria; de Leo, Leonerdo

    2010-12-01

    In the lidar-dial method, the amount of water vapor present in the smoke of the vegetable fuel is detected to reduce the number of false alarms. We report measurements of the smoke backscattering coefficients for the CO2 laser lines 10R20 and 10R18, as determined in an absorption cell for two different vegetable fuels (eucalyptus and conifer). These experimental backscattering coefficients enable us to determine the error associated with the water vapor measurements when the traditional first-order approximation is assumed. We find that this first-order approximation is valid for combustion rates as low as 100 g/s.

  8. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    ERIC Educational Resources Information Center

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
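
    As a concrete instance of the error correcting codes the unit introduces, the classic Hamming(7,4) code corrects any single-bit error in a 7-bit block. The sketch below uses one standard systematic generator/parity-check pair (not necessarily the unit's own construction):

```python
import numpy as np

# Hamming(7,4), systematic form: 4 data bits followed by 3 parity bits
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits):
    """4 data bits -> 7-bit codeword (arithmetic over GF(2))."""
    return (np.asarray(bits) @ G) % 2

def decode(word):
    """Correct at most one flipped bit, then return the 4 data bits.
    A nonzero syndrome equals the column of H at the error position."""
    word = np.asarray(word).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        err = np.where((H.T == syndrome).all(axis=1))[0][0]
        word[err] ^= 1
    return word[:4]
```

    The elementary algebra in the unit's title is exactly this: matrix multiplication over the two-element field, with the syndrome locating the corrupted position.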

  9. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters that violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%), with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.

  10. Verification of a national water data base using a geographic information system

    USGS Publications Warehouse

    Harrison, H.E.

    1994-01-01

    The National Water Data Exchange (NAWDEX) was developed to assist users of water-resource data in the identification, location, and acquisition of data. The Master Water Data Index (MWDI) of NAWDEX currently indexes the data collected by 423 organizations from nearly 500,000 sites throughout the United States. The utilization of new computer technologies permits the distribution of the MWDI to the public on compact disc. In addition, geographic information systems (GIS) are now available that can store and analyze these data in a spatial format. These recent innovations could increase access and add new capabilities to the MWDI. Before either of these technologies could be employed, however, a quality-assurance check of the MWDI needed to be performed. The MWDI resides on a mainframe computer in a tabular format. It was copied onto a workstation and converted to a GIS format. The GIS was used to identify errors in the MWDI and produce reports that summarized these errors. The summary reports were sent to the responsible contributing agencies along with instructions for submitting their corrections to the NAWDEX Program Office. The MWDI administrator received reports that summarized all of the errors identified. Of the 494,997 sites checked, 93,440 sites had at least one error (an 18.9 percent error rate).

  11. Photoacoustic infrared spectroscopy for conducting gas tracer tests and measuring water saturations in landfills.

    PubMed

    Jung, Yoojin; Han, Byunghyun; Mostafid, M Erfan; Chiu, Pei; Yazdani, Ramin; Imhoff, Paul T

    2012-02-01

    Gas tracer tests can be used to determine gas flow patterns within landfills, quantify volatile contaminant residence time, and measure water within refuse. While gas chromatography (GC) has been traditionally used to analyze gas tracers in refuse, photoacoustic spectroscopy (PAS) might allow real-time measurements with reduced personnel costs and greater mobility and ease of use. Laboratory and field experiments were conducted to evaluate the efficacy of PAS for conducting gas tracer tests in landfills. Two tracer gases, difluoromethane (DFM) and sulfur hexafluoride (SF(6)), were measured with a commercial PAS instrument. Relative measurement errors were invariant with tracer concentration but influenced by background gas: errors were 1-3% in landfill gas but 4-5% in air. Two partitioning gas tracer tests were conducted in an aerobic landfill, and limits of detection (LODs) were 3-4 times larger for DFM with PAS versus GC due to temporal changes in background signals. While higher LODs can be compensated by injecting larger tracer mass, changes in background signals increased the uncertainty in measured water saturations by up to 25% over comparable GC methods. PAS has distinct advantages over GC with respect to personnel costs and ease of use, although for field applications GC analyses of select samples are recommended to quantify instrument interferences. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Hydrological modelling of the Chaohe Basin in China: Statistical model formulation and Bayesian inference

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong

    2007-07-01

    Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties increase further in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly where water demand is increasing due to urbanization. The use and assessment of model results for this purpose require careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of the statistical assumptions of the error model. A comparison with an independent error model, and with error models that considered only a subset of the suggested techniques, clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.
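
    Two of the ingredients, the Box-Cox transformation used against heteroscedasticity (ii) and the autoregressive error model (iii), can be sketched as follows. This is a simplified discrete-time illustration, not the seasonally dependent continuous-time formulation of the paper:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transformation, used here to make residual variance more
    homogeneous before an autoregressive error model is fitted."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def ar1_innovations(resid, phi):
    """Innovations of a discrete AR(1) error model, e_t = r_t - phi * r_{t-1};
    the paper's continuous-time version lets the lag-one correlation depend
    on the (possibly irregular) time step and on season."""
    resid = np.asarray(resid, dtype=float)
    return resid[1:] - phi * resid[:-1]
```

    In a Bayesian calibration, the likelihood is evaluated on these (ideally independent, homoscedastic) innovations rather than on the raw residuals.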

  13. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm  ×  1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
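
    Space carving itself is conceptually simple: a voxel survives only if it projects inside the bubble silhouette seen by every camera. A minimal orthographic sketch (the actual system uses four calibrated high-speed perspective cameras):

```python
import numpy as np

def space_carve(voxels, silhouettes, projectors):
    """Keep only the voxels whose projection falls inside every camera's
    bubble silhouette. voxels: (N, 3) integer coordinates; silhouettes:
    boolean masks; projectors: functions mapping (N, 3) -> (N, 2) pixels."""
    keep = np.ones(len(voxels), dtype=bool)
    for mask, project in zip(silhouettes, projectors):
        uv = project(voxels)
        inside = np.zeros(len(voxels), dtype=bool)
        valid = ((uv >= 0) & (uv < mask.shape)).all(axis=1)
        inside[valid] = mask[uv[valid, 0], uv[valid, 1]]
        keep &= inside
    return voxels[keep]

# Toy setup: 4x4x4 voxel grid, two orthographic views of a 2x2x2 "bubble"
grid = np.array([[x, y, z] for x in range(4) for y in range(4) for z in range(4)])
front = np.zeros((4, 4), dtype=bool); front[1:3, 1:3] = True  # (x, z) silhouette
side = np.zeros((4, 4), dtype=bool);  side[1:3, 1:3] = True   # (y, z) silhouette
carved = space_carve(grid, [front, side],
                     [lambda v: v[:, [0, 2]], lambda v: v[:, [1, 2]]])
```

    Each added view can only remove voxels, which is why the four-camera reconstruction bounds the bubble more tightly than one- or two-camera estimates.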

  14. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

    Aerosols impact clear-sky surface irradiance through scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and clear-sky irradiance have been established to describe the aerosol direct radiative effect on surface irradiance (ADRE). However, considerable uncertainties remain in ADRE estimates due to incorrect estimation of the aerosol-free clear-sky irradiance (the irradiance that would occur at τa = 0). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on clear-sky irradiance are thoroughly considered, leading to an effective parameterization of clear-sky irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate clear-sky irradiance with a mean bias error of 0.32 W m−2, one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show root-mean-square errors of 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky irradiance from τa, or to estimate τa from irradiance measurements, provided water vapor measurements are available. PMID:26395310
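
    Building such a parameterization reduces to fitting a chosen nonlinear function of (τa, w, μ) to observed irradiance. The functional form below is invented for illustration (the published form differs); the sketch only shows the fitting step, here with scipy on noise-free synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def irradiance_model(X, a, b, c):
    """Hypothetical nonlinear form: irradiance falls off with optical depth
    tau and water vapor w, scaled by mu (cosine of solar zenith angle).
    The published parameterization differs; this only shows the fitting step."""
    tau, w, mu = X
    return mu * (a * np.exp(-b * tau / mu) - c * np.log1p(w))

# Noise-free synthetic "observations" generated from known coefficients
rng = np.random.default_rng(0)
tau = rng.uniform(0.05, 1.0, 200)
w = rng.uniform(0.5, 4.0, 200)   # water vapor content
mu = rng.uniform(0.3, 1.0, 200)
true = (1100.0, 0.15, 40.0)
irr = irradiance_model((tau, w, mu), *true)
popt, _ = curve_fit(irradiance_model, (tau, w, mu), irr, p0=(1000.0, 0.1, 30.0))
```

    Once fitted, the same function can be run forward (τa → irradiance) or inverted numerically (irradiance → τa), mirroring the two applications evaluated in the abstract.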

  15. Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.

    PubMed

    Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R

    2013-11-01

    A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.

  16. An Algorithm for Retrieving Land Surface Temperatures Using VIIRS Data in Combination with Multi-Sensors

    PubMed Central

    Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao

    2014-01-01

    A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imaging Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, while a water vapor channel is absent in VIIRS data. To overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. Analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground station LST data indicates the mean retrieval accuracy is −0.395 K, with a standard deviation of 1.490 K, in regions with vegetation and water cover. The retrieval results of the test data have also been compared with National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products: 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% are within the range of ±1 to ±2 K. In conclusion, with the advantages of multiple sensors fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919

  17. Effects of water-emission anisotropy on multispectral remote sensing at thermal wavelengths of ocean temperature and of cirrus clouds

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Susskind, J.; Dalu, G.; Kratz, D.; Goldberg, I. L.

    1992-01-01

    The impact of water-emission anisotropy on remotely sensed longwave data has been studied. Emission from a calm water body is formulated to allow facile computation of radiative transfer in the atmosphere. The errors stemming from the blackbody assumption are calculated for the cases of a purely absorbing or a purely scattering atmosphere, taking the optical properties of the atmosphere as known. For an absorbing atmosphere, the errors in the sea-surface temperature (SST) are found to always be reduced and to be the same whether measurements are made from space or at any level of the atmosphere. The inferred optical thickness tau of an absorbing layer can be in error under the blackbody assumption by a delta tau of 0.01-0.08, while the inferred optical thickness of a scattering layer can be in error by a larger amount, a delta tau of 0.03-0.13. It is concluded that the error delta tau depends only weakly on the actual optical thickness and the viewing angle, but is rather sensitive to the wavelength of the measurement.

  18. Discovery of Spatio-Temporal Relationships in GRACE, GPS Time Series, and Groundwater Data Using Voronoi Visualizations

    NASA Astrophysics Data System (ADS)

    Rude, C. M.; Li, J. D.; Rongier, G.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2017-12-01

    We introduce a data exploration and visualization tool to facilitate the discovery of correlations across geospatial data sets in a computer-aided discovery system. Our approach is based on adaptive Voronoi tessellation maps that can handle spotty data availability, varying sensor density, and resolution at different scales in the same visualization product. Successful applications exploring spatio-temporal relationships are demonstrated on data sets from the Gravity Recovery and Climate Experiment (GRACE), GPS time series from the Plate Boundary Observatory, and groundwater well depth data from the USGS, with the objective of understanding the Earth's surface response to changes in terrestrial water storage. Our results reveal that vertical positions at the majority of GPS stations are negatively correlated with terrestrial water storage from GRACE. This is expected if the changes are due to terrestrial water loading deforming the ground. Our application also identifies outliers that warrant further investigation, such as sites with low correlation or positive correlation due to poroelastic expansion. Other analyses reveal that GRACE correlates positively with water levels from wells, but removal of the non-groundwater components of GRACE (canopy water, soil moisture, and snow accumulation) using model data from the Global Land Data Assimilation System unexpectedly lowers the correlations, an effect that may be related to modeling accuracy and measurement errors. We acknowledge support from NASA AIST NNX15AG84G (PI Pankratius) and NSF ACI-1442997 (PI Pankratius).

  19. Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near-real-time AIRS/AMSU observations; the products are then made available to the scientific community for research purposes. The products include twice-daily measurements of the Earth's three-dimensional global temperature, water vapor, and ozone distributions, as well as cloud cover. In addition, accurate twice-daily measurements of the Earth's land and ocean surface temperatures are derived and reported. Scientists use this important set of observations for two major applications: climate studies of global and regional variability and trends in different aspects of the Earth's atmosphere, and improvement of weather forecasting skill. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products, which heightens their utility in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality-controlled AIRS-only retrievals, called "Version 5 AO retrievals," which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality-controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given and shown to be significantly better than those of the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.

  20. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, D.W.; Forester, J.A.; Bley, D.C.

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant: two individuals from the plant's PRA staff and two from its training staff. Both individuals from training are currently licensed operators, and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed, as is how simulator exercises were used in the ATHEANA demonstration project.

  1. A Fast Track approach to deal with the temporal dimension of crop water footprint

    NASA Astrophysics Data System (ADS)

    Tuninetti, Marta; Tamea, Stefania; Laio, Francesco; Ridolfi, Luca

    2017-07-01

    Population growth, socio-economic development and climate changes are placing increasing pressure on water resources. Crop water footprint is a key indicator in the quantification of such pressure. It is determined by crop evapotranspiration and crop yield, which can be highly variable in space and time. While the spatial variability of crop water footprint has been the objective of several investigations, the temporal variability remains poorly studied. In particular, some studies have approached this issue by attributing the time variability of crop water footprint only to yield changes, while treating evapotranspiration patterns as marginal. Validation of this Fast Track approach has yet to be provided. In this Letter we demonstrate its feasibility through a comprehensive validation, an assessment of its uncertainty, and an example of application. Our results show that water footprint changes are mainly driven by yield trends, while evapotranspiration plays a minor role. The error due to considering constant evapotranspiration is three times smaller than the uncertainty of the model used to compute the crop water footprint. These results confirm the suitability of the Fast Track approach and enable a simple, yet appropriate, evaluation of time-varying crop water footprint.
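The Fast Track idea described above can be sketched numerically. This is an illustrative sketch, not the authors' code: the unit water footprint is taken as WF = 10 · ET / Y (ET in mm, yield in ton/ha, WF in m³/ton, the standard conversion), and the Fast Track approximation holds ET at a fixed reference value while only the yield varies; all numeric values below are assumptions.

```python
def water_footprint(et_mm, yield_t_ha):
    # WF [m^3/ton]: 1 mm of water over 1 ha is 10 m^3
    return 10.0 * et_mm / yield_t_ha

et_ref = 450.0                                  # reference-year ET [mm], assumed
yields = {2000: 4.1, 2005: 4.8, 2010: 5.6}      # crop yield [ton/ha], illustrative

# Fast Track: ET held constant at the reference value, only yield varies in time
wf = {year: water_footprint(et_ref, y) for year, y in yields.items()}
```

With rising yields and fixed ET, the water footprint declines over time, which is the dominant behaviour the Letter attributes to yield trends.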

  2. Characterization of Water Vapor Fluxes by the Raman Lidar System BASIL and the University of Cologne Wind Lidar in the Frame of the HD(CP)2 Observational Prototype Experiment - HOPE

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Stelitano, Dario; Cacciani, Marco; Scoccione, Andrea; Schween, Jan H.

    2016-06-01

    Measurements carried out by the Raman lidar system BASIL and the University of Cologne wind lidar are reported to demonstrate the capability of these instruments to characterize water vapour fluxes within the Convective Boundary Layer (CBL). In order to determine the water vapour flux vertical profiles, high-resolution water vapour and vertical wind speed measurements, with a temporal resolution of 1 s and a vertical resolution of 15-90 m, are considered. Measurements of water vapour flux profiles are based on the application of the covariance approach to the water vapour mixing ratio and vertical wind speed time series. The algorithms are applied to a case study (IOP 11, 04 May 2013) from the HD(CP)2 Observational Prototype Experiment (HOPE), held in Central Germany in spring 2013. For this case study, the water vapour flux profile is characterized by increasing values throughout the CBL, with larger values (around 0.1 g/kg m/s) in the entrainment region. The noise errors are demonstrated to be small enough to allow the derivation of water vapour flux profiles with sufficient accuracy.
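The covariance approach mentioned above can be illustrated with a minimal sketch (not the BASIL processing code; the signals are synthetic and variable names are assumptions): at a given height, the flux is the mean product of the fluctuations of vertical wind speed and water vapour mixing ratio over an averaging window.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3600                                      # one hour of 1 s samples
w = rng.normal(0.0, 0.5, n)                   # vertical wind speed [m/s], synthetic
q = 8.0 + 0.2 * w + rng.normal(0.0, 0.1, n)   # mixing ratio [g/kg], correlated with w

w_prime = w - w.mean()                        # fluctuations about the window mean
q_prime = q - q.mean()
flux = np.mean(w_prime * q_prime)             # water vapour flux [(g/kg)(m/s)]
```

A positive covariance indicates upward moisture transport, as in the entrainment-region values quoted in the abstract.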

  3. Quasi-eccentricity error modeling and compensation in vision metrology

    NASA Astrophysics Data System (ADS)

    Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin

    2018-04-01

    Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main contributors to measurement error and must be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error under the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. An eccentricity error compensation framework is then proposed that compensates the error by iteratively refining the image point toward the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
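A toy numerical illustration of the underlying eccentricity error (not the paper's model; geometry and values are assumed): under a pinhole projection, the centroid of a tilted circle's projected contour is displaced from the projection of the 3-D circle center, and that displacement is the bias that must be compensated.

```python
import numpy as np

f = 800.0                        # focal length [px], assumed
theta = np.deg2rad(30.0)         # tilt of the circle plane, assumed
r, z0 = 10.0, 500.0              # circle radius and depth of its center, assumed

phi = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
# circle tilted about the x-axis, centered on the optical axis at depth z0
x = r * np.cos(phi)
y = r * np.sin(phi) * np.cos(theta)
z = z0 + r * np.sin(phi) * np.sin(theta)

u, v = f * x / z, f * y / z      # pinhole projection of the contour
ellipse_center_v = v.mean()      # centroid of the projected contour
true_center_v = f * 0.0 / z0     # projection of the 3-D circle center (= 0 here)
eccentricity_error = abs(ellipse_center_v - true_center_v)   # nonzero bias [px]
```

For a fronto-parallel circle (theta = 0) the bias vanishes; tilting the target makes it nonzero, of order a few hundredths of a pixel in this configuration.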

  4. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. When used for regression fitting, NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
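The Monte Carlo idea above can be sketched for a toy one-layer model (an illustration under assumed weight statistics, not the authors' network): draw weight sets from the posterior over weights, compute the Jacobian for each draw, and use the spread as a robustness measure.

```python
import numpy as np

rng = np.random.default_rng(1)

w_mean = np.array([2.0, -1.0])      # point estimate of the weights, assumed
w_cov = np.diag([0.05, 0.05])       # posterior covariance of the weights, assumed

def jacobian(w, x):
    # for the toy model y = tanh(w . x), the Jacobian is dy/dx = (1 - tanh^2(w.x)) * w
    s = np.tanh(w @ x)
    return (1.0 - s**2) * w

x0 = np.array([0.3, 0.7])           # input at which sensitivity is evaluated
samples = rng.multivariate_normal(w_mean, w_cov, size=2000)
jacs = np.array([jacobian(w, x0) for w in samples])

jac_mean = jacs.mean(axis=0)        # Monte Carlo estimate of the Jacobian
jac_std = jacs.std(axis=0)          # spread = robustness of each sensitivity
```

A small spread relative to the mean indicates a robust, physically interpretable sensitivity; a large spread flags the multicollinearity problem that the PCA regularization is meant to suppress.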

  5. An integrated study of earth resources in the state of California using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. A weighted stratified double-sample design using hardcopy LANDSAT-1 and ground data was utilized in developmental studies for snow water content estimation. Study results gave a correlation coefficient of 0.80 between LANDSAT sample-unit estimates of snow water content and ground subsamples. The allowable error of the basin snow water content estimate was given as 1.00 percent at the 99 percent confidence level, at the same budget level utilized in conventional snow surveys. Several evapotranspiration estimation models were selected for efficient application at each level of data to be sampled. An area estimation procedure was developed for impervious surface types of differing impermeability adjacent to stream channels. This technique employs a double sample of 1:125,000 color infrared high-flight transparency data with ground or large-scale photography.

  6. Stochastic Watershed Models for Risk Based Decision Making

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters, and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
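The SWM recipe described above can be sketched in a few lines (a deliberately toy example with assumed distributions and parameters, not a real watershed model): run a deterministic rainfall-runoff relation on stochastic meteorological inputs and add a stochastic model-error term to produce an ensemble of streamflow traces.

```python
import numpy as np

rng = np.random.default_rng(3)

def runoff(precip, coeff=0.4):
    # toy deterministic watershed model: runoff as a fixed fraction of precipitation
    return coeff * precip

n_years, n_traces = 50, 100
# stochastic meteorological series: annual precipitation [mm/yr], assumed gamma
precip = rng.gamma(shape=4.0, scale=250.0, size=(n_traces, n_years))
# stochastic model error [mm/yr], assumed zero-mean Gaussian
model_error = rng.normal(0.0, 25.0, size=(n_traces, n_years))

# ensemble of synthetic streamflow traces for risk-based decision making
streamflow_ensemble = runoff(precip) + model_error
```

Each row is one plausible future streamflow trace; risk metrics (exceedance probabilities, reliability, etc.) are then computed across the ensemble.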

  7. Comparisons of the error budgets associated with ground-based FTIR measurements of atmospheric CH4 profiles at Île de la Réunion and Jungfraujoch.

    NASA Astrophysics Data System (ADS)

    Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine

    2010-05-01

    The Fourier Transform Infrared (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is the validation of satellite products, such as SCIAMACHY or IASI CH4 columns. It is therefore very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of the errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR), and the National Institute of Water and Atmospheric Research (NIWA) at Lauder, New Zealand, and error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) error. We will determine whether the characteristics of the sites in terms of altitude, geographic location, and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.

  8. Assessment of meteorological uncertainties as they apply to the ASCENDS mission

    NASA Astrophysics Data System (ADS)

    Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.

    2011-12-01

    Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid, with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global-scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on spatial scales comparable to the measurement of interest, but they normally require a separate sensor, which increases the overall size, weight, power, and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design and characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One requirement of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in constraining the overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than NWP models and analyses can provide, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end we have used NWP models and reanalysis information to generate a set of atmospheric profiles that contain reasonable variability. These data consist of a "truth" set and a companion "measured" set of profiles. The truth set contains climatologically relevant profiles of pressure, temperature, and humidity with an accompanying surface pressure. The measured set consists of instances of the truth set that have been perturbed, using measurement error covariance matrices, to represent realistic measurement uncertainty for the truth profile. The primary focus has been to develop matrices derived from the profile retrieval accuracy documented for on-orbit sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty were derived from globally compiled station pressure information. We generated an additional measured set of profiles that represents the overall error within NWP models. These profile sets will allow comprehensive trade studies for sensor system design, provide a basis for setting measurement requirements for co-located temperature and humidity sounders, determine the utility of NWP data to either replace or supplement collocated measurements, and assess the overall end-to-end performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in performing trade studies for sensor system concepts and designs.
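The perturbation step described above can be sketched as follows (the profile, covariance structure, and values are assumptions, not the study's actual matrices): draw correlated noise from a measurement error covariance via its Cholesky factor and add it to a truth profile to obtain one "measured" instance.

```python
import numpy as np

rng = np.random.default_rng(2)

n_lev = 20
truth_t = np.linspace(288.0, 220.0, n_lev)   # "truth" temperature profile [K], assumed

# assumed error covariance: 1 K std dev per level, exponentially decaying
# inter-level correlation (correlation length of 3 levels)
levels = np.arange(n_lev)
corr = np.exp(-np.abs(levels[:, None] - levels[None, :]) / 3.0)
cov = 1.0**2 * corr

# correlated perturbation: L z ~ N(0, cov) when z is standard normal
chol = np.linalg.cholesky(cov)
measured_t = truth_t + chol @ rng.standard_normal(n_lev)   # one "measured" instance
```

Repeating the last line with fresh draws produces the ensemble of perturbed profiles used in the trade studies.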

  9. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small-animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815

  10. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  11. Identifying inaccuracies on emergency medicine residency applications

    PubMed Central

    Katz, Eric D; Shockley, Lee; Kass, Lawrence; Howes, David; Tupesis, Janis P; Weaver, Christopher; Sayan, Osman R; Hogan, Victoria; Begue, Jason; Vrocher, Diamond; Frazer, Jackie; Evans, Timothy; Hern, Gene; Riviello, Ralph; Rivera, Antonio; Kinoshita, Keith; Ferguson, Edward

    2005-01-01

    Background: Previous trials have shown a 10–30% rate of inaccuracies on applications to individual residency programs. No studies have attempted to corroborate this at a national level. Attempts by residency programs to diminish the frequency of inaccuracies on applications have not been reported. We seek to clarify the national incidence of inaccuracies on applications to emergency medicine residency programs. Methods: This is a multi-center, single-blinded, randomized cohort study of all applicants from LCME-accredited schools to the participating EM residency programs. Applications were randomly selected to investigate claims of AOA election, advanced degrees, and publications. Errors were reported to applicants' deans and the NRMP. Results: Nine residencies reviewed 493 applications (28.6% of all applicants who applied to any EM program). 56 applications (11.4%, 95% CI 8.6–14.2%) contained at least one error. Excluding "benign" errors, 9.8% (95% CI 7.2–12.4%) contained at least one error. 41% (95% CI 35.0–47.0%) of all publications contained an error. All AOA membership claims were verified, but 13.7% (95% CI 4.4–23.1%) of claimed advanced degrees were inaccurate. Inter-rater reliability of the evaluations was good. Investigators were reluctant to notify applicants' dean's offices and the NRMP. Conclusion: This is the largest study to date of accuracy on applications for residency and the first such multi-centered trial. High rates of incorrect data were found on applications. These data will serve as a baseline for future years of the project, with emphasis on reporting inaccuracies and warning applicants of the project's goals. PMID:16105178
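The intervals quoted above are consistent with a normal-approximation (Wald) confidence interval for a binomial proportion, which can be checked in a few lines (the choice of interval is an assumption inferred from the reported numbers, not stated in the abstract):

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# 56 of 493 applications contained at least one error
lo, hi = wald_ci(56, 493)   # reproduces the reported 8.6%-14.2% interval
```

Running this gives approximately (0.086, 0.142), matching the 11.4% (95% CI 8.6–14.2%) figure in the abstract.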

  12. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round grating dividing error and of rolling-wheel eccentricity and surface shape errors, the paper provides an amendment method, based on the rolling wheel, that yields a composite error model including all of the influence factors above and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error amendment method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurement accuracies better than 5 μm/m.

  13. A Well-Calibrated Ocean Algorithm for Special Sensor Microwave/Imager

    NASA Technical Reports Server (NTRS)

    Wentz, Frank J.

    1997-01-01

    I describe an algorithm for retrieving geophysical parameters over the ocean from special sensor microwave/imager (SSM/I) observations. This algorithm is based on a model for the brightness temperature T(sub B) of the ocean and intervening atmosphere. The retrieved parameters are the near-surface wind speed W, the columnar water vapor V, the columnar cloud liquid water L, and the line-of-sight wind W(sub LS). I restrict my analysis to ocean scenes free of rain, and when the algorithm detects rain, the retrievals are discarded. The model and algorithm are precisely calibrated using a very large in situ database containing 37,650 SSM/I overpasses of buoys and 35,108 overpasses of radiosonde sites. A detailed error analysis indicates that the T(sub B) model rms accuracy is between 0.5 and 1 K and that the rms retrieval accuracies for wind, vapor, and cloud are 0.9 m/s, 1.2 mm, and 0.025 mm, respectively. The error in specifying the cloud temperature will introduce an additional 10% error in the cloud water retrieval. The spatial resolution for these accuracies is 50 km. The systematic errors in the retrievals are smaller than the rms errors, being about 0.3 m/s, 0.6 mm, and 0.005 mm for W, V, and L, respectively. The one exception is the systematic error in wind speed of -1.0 m/s that occurs for observations within +/-20 deg of upwind. The inclusion of the line-of-sight wind W(sub LS) in the retrieval significantly reduces the error in wind speed due to wind direction variations: the wind error for upwind observations is reduced from -3.0 to -1.0 m/s. Finally, I find a small signal in the 19-GHz, horizontal polarization (h-pol) T(sub B) residual DeltaT(sub BH) that is related to the effective air pressure of the water vapor profile. This information may be of some use in specifying the vertical distribution of water vapor.

  14. Use of O-15 water and C-11 butanol to measure cerebral blood flow (CBF) and water permeability with positron emission tomography (PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herscovitch, P.; Raichle, M.E.; Kilbourn, M.R.

    1985-05-01

    Tracers used to measure CBF with PET and the Kety autoradiographic approach should freely cross the blood-brain barrier. O-15 water, which is not freely permeable, may underestimate CBF, especially at higher flows. The authors determined this underestimation relative to flow measured with a freely diffusible tracer, C-11 butanol, and used these data to calculate the extraction (E) and permeability-surface area product (PS) for O-15 water. Paired flow measurements were made with O-15 water (CBF-wat) and C-11 butanol (CBF-but) in eight normal human subjects. Average CBF-but, 55.6 ml/(min . 100 g), was significantly greater than CBF-wat, 47.6 ml/(min . 100 g). The ratio of regional gray matter (GM) flow to white matter (WM) flow was significantly greater with C-11 butanol, indicating a greater underestimation of CBF with O-15 water in the higher-flow GM. Average E for water was 0.92 in WM and 0.82 in GM. The mean PS in GM, 148 ml/(min . 100 g), was significantly greater than in WM, 94 ml/(min . 100 g). Simulation studies demonstrated that a measurement error in CBF-wat or CBF-but causes an approximately equivalent error in E but a considerably larger error in PS, due to the sensitivity of the equation PS = -CBF . ln(1-E) to variations in E. Modest errors in E and PS result from the tissue heterogeneity that occurs due to the limited spatial resolution of PET. The authors' measurements of E and PS for water are similar to data obtained by more invasive methods and demonstrate the ability of PET to measure brain water permeability.
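The sensitivity noted above follows directly from the form of PS = -CBF . ln(1-E): since dPS/dE = CBF/(1-E), an error in E is amplified as E approaches 1. A minimal sketch with illustrative values (the regional flow value used here is an assumption for demonstration, not a number from the study):

```python
import math

def permeability_surface_area(cbf, extraction):
    # PS = -CBF * ln(1 - E), the relation used in the abstract
    return -cbf * math.log(1.0 - extraction)

cbf = 86.0   # regional GM flow [ml/(min . 100 g)], illustrative assumption
e = 0.82     # gray matter extraction from the abstract

ps = permeability_surface_area(cbf, e)

# a small perturbation of E produces a disproportionately large change in PS
ps_perturbed = permeability_surface_area(cbf, e + 0.02)
```

Here a roughly 2.4% relative change in E yields close to a 7% relative change in PS, mirroring the simulation result that PS errors are considerably larger than E errors.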

  15. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    PubMed

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors in the correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
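The water reference deconvolution idea can be sketched with a synthetic toy example (not the paper's pipeline; all signals and names are assumptions): a lineshape distortion common to the water reference and the metabolite signal cancels when the measured water FID is divided out and replaced by an ideal reference decay.

```python
import numpy as np

n = 1024
t = np.arange(n) / 1000.0                          # time axis [s]

# assumed instrumental distortion (e.g. eddy-current phase error), common
# to both the water reference and the metabolite signal
distortion = np.exp(1j * 2.0 * np.pi * 0.5 * t**2)

ideal_water = np.exp(-t / 0.05)                    # ideal reference lineshape decay
water_fid = ideal_water * distortion               # measured, distorted water FID
metab_fid = np.exp(2j * np.pi * 50.0 * t - t / 0.08) * distortion  # distorted metabolite FID

# water reference deconvolution: the common distortion cancels exactly here
corrected = metab_fid * ideal_water / water_fid
```

In real data the cancellation is only approximate, which is the failure mode the abstract notes: the water reference and metabolite spatial distributions differ, so their distortions are not identical.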

  16. The practical use of simplicity in developing ground water models

    USGS Publications Warehouse

    Hill, M.C.

    2006-01-01

    The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.

  17. Characterization of turbulent processes by the Raman lidar system BASIL during the HD(CP)2 observational prototype experiment - HOPE

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Stelitano, Dario; Cacciani, Marco; Scoccione, Andrea; Behrendt, Andreas; Wulfmeyer, Volker

    2017-02-01

    Measurements carried out by the Raman lidar system BASIL are reported to demonstrate the capability of this instrument to characterize turbulent processes within the Convective Boundary Layer (CBL). In order to resolve the vertical profiles of turbulent variables, high-resolution water vapour and temperature measurements, with a temporal resolution of 10 s and vertical resolutions of 90 and 30 m, respectively, are considered. Measurements of higher-order moments of the turbulent fluctuations of water vapour mixing ratio and temperature are obtained by applying spectral and auto-covariance analyses to the water vapour mixing ratio and temperature time series. The algorithms are applied to a case study (IOP 5, 20 April 2013) from the HD(CP)2 Observational Prototype Experiment (HOPE), held in Central Germany in the spring of 2013. The noise errors are demonstrated to be small enough to allow the derivation of up to fourth-order moments for both water vapour mixing ratio and temperature fluctuations with sufficient accuracy.
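    The moment computation described above can be sketched as follows. The synthetic Gaussian series stands in for a lidar-derived mixing-ratio record at one height, and no instrument-noise correction (which the authors apply before trusting the higher moments) is attempted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    series = 8.0 + 0.5 * rng.standard_normal(3600)   # synthetic mixing-ratio series, g/kg, 1 Hz-like

    # Turbulent fluctuations: deviation from the record mean (a linear
    # detrend is common in practice; mean removal keeps the sketch minimal).
    fluct = series - series.mean()

    variance = np.mean(fluct**2)                     # second-order moment
    skewness = np.mean(fluct**3) / variance**1.5     # normalized third-order moment
    kurtosis = np.mean(fluct**4) / variance**2       # normalized fourth-order moment

    print(variance, skewness, kurtosis)
    ```

    For a Gaussian record the kurtosis should come out near 3 and the skewness near 0; departures from these values are what make the third and fourth moments physically interesting in the CBL.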

  18. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
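    The classical-error attenuation at the heart of this bias analysis is easy to reproduce. The simulation below is a generic illustration (ordinary least squares, invented variances), not the authors' mixed-model setting: classical error in the exposure shrinks the naive slope by the reliability ratio, and the method-of-moments correction divides it back out.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    true_slope = 2.0
    sigma_x, sigma_u = 1.0, 0.5           # exposure SD and classical-error SD (invented)

    x = rng.normal(0.0, sigma_x, n)       # true exposure
    w = x + rng.normal(0.0, sigma_u, n)   # classically mismeasured exposure
    y = true_slope * x + rng.normal(0.0, 1.0, n)

    # OLS slope of y on the error-prone w is attenuated by the reliability
    # ratio lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2) = 0.8 here.
    naive_slope = np.cov(w, y)[0, 1] / np.var(w)

    # Method-of-moments correction: divide by the reliability ratio.
    corrected = naive_slope * (sigma_x**2 + sigma_u**2) / sigma_x**2

    print(round(naive_slope, 1), round(corrected, 1))
    ```

    Berkson error, by contrast, leaves the slope unbiased in this simple linear case; the paper's contribution is handling mixtures of the two types plus autocorrelation, where the attenuation factor is no longer this simple ratio.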

  19. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as those of the Global Precipitation Measurement (GPM) mission.

  20. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
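    The Kramers-Kronig step the paper re-develops can be illustrated on a toy minimum-phase spectrum: for such a response, the spectral phase is minus the Hilbert transform of the log-magnitude, so the phase can be retrieved from the magnitude alone. The two-tap frequency response below is an invented example, not a CARS spectrum, and none of the paper's detrending or NRB handling is attempted.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Frequency response of a minimum-phase system H(w) = 1 + 0.5 e^{-iw},
    # sampled over one full period.
    n = 256
    w = 2.0 * np.pi * np.arange(n) / n
    H = 1.0 + 0.5 * np.exp(-1j * w)

    # Log-Hilbert (Kramers-Kronig) phase retrieval: scipy's hilbert
    # returns the analytic signal x + i*Hilbert[x], so the retrieved
    # phase is minus its imaginary part applied to ln|H|.
    phase_retrieved = -np.imag(hilbert(np.log(np.abs(H))))

    print(np.allclose(phase_retrieved, np.angle(H), atol=1e-8))
    ```

    The paper's point is what happens when the magnitude is normalized by an imperfect NRB reference: the retrieved phase then carries a slowly varying error, which their phase detrending and scaling step removes.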

  1. Simplifying and upscaling water resources systems models that combine natural and engineered components

    NASA Astrophysics Data System (ADS)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  2. Water quality modeling in the systems impact assessment model for the Klamath River basin - Keno, Oregon to Seiad Valley, California

    USGS Publications Warehouse

    Hanna, R. Blair; Campbell, Sharon G.

    2000-01-01

    This report describes the water quality model developed for the Klamath River System Impact Assessment Model (SIAM). The Klamath River SIAM is a decision support system developed by the authors and other US Geological Survey (USGS), Midcontinent Ecological Science Center staff to study the effects of basin-wide water management decisions on anadromous fish in the Klamath River. The Army Corps of Engineers' HEC5Q water quality modeling software was used to simulate water temperature, dissolved oxygen and conductivity in 100 miles of the Klamath River Basin in Oregon and California. The water quality model simulated three reservoirs and the mainstem Klamath River influenced by the Shasta and Scott River tributaries. Model development, calibration and two validation exercises are described as well as the integration of the water quality model into the SIAM decision support system software. Within SIAM, data are exchanged between the water quantity model (MODSIM), the water quality model (HEC5Q), the salmon population model (SALMOD) and methods for evaluating ecosystem health. The overall predictive ability of the water quality model is described in the context of calibration and validation error statistics. Applications of SIAM and the water quality model are described.

  3. Multisensor Capacitance Probes for Simultaneously Monitoring Rice Field Soil-Water- Crop-Ambient Conditions.

    PubMed

    Brinkhoff, James; Hornbuckle, John; Dowling, Thomas

    2017-12-26

    Multisensor capacitance probes (MCPs) have traditionally been used for soil moisture monitoring and irrigation scheduling. This paper presents a new application of these probes, namely the simultaneous monitoring of ponded water level, soil moisture, and temperature profile, conditions which are particularly important for rice crops in temperate growing regions and for rice grown with prolonged periods of drying. WiFi-based loggers are used to concurrently collect the data from the MCPs and ultrasonic distance sensors (giving an independent reading of water depth). Models are fit to MCP water depth vs volumetric water content (VWC) characteristics from laboratory measurements, variability from probe-to-probe is assessed, and the methodology is verified using measurements from a rice field throughout a growing season. The root-mean-squared error of the water depth calculated from MCP VWC over the rice growing season was 6.6 mm. MCPs are used to simultaneously monitor ponded water depth, soil moisture content when ponded water is drained, and temperatures in root, water, crop and ambient zones. The insulation effect of ponded water against cold-temperature effects is demonstrated with low and high water levels. The developed approach offers advantages in gaining the full soil-plant-atmosphere continuum in a single robust sensor.

  4. Validation of MODIS Aerosol Optical Depth Retrieval Over Land

    NASA Technical Reports Server (NTRS)

    Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS-Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used in validating the aerosol optical depths obtained during July - September 2000. Excellent agreement is found with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite the ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution for residual cloud, snow/ice, and water contamination.
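    The quoted uncertainty envelope translates directly into a pass/fail check when comparing retrievals against Sun photometer values. A minimal sketch, with invented reference/retrieval pairs:

    ```python
    def within_envelope(tau_retrieved, tau_reference):
        """Check an AOD retrieval against the expected error envelope
        |delta_tau| <= 0.05 + 0.20 * tau_reference."""
        bound = 0.05 + 0.20 * tau_reference
        return abs(tau_retrieved - tau_reference) <= bound

    # Invented (reference, retrieval) pairs:
    pairs = [(0.10, 0.12), (0.40, 0.50), (0.40, 0.60), (1.00, 1.20)]
    flags = [within_envelope(r, ref) for ref, r in pairs]
    print(flags)  # → [True, True, False, True]
    ```

    Note that the envelope widens with the reference optical depth, so the same absolute discrepancy of 0.20 passes at τ = 1.0 but fails at τ = 0.4.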

  5. Development and testing of a new ray-tracing approach to GNSS carrier-phase multipath modelling

    NASA Astrophysics Data System (ADS)

    Lau, Lawrence; Cross, Paul

    2007-11-01

    Multipath is one of the most important error sources in Global Navigation Satellite System (GNSS) carrier-phase-based precise relative positioning. Its theoretical maximum is a quarter of the carrier wavelength (about 4.8 cm for the Global Positioning System (GPS) L1 carrier) and, although it rarely reaches this size, it must clearly be mitigated if millimetre-accuracy positioning is to be achieved. In most static applications, this may be accomplished by averaging over a sufficiently long period of observation, but in kinematic applications, a modelling approach must be used. This paper is concerned with one such approach: the use of ray-tracing to reconstruct the error and therefore remove it. In order to apply such an approach, it is necessary to have a detailed understanding of the signal transmitted from the satellite, the reflection process, the antenna characteristics and the way that the reflected and direct signal are processed within the receiver. This paper reviews all of these and introduces a formal ray-tracing method for multipath estimation based on precise knowledge of the satellite reflector antenna geometry and of the reflector material and antenna characteristics. It is validated experimentally using GPS signals reflected from metal, water and a brick building, and is shown to be able to model most of the main multipath characteristics. The method will have important practical applications for correcting for multipath in well-constrained environments (such as at base stations for local area GPS networks, at International GNSS Service (IGS) reference stations, and on spacecraft), and it can be used to simulate realistic multipath errors for various performance analyses in high-precision positioning.
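    The quarter-wavelength bound stated above follows from the standard single-reflection carrier-phase multipath formula: for a reflected ray of relative amplitude alpha < 1 and carrier-phase offset psi, the tracking error is atan(alpha sin psi / (1 + alpha cos psi)), whose maximum over psi is asin(alpha) <= 90 degrees, i.e. lambda/4 in range. A quick numerical check, with an illustrative alpha:

    ```python
    import numpy as np

    L1_WAVELENGTH = 0.1903  # GPS L1 carrier wavelength, m (approx.)

    def multipath_phase_error(alpha, psi):
        """Carrier tracking error (radians) for one reflected ray of
        relative amplitude alpha and carrier-phase offset psi."""
        return np.arctan2(alpha * np.sin(psi), 1.0 + alpha * np.cos(psi))

    psi = np.linspace(0.0, 2.0 * np.pi, 100_000)
    alpha = 0.8                                  # illustrative strong reflection
    err_m = multipath_phase_error(alpha, psi) / (2.0 * np.pi) * L1_WAVELENGTH

    # Even for this strong reflection the range error stays below
    # lambda/4 (about 4.8 cm), peaking at asin(alpha)/(2*pi) * lambda.
    print(err_m.max() < L1_WAVELENGTH / 4.0)
    ```

    A ray-tracing approach like the paper's supplies alpha and psi per reflector from geometry and material properties, so this error can be reconstructed and removed rather than merely bounded.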

  6. An improved thermo-time domain reflectometry method for determination of ice contents in partially frozen soils

    NASA Astrophysics Data System (ADS)

    Tian, Zhengchao; Ren, Tusheng; Kojima, Yuki; Lu, Yili; Horton, Robert; Heitman, Joshua L.

    2017-12-01

    Measuring ice contents (θi) in partially frozen soils is important for both engineering and environmental applications. Thermo-time domain reflectometry (thermo-TDR) probes can be used to determine θi based on the relationship between θi and soil heat capacity (C). This approach, however, is accurate in partially frozen soils only at temperatures below -5 °C, and it performs poorly on clayey soils. In this study, we present and evaluate a soil thermal conductivity (λ)-based approach to determine θi with thermo-TDR probes. Bulk soil λ is described with a simplified de Vries model that relates λ to θi. From this model, θi is estimated using inverse modeling of thermo-TDR measured λ. Soil bulk density (ρb) and thermo-TDR measured liquid water content (θl) are also needed for both C-based and λ-based approaches. A theoretical analysis is performed to quantify the sensitivity of C-based and λ-based θi estimates to errors in these input parameters. The analysis indicates that the λ-based approach is less sensitive to errors in the inputs (C, λ, θl, and ρb) than is the C-based approach when the same or the same percentage errors occur. Further evaluations of the C-based and λ-based approaches are made using experimentally determined θi at different temperatures on eight soils with various textures, total water contents, and ρb. The results show that the λ-based thermo-TDR approach significantly improves the accuracy of θi measurements at temperatures ≤-5 °C. The root mean square errors of λ-based θi estimates are only half those of C-based θi. At temperatures of -1 and -2 °C, the λ-based thermo-TDR approach also provides reasonable θi, while the C-based approach fails. We conclude that the λ-based thermo-TDR method can reliably determine θi even at temperatures near the freezing point of water (0 °C).
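    The simplified de Vries model itself is not given in the abstract, so the monotone λ(θi) below is an invented stand-in; the point is the inverse-modeling step, i.e. solving λ_measured = λ_model(θi) for θi with a root finder given the measured liquid water content and bulk density.

    ```python
    from scipy.optimize import brentq

    LAM_ICE, LAM_WATER = 2.2, 0.56   # thermal conductivities of ice and water, W/(m K)

    def lam_model(theta_i, theta_tot=0.30, lam_dry=0.8):
        """Stand-in bulk conductivity model: a dry-soil term plus liquid
        and ice contributions; monotone increasing in theta_i. This is
        NOT the de Vries-based model used in the paper."""
        theta_l = theta_tot - theta_i
        return lam_dry + LAM_WATER * theta_l + LAM_ICE * theta_i

    def invert_theta_i(lam_meas, theta_tot=0.30):
        # Inverse modeling: find theta_i in [0, theta_tot] matching the
        # thermo-TDR-measured conductivity.
        return brentq(lambda ti: lam_model(ti, theta_tot) - lam_meas, 0.0, theta_tot)

    true_theta_i = 0.18
    lam_meas = lam_model(true_theta_i)           # synthetic "measurement"
    print(round(invert_theta_i(lam_meas), 3))    # → 0.18
    ```

    The paper's sensitivity analysis amounts to perturbing lam_meas (and the other inputs) and tracking how the recovered θi moves; the λ-based inversion turns out to amplify those input errors less than the C-based one.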

  7. A Novel Device for Total Acoustic Output Measurement of High Power Transducers

    NASA Astrophysics Data System (ADS)

    Howard, S.; Twomey, R.; Morris, H.; Zanelli, C. I.

    2010-03-01

    The objective of this work was to develop a device for ultrasound power measurement applicable over a broad range of medical transducer types, orientations and powers, and which supports automatic measurements to simplify use and minimize errors. Considering all the recommendations from standards such as IEC 61161, an accurate electromagnetic null-balance has been designed for ultrasound power measurements. The sensing element is placed in the water to eliminate errors due to surface tension and water evaporation, and the motion and detection of force is constrained to one axis, to increase immunity to vibration from the floor, water sloshing and water surface waves. A transparent tank was designed so it could easily be submerged in a larger tank to accommodate large transducers or side-firing geometries, and can also be turned upside-down for upward-firing transducers. A vacuum lid allows degassing the water and target in situ. An external control module was designed to operate the sensing/driving loop and to communicate to a local computer for data logging. The sensing algorithm, which incorporates temperature compensation, compares the feedback force needed to cancel the motion for sources in the "on" and "off" states. These two states can be controlled by the control unit or manually by the user, under guidance by a graphical user interface (the system presents measured power live during collection). Software allows calibration to standard weights, or to independently calibrated acoustic sources. The design accommodates a variety of targets, including cone, rubber, brush targets and an oil-filled target for power measurement via buoyancy changes. Measurement examples are presented, including HIFU sources operating at powers from 1 to 100.

  8. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

    The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA), based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g., a single band, a band ratio, or an arithmetic combination of bands) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained by a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra: i.e., spectral end-members) derived from bio-optical modeling, we can directly apply the model to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
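    The SDA rests on a spectral linear mixture model: each observed spectrum is treated as a weighted sum of standard reflectance spectra (end-members), and the weights are recovered by constrained least squares. A minimal sketch of that decomposition step; the three-band end-member spectra and the mixture are invented, not Lake Kasumigaura data.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Invented standard reflectance spectra (end-members) in 3 bands,
    # stored as columns: water, phytoplankton, suspended sediment.
    endmembers = np.array([
        [0.02, 0.03, 0.01],   # water
        [0.04, 0.09, 0.03],   # phytoplankton
        [0.10, 0.12, 0.11],   # suspended sediment
    ]).T                       # shape (bands, endmembers)

    true_w = np.array([0.5, 0.2, 0.3])
    observed = endmembers @ true_w               # noise-free synthetic mixture

    # Decompose the observed spectrum into non-negative end-member weights.
    weights, resid = nnls(endmembers, observed)
    print(np.round(weights, 3))
    ```

    In the paper, the decomposed weights are then regressed against measured Chl-a and NPSS concentrations, which is where the tank-experiment calibration comes in.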

  9. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  11. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    NASA Astrophysics Data System (ADS)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data.
However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
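    As a minimal stand-in for the model comparison, the sketch below fits the MLR baseline with numpy and scores it by RMSE. The features and the generating relation are invented (they merely mimic predictors like cGDD, Kc, WD and In), and the nonlinear models (ANFIS, SVM) are beyond a few lines; the point is that a linear fit cannot capture a nonlinear SWC response, mirroring why MLR underperformed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500

    # Invented predictors standing in for cGDD, Kc, water deficit, irrigation depth.
    X = rng.uniform(0.0, 1.0, (n, 4))
    # Invented nonlinear "true" SWC response (mm) plus 0.5 mm noise.
    swc = 20.0 + 8.0 * X[:, 0] - 5.0 * X[:, 2] ** 2 + rng.normal(0.0, 0.5, n)

    # MLR baseline: ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, swc, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - swc) ** 2))

    # The linear fit cannot absorb the quadratic deficit term, so its
    # RMSE stays above the 0.5 mm noise floor.
    print(rmse > 0.5)
    ```

    Swapping the linear solve for a kernel regressor (as SVM does) removes that structural residual, which is consistent with the RMSE ordering reported in the abstract.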

  12. A physical retrieval of cloud liquid water over the global oceans using special sensor microwave/imager (SSM/I) observations

    NASA Astrophysics Data System (ADS)

    Greenwald, Thomas J.; Stephens, Graeme L.; Vonder Haar, Thomas H.; Jackson, Darren L.

    1993-10-01

    A method of remotely sensing integrated cloud liquid water over the oceans using spaceborne passive measurements from the special sensor microwave/imager (SSM/I) is described. The technique comprises a simple physical model that uses the 19.35- and 37-GHz channels of the SSM/I. The most comprehensive validation to date of cloud liquid water estimated from satellites is presented. This is accomplished through a comparison to independent ground-based microwave radiometer measurements of liquid water on San Nicolas Island, over the North Sea, and on Kwajalein and Saipan Islands in the western Pacific. In areas of marine stratocumulus clouds off the coast of California a further comparison is made to liquid water inferred from advanced very high resolution radiometer (AVHRR) visible reflectance measurements. The results are also compared qualitatively with near-coincident satellite imagery and with other existing microwave methods in selected regions. These comparisons indicate that the liquid water amounts derived from the simple scheme are consistent with the ground-based measurements for nonprecipitating cloud systems in the subtropics and middle to high latitudes. The comparison in the tropics, however, was less conclusive. Nevertheless, the retrieval method appears to have general applicability over most areas of the global oceans. An observational measure of the minimum uncertainty in the retrievals is determined in a limited number of known cloud-free areas, where the liquid water amounts are found to have a low variability of 0.016 kg m-2. A simple sensitivity and error analysis suggests that the liquid water estimates have a theoretical relative error typically ranging from about 25% to near 40% depending on the atmospheric/surface conditions and on the amount of liquid water present in the cloud. For the global oceans as a whole the average cloud liquid water is determined to be about 0.08 kg m-2.
The major conclusion of this paper is that reasonably accurate amounts of cloud liquid water can be retrieved from SSM/I observations for nonprecipitating cloud systems, particularly in areas of persistent stratocumulus clouds, with less accurate retrievals in tropical regions.
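    Two-channel microwave liquid water retrievals of this general kind are typically log-linear in the brightness-temperature depressions at the two frequencies. The sketch below shows only the functional shape; the coefficients, reference temperature, and brightness temperatures are invented placeholders, not the paper's retrieval.

    ```python
    import math

    def lwp_two_channel(tb19, tb37, a0=4.52, a19=0.70, a37=-1.80):
        """Hypothetical two-channel liquid water path retrieval (kg m^-2):
        log-linear in the (280 - Tb) depressions at 19.35 and 37 GHz.
        All coefficients are illustrative placeholders only."""
        return a0 + a19 * math.log(280.0 - tb19) + a37 * math.log(280.0 - tb37)

    # Warming the 37-GHz channel (stronger liquid water emission over the
    # radiometrically cold ocean) should increase the retrieved amount.
    clear = lwp_two_channel(tb19=190.0, tb37=210.0)
    cloudy = lwp_two_channel(tb19=195.0, tb37=230.0)
    print(cloudy > clear)
    ```

    The physical basis is that cloud liquid water emits more strongly at 37 than at 19.35 GHz, so combining the two channels separates the liquid water signal from water vapor; the paper's contribution is a physically derived rather than purely statistical form of this combination.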

  13. A Physical Validation Program for the GPM Mission

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2003-01-01

    The GPM mission is currently planned for start in the late 2007 - early 2008 time frame. Its main scientific goal is to help answer pressing scientific problems arising within the context of global and regional water cycling. These problems cut across a hierarchy of scales and include climate-water cycle interactions, techniques for improving weather and climate predictions, and better methods for combining observed precipitation with hydrometeorological prediction models for applications to hazardous flood-producing storms, seasonal flood and drought conditions, and fresh water resource assessments. The GPM mission will expand the scope of precipitation measurement through the use of a constellation of some 9 satellites, one of which will be an advanced TRMM-like core satellite carrying a dual-frequency Ku-Ka band precipitation radar and an advanced, multifrequency passive microwave radiometer with vertical-horizontal polarization discrimination. The other constellation members will include new dedicated satellites and co-existing operational/research satellites carrying similar (but not identical) passive microwave radiometers. The goal of the constellation is to achieve approximately 3-hour sampling at any spot on the globe -- continuously. The constellation's orbit architecture will consist of a mix of sun-synchronous and non-sun-synchronous satellites with the core satellite providing measurements of cloud-precipitation microphysical processes plus calibration-quality rainrate retrievals to be used with the other retrieval information to ensure bias-free constellation coverage. A major requirement before the retrieved rainfall information generated by the GPM mission can be used effectively by prognostic models to improve weather forecasts, hydrometeorological forecasts, and climate model reanalysis simulations is a capability to quantify the error characteristics of the retrievals.
    A solution to this problem has been held up in past precipitation missions by the lack of suitable error modeling systems incorporated into the validation programs and data distribution systems. An overview of how NASA intends to overcome this problem for the GPM mission using a physically-based error modeling approach within a multi-faceted validation program is described. The solution is to first identify specific user requirements and then determine the most stringent of these requirements that embodies all essential error characterization information needed by the entire user community. In the context of NASA's scientific agenda for the GPM mission, the most stringent user requirement is found within the data assimilation community. The fundamental theory of data assimilation vis-a-vis ingesting satellite precipitation information into the pre-forecast initializations is based on quantifying the conditional bias and precision errors of individual rain retrievals, and the space-time structure of the precision error (i.e., the spatial-temporal error covariance). By generating the hardware and software capability to produce this information in a near real-time fashion, and to couple the derived quantitative error properties to the actual retrieved rainrates, all key validation users can be satisfied. The talk will describe the essential components of the hardware and software systems needed to generate such near real-time error properties, as well as the various paradigm shifts needed within the validation community to produce a validation program relevant to the precipitation user community.

  14. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault-injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of the predicted error rates was confirmed by comparing, for the same circuit and application, the predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) in Louvain-la-Neuve (Belgium).
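
    The combination step described above can be sketched in a few lines: a raw upset rate from beam-measured cross sections is derated by the fraction of injected faults that actually produced an application error. All numbers below are illustrative placeholders, not values from the study:

```python
# Hedged sketch of the error-rate prediction approach: combine a per-bit SEU
# cross-section measured under beam with the fraction of injected faults that
# actually corrupted the application's output. Illustrative numbers only.

def predicted_error_rate(cross_section_cm2_per_bit, n_bits, flux_particles_cm2_s,
                         faults_injected, faults_causing_error):
    """Application error rate (errors/s) = raw SEU rate x fault-injection derating."""
    raw_seu_rate = cross_section_cm2_per_bit * n_bits * flux_particles_cm2_s
    derating = faults_causing_error / faults_injected  # many SEUs are masked
    return raw_seu_rate * derating

rate = predicted_error_rate(
    cross_section_cm2_per_bit=1e-14,  # illustrative beam-test result
    n_bits=32 * 1024 * 8,             # illustrative sensitive-memory size
    flux_particles_cm2_s=10.0,        # illustrative environment flux
    faults_injected=100_000,
    faults_causing_error=12_000,
)
print(f"{rate:.3e} errors/s")          # -> 3.146e-09 errors/s
```

    The derating factor is the quantity the off-beam injection campaigns estimate; the beam time is only needed for the static cross section.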

  15. Effects of postexercise ice-water and room-temperature water immersion on the sensory organization of balance control and lower limb proprioception in amateur rugby players: A randomized controlled trial.

    PubMed

    Chow, Gary C C; Yam, Timothy T T; Chung, Joanne W Y; Fong, Shirley S M

    2017-02-01

    This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), the RWI group (25.0 °C), or the no-immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It seems that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention.

  16. Effects of postexercise ice-water and room-temperature water immersion on the sensory organization of balance control and lower limb proprioception in amateur rugby players

    PubMed Central

    Chow, Gary C.C.; Yam, Timothy T.T.; Chung, Joanne W.Y.; Fong, Shirley S.M.

    2017-01-01

    Abstract Background: This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Methods: Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), the RWI group (25.0 °C), or the no-immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. Results: There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It seems that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. Conclusion: One minute of postexercise IWI or RWI did not impair rugby players’ sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention. PMID:28207546

  17. Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bautista-Gomez, Leonardo; Cappello, Franck

    Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
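
    A minimal illustration of application-level detection from dataset behavior (a sketch of the general idea only, not the authors' detectors): in a smooth simulation field, a corrupted value stands out against the mean of its neighbors.

```python
import numpy as np

# Illustrative sketch: flag a value as a possible silent corruption when it
# deviates from the mean of its spatial neighbors by more than `k` times the
# typical neighbor-to-neighbor spread of the dataset.
def detect_sdc(data, k=5.0):
    data = np.asarray(data, dtype=float)
    neighbor_mean = 0.5 * (np.roll(data, 1) + np.roll(data, -1))
    residual = np.abs(data - neighbor_mean)
    scale = np.median(residual) + 1e-12     # robust estimate of normal variation
    flags = residual > k * scale
    flags[0] = flags[-1] = False            # roll() wraps around; ignore the edges
    return np.flatnonzero(flags)

field = np.sin(np.linspace(0, 4, 200))      # smooth dataset, as in stencil codes
field[57] += 10.0                           # inject a bit-flip-like corruption
print(detect_sdc(field))                    # -> [56 57 58]: the upset also skews
                                            #    its two neighbors' local means
```

    Production detectors replace the neighbor mean with better predictors and tune the threshold, but the principle, corruption violates the smoothness the application data normally exhibit, is the same.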

  18. Vulnerability of ground water to atrazine leaching in Kent County, Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Luukkonen, C.L.

    1997-01-01

    A steady-state model of pesticide leaching through the unsaturated zone was used with readily available hydrologic, lithologic, and pesticide characteristics to estimate the vulnerability of the near-surface aquifer to atrazine contamination from non-point sources in Kent County, Michigan. The model-computed fraction of atrazine remaining at the water table, RM, was used as the vulnerability criterion; time of travel to the water table also was computed. Model results indicate that the average fraction of atrazine remaining at the water table was 0.039 percent; the fraction ranged from 0 to 3.6 percent. Time of travel of atrazine from the soil surface to the water table averaged 17.7 years and ranged from 2.2 to 118 years. Three maps were generated to present three views of the same atrazine vulnerability characteristics using different metrics (nonlinear power transformations of the computed fractions remaining, rm = RM^λ). The metrics were chosen because of the highly (right) skewed distribution of computed fractions. The first metric, with λ = 0.0625, depicts a relatively uniform distribution of vulnerability across the county with localized areas of high and low vulnerability visible. The second metric, with λ = 0.5, depicts about one-half the county at low vulnerability with discontinuous patterns of high vulnerability evident. In the third metric, with λ = 1.0 (i.e., RM itself), more than 95 percent of the county appears to have low vulnerability; small, distinct areas of high vulnerability are present. Aquifer vulnerability estimates in the RM metric were used with a steady-state, uniform atrazine application rate to compute a potential concentration of atrazine in leachate reaching the water table. The average estimated potential atrazine concentration in leachate at the water table was 0.16 μg/L (micrograms per liter) in the model area; estimated potential concentrations ranged from 0 to 26 μg/L.
    About 2 percent of the model area had estimated potential atrazine concentrations in leachate at the water table that exceeded the USEPA (U.S. Environmental Protection Agency) maximum contaminant level of 3 μg/L. Uncertainty analyses were used to assess effects of parameter uncertainty and spatial interpolation error on the variability of the estimated fractions of atrazine remaining at the water table. Results of Monte Carlo simulations indicate that parameter uncertainty is associated with a standard error of 0.0875 in the computed fractions (in the rm metric). Results of kriging analysis indicate that errors in spatial interpolation are associated with a standard error of 0.146 (in the rm metric). Thus, uncertainty in fractions remaining is primarily associated with spatial interpolation error, which can be reduced by increasing the density of points where the leaching model is applied. A sensitivity analysis indicated which of 13 hydrologic, lithologic, and pesticide characteristics were influential in determining fractions of atrazine remaining at the water table. Results indicate that fractions remaining are most sensitive to unit changes in pesticide half-life and in organic-carbon content in soils and unweathered rocks, and least sensitive to infiltration rates. The leaching model applied in this report provides an estimate of the vulnerability of the near-surface aquifer in Kent County to contamination by atrazine. The vulnerability estimate is related to water-quality criteria developed by the USEPA to help assess potential risks from atrazine to the near-surface aquifer. However, atrazine accounts for only 28 percent of the herbicide use in the county; additional potential for contamination exists from other pesticides and pesticide metabolites. Therefore, additional work is needed to develop a comprehensive understanding of the relative risks associated with specific pesticides.
The modeling approach described in this report provides a technique for estimating relative vulnerabilities to specific pesticides and for helping to assess potential risks.
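
    The effect of the three mapping metrics can be reproduced with a short sketch, assuming the interpretation rm = RM^λ with λ = 0.0625, 0.5, and 1.0 and an illustrative right-skewed distribution of fractions remaining:

```python
import numpy as np

# Sketch of the three mapping metrics: rm = RM**lam. A small exponent expands
# the low end of a highly right-skewed distribution of fractions remaining,
# while lam = 1 leaves the raw fractions unchanged. The distribution below is
# illustrative, not the Kent County model output.
rng = np.random.default_rng(0)
RM = np.clip(rng.lognormal(mean=-8.0, sigma=2.0, size=10_000), 0.0, 1.0)

for lam in (0.0625, 0.5, 1.0):
    rm = RM ** lam
    # share of cells that would plot in the upper half of the metric's range
    upper = np.mean(rm > 0.5 * rm.max())
    print(f"lambda = {lam:<7} upper-half share = {upper:.3f}")
```

    The shrinking upper-half share with growing λ mirrors the three maps: near-uniform vulnerability at λ = 0.0625, and more than 95 percent of the area at apparently low vulnerability at λ = 1.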

  19. Extracting Diffusion Constants from Echo-Time-Dependent PFG NMR Data Using Relaxation-Time Information

    NASA Astrophysics Data System (ADS)

    van Dusschoten, D.; de Jager, P. A.; Van As, H.

    Heterogeneous (bio)systems are often characterized by several water-containing compartments that differ in relaxation time values and diffusion constants. Because of the relatively small differences among these diffusion constants, nonoptimal measuring conditions easily lead to the conclusion that a single diffusion constant suffices to describe the water mobility in a heterogeneous (bio)system. This paper demonstrates that the combination of a T2 measurement and diffusion measurements at various echo times (TE), based on the PFG MSE sequence, enables the accurate determination of diffusion constants which are less than a factor of 2 apart. This new method gives errors of the diffusion constant below 10% when two fractions are present, while the standard approach of a biexponential fit to the diffusion data in identical circumstances gives larger (>25%) errors. On application of this approach to water in apple parenchyma tissue, the diffusion constant of water in the vacuole of the cells (D = 1.7 × 10⁻⁹ m²/s) can be distinguished from that of the cytoplasm (D = 1.0 × 10⁻⁹ m²/s). Also, for mung bean seedlings, the cell size determined by PFG MSE measurements increased from 65 to 100 μm when the echo time increased from 150 to 900 ms, demonstrating that the interpretation of PFG SE data used to investigate cell sizes is strongly dependent on the T2 values of the fractions within the sample. Because relaxation times are used to discriminate the diffusion constants, we propose to name this approach diffusion analysis by relaxation-time-separated (DARTS) PFG NMR.
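
    The principle can be seen in a forward-model sketch (illustrative parameters, not the paper's data): with two compartments of different T2 and D, the apparent mono-exponential diffusion constant drifts toward the long-T2 compartment's D as the echo time grows, which is exactly the TE dependence DARTS exploits.

```python
import numpy as np

# Two-compartment PFG signal model: each fraction is weighted by its T2 decay
# at the chosen echo time, so the apparent (single-exponential) D depends on TE.
f1, T2_1, D1 = 0.7, 0.8, 1.7e-9   # vacuole-like: long T2, faster diffusion
f2, T2_2, D2 = 0.3, 0.1, 1.0e-9   # cytoplasm-like: short T2, slower diffusion

def signal(b, TE):
    w1 = f1 * np.exp(-TE / T2_1)
    w2 = f2 * np.exp(-TE / T2_2)
    return w1 * np.exp(-b * D1) + w2 * np.exp(-b * D2)

b = np.linspace(0, 2e8, 50)           # b-values in s/m^2, illustrative range
for TE in (0.05, 0.3, 0.9):           # echo times in seconds
    slope = np.polyfit(b, np.log(signal(b, TE)), 1)[0]
    print(f"TE = {TE:4.2f} s  apparent D = {-slope:.2e} m^2/s")
```

    At long TE the short-T2 fraction is almost fully suppressed and the apparent D converges to the vacuole value; fitting the TE dependence recovers both constants, which a single-TE biexponential fit resolves poorly.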

  20. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.

  1. Geostatistics-based groundwater-level monitoring network design and its application to the Upper Floridan aquifer, USA.

    PubMed

    Bhat, Shirish; Motz, Louis H; Pathak, Chandra; Kuebler, Laura

    2015-01-01

    A geostatistical method was applied to optimize an existing groundwater-level monitoring network in the Upper Floridan aquifer for the South Florida Water Management District in the southeastern United States. Analyses were performed to determine suitable numbers and locations of monitoring wells that will provide equivalent or better quality groundwater-level data compared to an existing monitoring network. Ambient, unadjusted groundwater heads were expressed as salinity-adjusted heads based on the density of freshwater, well screen elevations, and temperature-dependent saline groundwater density. The optimization of the numbers and locations of monitoring wells is based on a pre-defined groundwater-level prediction error. The newly developed network combines an existing network with the addition of new wells that will result in a spatial distribution of groundwater monitoring wells that better defines the regional potentiometric surface of the Upper Floridan aquifer in the study area. The network yields groundwater-level predictions that differ significantly from those produced using the existing network. The newly designed network will reduce the mean prediction standard error by 43% compared to the existing network. The adoption of a hexagonal grid network for the South Florida Water Management District is recommended to achieve both a uniform level of information about groundwater levels and the minimum required accuracy. It is customary to install more monitoring wells for observing groundwater levels and groundwater quality as groundwater development progresses. However, budget constraints often force water managers to implement cost-effective monitoring networks. In this regard, this study provides guidelines to water managers concerned with groundwater planning and monitoring.

  2. Uncertainty in sap flow-based transpiration due to xylem properties

    NASA Astrophysics Data System (ADS)

    Looker, N. T.; Hu, J.; Martin, J. T.; Jencso, K. G.

    2014-12-01

    Transpiration, the evaporative loss of water from plants through their stomata, is a key component of the terrestrial water balance, influencing streamflow as well as regional convective systems. From a plant physiological perspective, transpiration is both a means of avoiding destructive leaf temperatures through evaporative cooling and a consequence of water loss through stomatal uptake of carbon dioxide. Despite its hydrologic and ecological significance, transpiration remains a notoriously challenging process to measure in heterogeneous landscapes. Sap flow methods, which estimate transpiration by tracking the velocity of a heat pulse emitted into the tree sap stream, have proven effective for relating transpiration dynamics to climatic variables. To scale sap flow-based transpiration from the measured domain (often <5 cm² of tree cross-sectional area) to the whole-tree level, researchers generally assume constancy of scale factors (e.g., wood thermal diffusivity (k), radial and azimuthal distributions of sap velocity, and conducting sapwood area (As)) through time, across space, and within species. For the widely used heat-ratio sap flow method (HRM), we assessed the sensitivity of transpiration estimates to uncertainty in k (a function of wood moisture content and density) and As. A sensitivity analysis informed by distributions of wood moisture content, wood density and As sampled across a gradient of water availability indicates that uncertainty in these variables can impart substantial error when scaling sap flow measurements to the whole tree. For species with variable wood properties, the application of the HRM assuming a spatially constant k or As may systematically over- or underestimate whole-tree transpiration rates, resulting in compounded error in ecosystem-scale estimates of transpiration.
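
    The propagation of k uncertainty can be sketched with a Monte Carlo draw over the heat-ratio-method velocity equation, v = (k/x)·ln(r), a commonly cited HRM form; the spacing, ratio, and spread below are illustrative values, not the study's measurements.

```python
import numpy as np

# Monte Carlo sketch of how uncertainty in wood thermal diffusivity k
# propagates into heat-ratio-method (HRM) sap velocity, v = (k/x)*ln(r),
# where x is the probe spacing and r the ratio of downstream to upstream
# temperature rises. All values are illustrative.
rng = np.random.default_rng(42)

x = 6e-3                                   # probe spacing, m
r = 1.5                                    # measured temperature-rise ratio
k_nominal = 2.5e-7                         # m^2/s, often assumed constant
k_samples = rng.normal(k_nominal, 0.3e-7, 100_000)  # ~12% spread in wood properties

v = k_samples / x * np.log(r) * 3600 * 100  # heat-pulse velocity, cm/h
print(f"mean = {v.mean():.2f} cm/h, CV = {v.std() / v.mean():.1%}")
```

    Because v is linear in k, the relative error in k passes straight through to the velocity estimate, and it multiplies with any error in conducting sapwood area when scaled to the whole tree, which is the compounding the abstract describes.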

  3. The Foundation GPS Water Vapor Inversion and its Application Research

    NASA Astrophysics Data System (ADS)

    Liu, R.; Lee, T.; Lv, H.; Fan, C.; Liu, Q.

    2018-04-01

    Using GPS technology to retrieve atmospheric water vapor is a new water vapor detection method, which can effectively compensate for the shortcomings of conventional water vapor detection methods and provide high-precision, large-capacity, near real-time water vapor information. In-depth study of ground-based GPS detection of atmospheric water vapor aims to further improve the accuracy and practicability of GPS inversion of water vapor and to explore its ability to detect atmospheric water vapor information, to better serve meteorological services. In this paper, the influence of the setting parameters of initial station coordinates, satellite ephemeris, and solution observation on the accuracy of the tropospheric zenith total delay is discussed based on the observed data. The observations obtained from a network of 8 IGS stations in China in June 2013 are used to retrieve the water vapor data of the 8 stations. The data of the Wuhan station are further selected and compared with the data of the Nanhu sounding station in Wuhan. The difference between the two datasets was between -6 mm and 6 mm, the trends of the two were almost the same, and the correlation reached 95.8%. The experimental results verify the reliability of ground-based GPS inversion of water vapor technology.
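
    The standard retrieval chain behind such results can be sketched as follows: subtract a Saastamoinen-type hydrostatic delay from the GPS zenith total delay, then convert the wet remainder to precipitable water vapor with a Bevis-style factor of roughly 0.15. The station values below are illustrative, not the Wuhan data, and the refractivity constants are commonly used values that vary slightly across the literature.

```python
import math

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith hydrostatic delay (m) from surface pressure, Saastamoinen model."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 2.8e-7 * h_m
    return 0.0022768 * p_hpa / f

def pwv_from_ztd(ztd_m, p_hpa, lat_deg, h_m, tm_k=270.0):
    """Precipitable water vapor (mm) from the GPS zenith total delay."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_deg, h_m)  # wet delay, m
    k2p, k3 = 0.221, 3.739e3            # refractivity constants, K/Pa and K^2/Pa
    pi_factor = 1e6 / (1000.0 * 461.5 * (k3 / tm_k + k2p))  # dimensionless, ~0.15
    return pi_factor * zwd * 1000.0

print(f"{pwv_from_ztd(2.50, 1013.0, 30.5, 30.0):.1f} mm")
```

    The weak dependence of the conversion factor on the mean atmospheric temperature Tm is one of the error sources such comparisons against radiosonde data quantify.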

  4. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  5. Evaluation of coastal zone color scanner diffuse attenuation coefficient algorithms for application to coastal waters

    NASA Astrophysics Data System (ADS)

    Mueller, James L.; Trees, Charles C.; Arnone, Robert A.

    1990-09-01

    The Coastal Zone Color Scanner (CZCS) and associated atmospheric and in-water algorithms have allowed synoptic analyses of regional and large-scale variability of bio-optical properties [phytoplankton pigments and diffuse attenuation coefficient K(490)]. Austin and Petzold (1981) developed a robust in-water K(490) algorithm which related the diffuse attenuation coefficient at one optical depth [1/K(490)] to the ratio of the water-leaving radiances at 443 and 550 nm. Their regression analysis included diffuse attenuation coefficients K(490) up to 0.40 m⁻¹, but excluded data from estuarine areas and other Case II waters, where the optical properties are not predominantly determined by phytoplankton. In these areas, errors are induced in the remote-sensing retrieval of K(490) by extremely low water-leaving radiance at 443 nm [Lw(443) as viewed at the sensor may only be 1 or 2 digital counts], and improved accuracy can be realized using algorithms based on wavelengths where Lw(λ) is larger. Using ocean optical profiles acquired by the Visibility Laboratory, algorithms are developed to predict K(490) from ratios of water-leaving radiances at 520 and 670 nm, as well as at 443 and 550 nm.
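
    Band-ratio K(490) algorithms of the Austin-Petzold type take a power-law form in the radiance ratio. The coefficients below follow commonly cited values for the 443/550 nm fit but should be treated as illustrative; operational coefficients depend on the sensor and reprocessing.

```python
import numpy as np

# Band-ratio form of an Austin-Petzold-type K(490) algorithm:
# K(490) = Kw + A * (Lw(443)/Lw(550))**B, with Kw the pure-water contribution.
# Coefficients are illustrative of the commonly cited 1981 fit.
def k490_band_ratio(lw443, lw550, kw=0.022, a=0.0883, b=-1.491):
    return kw + a * (np.asarray(lw443) / np.asarray(lw550)) ** b

# Clear water has a high 443/550 ratio (low K); turbid coastal water the reverse.
for ratio in (2.0, 1.0, 0.3):
    print(f"Lw443/Lw550 = {ratio:3.1f} -> K(490) = {k490_band_ratio(ratio, 1.0):.3f} 1/m")
```

    The 520/670 nm algorithms developed in the paper address the low 443 nm signal in turbid water; in this functional form that corresponds to refitting the coefficients for a ratio whose radiances stay well above the sensor's digitization floor.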

  6. Evaluating Snowmelt Runoff Processes Using Stable Isotopes in a Permafrost Hillslope

    NASA Astrophysics Data System (ADS)

    Carey, S. K.

    2004-05-01

    Conceptual understanding of runoff generation in permafrost regions has been derived primarily from hydrometric information, with isotope and hydrochemical data having only limited application in delineating sources and pathways of water. Furthermore, when stable isotope data are used to infer runoff processes, they often provide results that conflict with hydrometric measurements. In a small subarctic alpine catchment within the Wolf Creek Research Basin, Yukon, Canada, experiments were conducted during the melt periods of 2002 and 2003 to trace the stable isotopic signature (δ18O) of meltwater from a melting snowpack into permafrost soils and laterally to the stream, to identify runoff processes and evaluate sources of error for traditional hydrograph separation studies in snowmelt-dominated permafrost basins. Isotopic variability in the snowpack was recorded at 0.1 m depth intervals during the melt period and compared with the meltwater isotopic signature at the snowpack base collected in lysimeters. Throughout the melt period in both years, there was an isotopic enrichment of meltwater as the season progressed. A downslope transect of wells and piezometers was used to evaluate the influence of infiltrating meltwater and thawing ground on the subsurface δ18O signature. As melt began, meltwater infiltrated the frozen porous organic layer, leading to liquid water saturation in the unsaturated pore spaces. Water sampled during this initial melt stage shows soil water δ18O mirroring that of the meltwater signal. As the melt season progressed, frozen soil began to melt, mixing enriched pre-melt soil water with meltwater. This mixing raised the overall δ18O value obtained from the soil, which continued to increase as thaw progressed. At the end of snowmelt, soil water had a δ18O value similar to values from the previous fall, suggesting that much of the initial snowmelt water had been flushed from the hillslope.
Results from the hillslope scale are compared with two-component hydrograph separations and sources of error are discussed.
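
    The two-component hydrograph separation whose error sources are evaluated here is a one-line mass balance. The δ18O values below are illustrative, not the Wolf Creek measurements:

```python
# Classic two-component isotopic hydrograph separation. Illustrative d18O
# signatures (per mil) stand in for measured stream, pre-event, and melt water.
def new_water_fraction(d_stream, d_old, d_new):
    """Event (melt) water fraction from the mass balance
    d_stream = f*d_new + (1-f)*d_old  ->  f = (d_stream - d_old)/(d_new - d_old)."""
    return (d_stream - d_old) / (d_new - d_old)

f = new_water_fraction(d_stream=-19.0, d_old=-17.5, d_new=-22.0)
print(f"meltwater fraction = {f:.2f}")   # -> 0.33
```

    The seasonal enrichment of meltwater and the mixing of melt with pre-melt soil water described above shift d_new and d_old during the season; holding them fixed is precisely the error source the study evaluates.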

  7. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    PubMed

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  8. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature and, under similar meteorological conditions and consumer profiles, allows the development of a heuristic forecast model that, in turn, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption for the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are the consumption of water and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when it comes to the availability of multiple forecast models.
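
    A minimal version of a parallel adaptive weighting scheme can be sketched as follows (an illustrative scheme only; the paper's exact weighting and its ARIMA and heuristic components are not reproduced): each model is weighted by the inverse of its recent mean absolute error, so the combination leans toward whichever model has been tracking demand better.

```python
import numpy as np

# Combine two parallel forecast streams with weights that adapt to each
# model's recent absolute error over a sliding window (illustrative scheme).
def combine(preds_a, preds_b, observed, window=24, eps=1e-9):
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    combined = np.empty_like(preds_a)
    for t in range(len(observed)):
        lo = max(0, t - window)
        err_a = np.mean(np.abs(preds_a[lo:t] - observed[lo:t])) if t else 1.0
        err_b = np.mean(np.abs(preds_b[lo:t] - observed[lo:t])) if t else 1.0
        wa, wb = 1.0 / (err_a + eps), 1.0 / (err_b + eps)
        combined[t] = (wa * preds_a[t] + wb * preds_b[t]) / (wa + wb)
    return combined

rng = np.random.default_rng(1)
truth = 100 + 20 * np.sin(np.arange(200) * 2 * np.pi / 24)  # daily demand cycle
model_a = truth + rng.normal(0, 2.0, 200)                   # good model
model_b = truth + rng.normal(0, 8.0, 200)                   # noisier model
mix = combine(model_a, model_b, truth)
print(np.mean(np.abs(mix - truth)))
```

    In practice the "observed" series arrives with a delay, so the weights always lag by at least one forecasting step; the combination still tracks the currently better model.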

  9. 26 CFR 1.1312-8 - Law applicable in determination of error.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Law applicable in determination of error. The question whether there was an erroneous inclusion... provisions of the internal revenue laws applicable with respect to the year as to which the inclusion.... The fact that the inclusion, exclusion, omission, allowance, disallowance, recognition, or...

  10. Mass imbalances in EPANET water-quality simulations

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    2018-04-01

    EPANET is widely employed to simulate water quality in water distribution systems. However, in general, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results only for short water-quality time steps. Overly long time steps can yield errors in concentration estimates and can result in situations in which constituent mass is not conserved. The use of a time step that is sufficiently short to avoid these problems may not always be feasible. The absence of EPANET errors or warnings does not ensure conservation of mass. This paper provides examples illustrating mass imbalances and explains how such imbalances can occur because of fundamental limitations in the water-quality routing algorithm used in EPANET. In general, these limitations cannot be overcome by the use of improved water-quality modeling practices. This paper also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, toward those obtained using the preliminary event-driven approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations. The results presented in this paper should be of value to those who perform water-quality simulations using EPANET or use the results of such simulations, including utility managers and engineers.
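
    The time-step effect can be illustrated with a toy model (not EPANET's actual routing code): if the inflow concentration is sampled once per water-quality step, a pulse shorter than the step can be missed entirely and its mass silently lost.

```python
import numpy as np

# Toy illustration of the time-step effect: a pipe is fed a short
# concentration pulse, and the inflow is sampled once per water-quality step.
def routed_mass(dt_s, pulse_start_s=180.0, pulse_len_s=120.0, conc=1.0,
                flow_m3_s=0.01, horizon_s=3600.0):
    """Constituent mass entering the pipe as seen by sampled (time-driven) routing."""
    times = np.arange(0.0, horizon_s, dt_s)
    sampled = np.where((times >= pulse_start_s) & (times < pulse_start_s + pulse_len_s),
                       conc, 0.0)
    return np.sum(sampled * flow_m3_s * dt_s)

true_mass = 1.0 * 0.01 * 120.0               # conc * flow * pulse duration = 1.2
print(round(routed_mass(dt_s=10.0), 6))      # fine step conserves mass -> 1.2
print(round(routed_mass(dt_s=600.0), 6))     # coarse step misses the pulse -> 0.0
```

    As the step shrinks, the sampled mass converges to the true injected mass, mirroring the convergence behavior noted above; an event-driven scheme instead tracks the pulse boundaries exactly and conserves mass at any step length.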

  11. Benefit of Complete State Monitoring For GPS Realtime Applications With Geo++ Gnsmart

    NASA Astrophysics Data System (ADS)

    Wübbena, G.; Schmitz, M.; Bagge, A.

    Today, the demand for precise positioning at the cm-level in realtime is growing worldwide. An indication for this is the number of operational RTK network installations, which use permanent reference station networks to derive corrections for distance-dependent GPS errors and to supply corrections to RTK users in realtime. Generally, the inter-station distances in RTK networks are selected at several tens of km in range, and operational installations cover areas of up to 50,000 km². However, the separation of the permanent reference stations can be increased to several hundred km, while a correct modeling of all error components is applied. Such networks can be termed sparse RTK networks, which cover larger areas with a reduced number of stations. The undifferenced GPS observable is best suited for this task, estimating the complete state of a permanent GPS network in a dynamic recursive Kalman filter. A rigorous adjustment of all simultaneous reference station data is required. The sparse network design essentially supports the state estimation through its large spatial extension. The benefit of the approach and its state modeling of all GPS error components is a successful ambiguity resolution in realtime over long distances. The above concepts are implemented in the operational GNSMART (GNSS State Monitoring and Representation Technique) software of Geo++. It performs a state monitoring of all error components at the mm-level, because for RTK networks this accuracy is required to sufficiently represent the distance-dependent errors for kinematic applications. One key issue of the modeling is the estimation of clocks and hardware delays in the undifferenced approach. This prerequisite subsequently allows for the precise separation and modeling of all other error components. Generally, most of the estimated parameters are considered nuisance parameters with respect to pure positioning tasks.
As the complete state vector of GPS errors is available in a GPS realtime network, additional information besides position can be derived e.g. regional precise satellite clocks, orbits, total ionospheric electron content, tropospheric water vapor distribution, and also dynamic reference station movements. The models of GNSMART are designed to work with regional, continental or even global data. Results from GNSMART realtime networks with inter-station distances of several hundred km are presented to demonstrate the benefits of the operational implemented concepts.

  12. An Efficient Silent Data Corruption Detection Method with Error-Feedback Control and Even Sampling for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Berrocal, Eduardo; Cappello, Franck

    The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as runtime one-step-ahead prediction, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected at bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
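
    The one-step-ahead prediction idea above can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's implementation: it extrapolates each value linearly from the two previous points, applies a simple error-feedback correction, and flags (and repairs) points whose deviation exceeds a threshold.

```python
def detect_sdc(series, threshold):
    """Flag indices where a value deviates from a one-step-ahead
    linear extrapolation by more than `threshold` (toy SDC detector).

    Error feedback: the previous step's prediction error is folded
    into the next prediction to tighten the detection bound.
    """
    vals = list(series)   # working copy; flagged points get repaired
    flags = []
    feedback = 0.0
    for t in range(2, len(vals)):
        # Two-point linear extrapolation, corrected by the last clean error.
        predicted = 2 * vals[t - 1] - vals[t - 2] + feedback
        err = vals[t] - predicted
        if abs(err) > threshold:
            flags.append(t)          # possible silent data corruption
            vals[t] = predicted      # repair so the error does not propagate
        else:
            feedback = err           # update feedback only on clean steps
    return flags
```

    For a smooth series with one injected spike, only the spike is flagged, e.g. `detect_sdc([0, 1, 2, 3, 4, 100, 6, 7], 1.0)` returns `[5]`.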

  13. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  14. Pedal Application Errors

    DOT National Transportation Integrated Search

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  15. USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.

    1997-01-01

    The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent to which laboratory errors contribute to the overall errors in their environmental data.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
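
    The convergence experiments described above boil down to comparing errors across resolutions. As a generic illustration (not the authors' code), the observed order of accuracy can be estimated from errors at two grid spacings:

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy p from errors at two resolutions
    related by a refinement factor: p = log(e_c/e_f) / log(refinement)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

    Halving the grid spacing and seeing the error drop from 0.04 to 0.01 indicates second-order convergence (`observed_order(0.04, 0.01)` is 2.0).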

  17. Global Precipitation Measurement (GPM) Ground Validation (GV) Science Implementation Plan

    NASA Technical Reports Server (NTRS)

    Petersen, Walter A.; Hou, Arthur Y.

    2008-01-01

    For pre-launch algorithm development and post-launch product evaluation, Global Precipitation Measurement (GPM) Ground Validation (GV) goes beyond direct comparisons of surface rain rates between ground and satellite measurements to provide the means for improving retrieval algorithms and model applications. Three approaches to GPM GV include direct statistical validation (at the surface), precipitation physics validation (in vertical columns), and integrated science validation (4-dimensional). These three approaches support five themes: core satellite error characterization; constellation satellite validation; development of physical models of snow, cloud water, and mixed phase; development of cloud-resolving models (CRMs) and land-surface models to bridge observations and algorithms; and development of coupled CRM-land surface modeling for basin-scale water budget studies and natural hazard prediction. This presentation describes the implementation of these approaches.

  18. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers.

  19. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
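
    The pick-and-freeze idea behind the Sobol' analysis can be sketched generically. The snippet below is a minimal illustration (not the authors' Utah Energy Balance setup), estimating first-order indices with a Saltelli-style estimator on the unit hypercube:

```python
import numpy as np

def sobol_first_order(f, dim, n, rng):
    """Estimate first-order Sobol' indices S_i of f : R^dim -> R on the
    unit hypercube, using the Saltelli pick-and-freeze estimator."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)   # total output variance
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # vary only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var     # V_i / V
    return S
```

    For an additive model such as f(x) = 2·x1 + x2 with uniform inputs, the indices converge to 0.8 and 0.2, matching the analytic variance decomposition.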

  20. Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East

    NASA Astrophysics Data System (ADS)

    Aleksanin, A. I.; Kachur, V. A.

    2017-12-01

    Calculation errors in ocean-brightness coefficients in the Far East are analyzed for two atmospheric correction algorithms (NIR and MUMM). The daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of this error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of light-absorbing continental aerosol. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to 10 times for similar water turbidity and relatively high aerosol optical-depth computation precision when the NIR correction of the atmospheric effect is used.

  1. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVRs operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  2. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  3. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  4. Error Correction: A Cognitive-Affective Stance

    ERIC Educational Resources Information Center

    Saeed, Aziz Thabit

    2007-01-01

    This paper investigates the application of some of the most frequently used writing error correction techniques to see the extent to which this application takes learners' cognitive and affective characteristics into account. After showing how unlearned application of these styles could be discouraging and/or damaging to students, the paper…

  5. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
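
    The core of the M-C approach can be illustrated with a toy version: perturb a "true" index value with Gaussian measurement error, reclassify each draw, and count the fraction that lands outside the true class. The class boundaries and error model below are hypothetical stand-ins, not the WFD metrics used in the paper.

```python
import random

def misclassification_probability(true_value, sigma, boundaries,
                                  n=100000, seed=1):
    """Estimate the probability that Gaussian measurement error moves an
    index value across a class boundary (hypothetical class limits)."""
    rng = random.Random(seed)

    def classify(x):
        # Class = number of boundaries at or below x (0 .. len(boundaries)).
        return sum(b <= x for b in boundaries)

    true_class = classify(true_value)
    wrong = sum(classify(true_value + rng.gauss(0.0, sigma)) != true_class
                for _ in range(n))
    return wrong / n
```

    A value two standard deviations above a single boundary is misclassified in roughly 2% of draws, which matches the Gaussian tail probability.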

  6. Effects of sterilization treatments on the analysis of TOC in water samples.

    PubMed

    Shi, Yiming; Xu, Lingfeng; Gong, Dongqin; Lu, Jun

    2010-01-01

    Decomposition experiments conducted with and without microbial processes are commonly used to study the effects of environmental microorganisms on the degradation of organic pollutants. However, the effects of biological pretreatment (sterilization) on organic matter often have a negative impact on such experiments. Based on the principle of water total organic carbon (TOC) analysis, the effects of physical sterilization treatments on determination of TOC and other water quality parameters were investigated. The results revealed that two conventional physical sterilization treatments, autoclaving and 60Co gamma-radiation sterilization, led to the direct decomposition of some organic pollutants, resulting in remarkable errors in the analysis of TOC in water samples. Furthermore, the extent of the errors varied with the intensity and the duration of sterilization treatments. Accordingly, a novel sterilization method for water samples, 0.45 microm micro-filtration coupled with ultraviolet radiation (MCUR), was developed in the present study. The results indicated that the MCUR method was capable of exerting a high bactericidal effect on the water sample while significantly decreasing the negative impact on the analysis of TOC and other water quality parameters. Before and after sterilization treatments, the relative errors of TOC determination could be controlled to lower than 3% for water samples with different categories and concentrations of organic pollutants by using MCUR.

  7. Extending High-Order Flux Operators on Spherical Icosahedral Grids and Their Applications in the Framework of a Shallow Water Model

    NASA Astrophysics Data System (ADS)

    Zhang, Yi

    2018-01-01

    This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.

  8. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
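
    The two comparison statistics used in the study are standard and easy to reproduce. A minimal sketch with hypothetical data, not the study's measurements:

```python
import statistics

def mape(predicted, reference):
    """Mean absolute percentage error between predicted and reference values."""
    return 100.0 * statistics.mean(
        abs(p - r) / abs(r) for p, r in zip(predicted, reference))

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement (bias ± 1.96 SD of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    For example, predictions of 42.0 and 38.0 L against a reference of 40.0 L in both cases give a MAPE of 5%.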

  9. A modified Holly-Preissmann scheme for simulating sharp concentration fronts in streams with steep velocity gradients using RIV1Q

    NASA Astrophysics Data System (ADS)

    Liu, Zhao-wei; Zhu, De-jun; Chen, Yong-can; Wang, Zhi-gang

    2014-12-01

    RIV1Q is the stand-alone water quality program of CE-QUAL-RIV1, a hydraulic and water quality model developed by the U.S. Army Corps of Engineers Waterways Experiment Station. It utilizes an operator-splitting algorithm, and the advection term in the governing equation is treated using the explicit two-point, fourth-order accurate Holly-Preissmann scheme in order to preserve numerical accuracy for advection of sharp gradients in concentration. In the scheme, the spatial derivative of the transport equation, which includes the derivative of velocity, is introduced to update the first derivative of the dependent variable. In streams with large cross-sectional variation, steep velocity gradients are common and should be estimated correctly. In the original version of RIV1Q, however, the derivative of velocity is approximated by a finite difference that is only first-order accurate. Its leading truncation error produces a numerical error in concentration that is related to the velocity and concentration gradients and increases with decreasing Courant number. The simulation may also become unstable when a sharp velocity drop occurs. In the present paper, the derivative of velocity is estimated with a modified second-order accurate scheme, and the corresponding numerical error in concentration decreases. Additionally, the stability of the simulation is improved. The modified scheme is verified with a hypothetical channel case, and the results demonstrate that satisfactory accuracy and stability can be achieved even when the Courant number is very low. Finally, the applicability of the modified scheme is discussed.
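
    The accuracy difference between the original and modified velocity-derivative estimates can be illustrated with standard finite differences. This is a generic sketch of first-order versus second-order differencing, not the RIV1Q source code:

```python
def forward_diff(u, i, dx):
    """First-order one-sided estimate of du/dx; truncation error is O(dx)."""
    return (u[i + 1] - u[i]) / dx

def central_diff(u, i, dx):
    """Second-order central estimate of du/dx; truncation error is O(dx**2)."""
    return (u[i + 1] - u[i - 1]) / (2.0 * dx)
```

    Sampling u(x) = x³ on a grid with dx = 0.1 around x = 1 (exact derivative 3), the forward difference errs by about 0.31 while the central difference errs by about 0.01, showing the order-of-accuracy gain.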

  10. Estimation of water level and steam temperature using ensemble Kalman filter square root (EnKF-SR)

    NASA Astrophysics Data System (ADS)

    Herlambang, T.; Mufarrikoh, Z.; Karya, D. F.; Rahmalia, D.

    2018-04-01

    The equipment unit with the most vital role in a steam-powered electric power plant is the boiler. The steam drum of a boiler is a tank that separates the fluid into a gas phase and a liquid phase, and it plays a vital role in the boiler system. The controlled variables in the steam drum boiler are the water level and the steam temperature. If the water level rises above the set point, the gas phase will contain moisture that endangers the downstream process, reduces the steam delivered to the turbine, and can damage the pipes in the boiler. Conversely, if the water level falls below the set point, the result will be dry steam that is likely to endanger the steam drum. This paper studies the implementation of the Ensemble Kalman Filter Square Root (EnKF-SR) method on a nonlinear model of the steam drum boiler equations. The water level and steam temperature were estimated by simulation using Matlab software, and the error between the set-point water level and steam temperature and their estimated values was observed. The simulation of EnKF-SR on the nonlinear steam drum boiler model showed that this error was less than 2%. The implementation of EnKF-SR on the steam drum boiler model comprises three simulations, generating 200, 300, and 400 ensembles, respectively. The best simulation, generating 400 ensembles, exhibited an error between the real condition and the estimated result of about 0.00002145 m for the water level and about 0.00002121 kelvin for the steam temperature.
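
    A minimal ensemble Kalman filter analysis step conveys the flavor of the approach. The paper uses the square-root variant, which avoids perturbing observations; the sketch below is the plain stochastic form with made-up dimensions, not the authors' boiler model:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_op, obs_var, rng):
    """One stochastic EnKF analysis step.

    ensemble : (n_members, n_state) forecast ensemble
    obs      : (n_obs,) observation vector
    obs_op   : (n_obs, n_state) linear observation operator H
    obs_var  : observation error variance (scalar)
    """
    n, _ = ensemble.shape
    HX = ensemble @ obs_op.T                        # predicted observations
    Xp = ensemble - ensemble.mean(0)                # state anomalies
    HXp = HX - HX.mean(0)                           # observation anomalies
    P_xy = Xp.T @ HXp / (n - 1)                     # state-obs covariance
    P_yy = HXp.T @ HXp / (n - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    # Perturbed observations keep the analysis spread statistically correct.
    perturbed = obs + rng.normal(0.0, obs_var ** 0.5, size=(n, len(obs)))
    return ensemble + (perturbed - HX) @ K.T
```

    With a scalar state observed directly and a small observation variance, the analysis ensemble collapses toward the observation, which is the behavior the boiler estimates rely on.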

  11. Introduction to the Application of Web-Based Surveys.

    ERIC Educational Resources Information Center

    Timmerman, Annemarie

    This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…

  12. Purification of Logic-Qubit Entanglement.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-07-05

    Recently, logic-qubit entanglement has shown potential applications in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, and can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks.

  13. Water-quality models to assess algal community dynamics, water quality, and fish habitat suitability for two agricultural land-use dominated lakes in Minnesota, 2014

    USGS Publications Warehouse

    Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.

    2017-07-20

    Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate the loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota (Madison Lake and Pearl Lake, both part of Minnesota's sentinel lakes monitoring program) to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts the distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of two metrics: mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, they were 0.71 and 0.95 degree Celsius, respectively. Temperature and dissolved oxygen were key calibration targets. These calibrated lake models also simulated algal community dynamics and water quality.
    The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and the persistence of summer blue-green algal blooms. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer, without any periods of lethal oxythermal habitat. For Madison and Pearl Lakes, examples of important cool-water fish, particularly game fish, include northern pike (Esox lucius), walleye (Sander vitreus), and black crappie (Pomoxis nigromaculatus); examples of important warm-water fish include bluegill (Lepomis macrochirus), largemouth bass (Micropterus salmoides), and smallmouth bass (Micropterus dolomieu). Sensitivity analyses were completed to understand lake response effects through controlled departures on certain calibrated model parameters and input nutrient loads. These sensitivity analyses also operated as land-use change scenarios, because alterations in agricultural practices, for example, could potentially increase or decrease nutrient loads.
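
    The calibration metrics quoted above (mean absolute error and root mean square error) are straightforward to compute. A generic sketch, not USGS code:

```python
import math

def mae(simulated, observed):
    """Mean absolute error between simulated and observed profiles."""
    return sum(abs(s - o) for s, o in zip(simulated, observed)) / len(observed)

def rmse(simulated, observed):
    """Root mean square error; penalizes large deviations more than MAE."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))
```

    Because squaring weights outliers more heavily, RMSE is always at least as large as MAE for the same comparison, which is why the paired values reported for each lake bracket the typical temperature misfit.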

  14. Improved methods for estimating local terrestrial water dynamics from GRACE in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Seyoum, Wondwosen M.; Milewski, Adam M.

    2017-12-01

    Investigating terrestrial water cycle dynamics is vital for understanding recent climatic variability and human impacts on the hydrologic cycle. In this study, a downscaling approach was developed and tested to improve the applicability of terrestrial water storage (TWS) anomaly data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission for understanding local terrestrial water cycle dynamics in the Northern High Plains region. A non-parametric, artificial neural network (ANN)-based model was utilized to downscale GRACE data by integrating it with hydrological variables (e.g., soil moisture) derived from satellite and land surface model data. The downscaling model, constructed through calibration and sensitivity analysis, was used to estimate the TWS anomaly for watersheds ranging from 5000 to 20,000 km2 in the study area. The downscaled water storage anomaly data were evaluated using water storage data derived from (1) an integrated hydrologic model, (2) a land surface model (Noah), and (3) storage anomalies calculated from in-situ groundwater level measurements. Results demonstrate that the ANN predicts the monthly TWS anomaly within the uncertainty (conservative error estimate = 34 mm) for most of the watersheds. The seasonal groundwater storage anomaly (GWSA) derived from the ANN correlated well (r = ∼0.85) with GWSAs calculated from in-situ groundwater level measurements for watersheds as small as 6000 km2. At the local scale, the ANN-downscaled TWSA matches Noah-based TWSA more closely than the standard GRACE-extracted TWSA does. Moreover, the ANN-downscaled change in TWS replicated the water storage variability resulting from the combined effect of climatic and human impacts (e.g., abstraction). The implications of utilizing finer resolution GRACE data for improving local and regional water resources management decisions and applications are clear, particularly in areas lacking in-situ hydrologic monitoring networks.
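The core of the approach described above is a regression network that maps coarse TWS plus auxiliary land-surface variables to a local storage anomaly. As a rough illustration of that kind of model (not the authors' code), the sketch below trains a one-hidden-layer network by full-batch gradient descent on synthetic stand-in data; the features, target, and network size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: three predictor columns playing the role of
# coarse GRACE TWS anomaly, soil moisture, and precipitation, with the
# local (downscaled) TWS anomaly as the target.
n, d, h = 500, 3, 16
X = rng.normal(size=(n, d))
y = X @ np.array([0.6, -0.3, 0.8]) + 0.05 * rng.normal(size=n)

# One-hidden-layer tanh network, trained by full-batch gradient descent.
W1 = 0.5 * rng.normal(size=(d, h)); b1 = np.zeros(h)
w2 = 0.5 * rng.normal(size=h);      b2 = 0.0

def forward(X):
    z = np.tanh(X @ W1 + b1)        # hidden activations
    return z, z @ w2 + b2           # activations and prediction

for _ in range(3000):
    z, pred = forward(X)
    err = pred - y                  # seed of the MSE gradient
    gz = np.outer(err, w2) * (1 - z ** 2)   # backprop through tanh
    w2 -= 0.1 * (z.T @ err) / n
    b2 -= 0.1 * err.mean()
    W1 -= 0.1 * (X.T @ gz) / n
    b1 -= 0.1 * gz.mean(axis=0)

rmse = float(np.sqrt(np.mean((forward(X)[1] - y) ** 2)))
```

The real model would be calibrated and cross-validated against the integrated hydrologic model and in-situ GWSA data described in the abstract.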

  15. Evaluation of the performance of hydrological variables derived from GLDAS-2 and MERRA-2 in Mexico

    NASA Astrophysics Data System (ADS)

    Real-Rangel, R. A.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.

    2017-12-01

    Hydrological studies have found in data assimilation systems and global reanalyses of land surface variables (e.g., soil moisture, streamflow) a wide range of applications, from drought monitoring to water balance and hydro-climatic variability assessment. Indeed, these hydrological data sources have led to improvements in developing and testing monitoring and prediction systems in poorly gauged regions of the world. This work assesses the accuracy and error of land surface variables (precipitation, soil moisture, runoff, and temperature) derived from the data assimilation reanalysis products GLDAS-2 and MERRA-2. The performance of these data platforms must be thoroughly evaluated before the hydrological variables derived from them can be used with known error. For this purpose, a quantitative assessment was performed at 2,892 climatological stations, 42 stream gauges, and 44 soil moisture probes located in Mexico across different climate regimes (hyper-arid to tropical humid). Results compare these gridded products against ground-based observational stations for 1979-2014, displaying the spatial distribution of errors and accuracy over Mexico and discussing differences between climates, enabling the informed use of these products.
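The station-versus-grid-cell comparison described above reduces, at each site, to a handful of standard error metrics. A minimal sketch (the data below are invented for illustration; only the metric definitions are standard):

```python
import numpy as np

def bias_rmse_corr(obs, sim):
    """Point-scale validation metrics for a gridded product evaluated
    against station observations: mean bias, RMSE, and correlation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = float(np.mean(sim - obs))                       # mean error
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))       # magnitude of error
    r = float(np.corrcoef(obs, sim)[0, 1])                 # agreement in pattern
    return bias, rmse, r

# Hypothetical example: monthly precipitation (mm) at one station vs the
# co-located reanalysis grid cell.
obs = [80.0, 55.0, 120.0, 10.0, 0.0, 65.0]
sim = [72.0, 60.0, 110.0, 15.0, 5.0, 70.0]
bias, rmse, r = bias_rmse_corr(obs, sim)
```

Mapping these per-station values produces the spatial error distributions the abstract refers to.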

  16. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.

  17. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high-fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
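The interframe idea above (quantize coordinates, delta-encode successive frames, then entropy-code the small residuals) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: zlib stands in for their entropy coder, and the 12-bit grid over an assumed coordinate extent bounds the positional error.

```python
import zlib

import numpy as np

def compress_traj(frames, bits=12, extent=10.0):
    """Quantize onto a 2**bits grid spanning `extent`, delta-encode
    successive frames (temporal coherence), then entropy-code."""
    scale = (2 ** bits - 1) / extent
    q = np.round(np.asarray(frames) * scale).astype(np.int32)
    payload = np.concatenate([q[:1], np.diff(q, axis=0)], axis=0)
    return zlib.compress(payload.tobytes(), 9), q.shape, scale

def decompress_traj(blob, shape, scale):
    flat = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    # Cumulative sum over frames undoes the delta encoding.
    return np.cumsum(flat.reshape(shape), axis=0) / scale

# Synthetic trajectory: 200 frames of 50 atoms drifting slowly, so
# successive frames are highly correlated and deltas are tiny.
rng = np.random.default_rng(1)
traj = np.cumsum(0.01 * rng.normal(size=(200, 50, 3)), axis=0) + 5.0
blob, shape, scale = compress_traj(traj)
recon = decompress_traj(blob, shape, scale)
```

The reconstruction error is bounded by half a quantum, 0.5/scale, which is the "maximum error" knob the abstract describes.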

  18. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow-band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map: water, forest, cropland, grassland, snow, desert, and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits, and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary four-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging, and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2, or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne and ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down four-fold to 3%.
More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy, and commands uplinked once a day. The formations can be maintained with less than 1 m/s of monthly ΔV per satellite.

  19. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. 
In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted.

  20. Analytical three-point Dixon method: With applications for spiral water-fat imaging.

    PubMed

    Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G

    2016-02-01

    The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions for water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map, after a region-growing algorithm, is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single-breathhold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with sharpness comparable to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.

  1. Spectral Band Characterization for Hyperspectral Monitoring of Water Quality

    NASA Technical Reports Server (NTRS)

    Vermillion, Stephanie C.; Raqueno, Rolando; Simmons, Rulon

    2001-01-01

    A method for selecting the set of spectral characteristics that provides the smallest increase in prediction error is of interest to those using hyperspectral imaging (HSI) to monitor water quality. The spectral characteristics of interest to these applications are spectral bandwidth and location. Three water quality constituents that are detectable via remote sensing are chlorophyll (CHL), total suspended solids (TSS), and colored dissolved organic matter (CDOM). Hyperspectral data provide a rich source of information regarding the content and composition of these materials, but often more data than an analyst can manage. This study addresses the spectral characterization needed for water quality monitoring for two reasons. First, determining which spectral characteristics contribute most would greatly improve computational ease and efficiency. Second, understanding the capabilities of different spectral resolutions and specific spectral regions is an essential part of future system development and characterization. As new systems are developed and tested, water quality managers will be asked to determine sensor specifications that provide the most accurate and efficient water quality measurements. We address these issues using data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and a set of models to predict constituent concentrations.

  2. Extreme learning machine: a new alternative for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters.

    PubMed

    Liu, Zhijian; Li, Hao; Tang, Xindong; Zhang, Xinyu; Lin, Fan; Cheng, Kewei

    2016-01-01

    Heat collection rate and heat loss coefficient are crucial indicators for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, their direct determination requires complex detection devices and a series of standard experiments, consuming considerable time and manpower. To address this problem, we previously used artificial neural networks and support vector machines to develop precise knowledge-based models for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters, setting the properties measured by "portable test instruments" as the independent variables. Robust software for this determination was also developed. However, in previous results, the prediction accuracy for heat loss coefficients could still be improved compared to that for heat collection rates. Also, in practical applications, even a small reduction in root mean square error (RMSE) can sometimes significantly improve evaluation and business processes. As a further study, in this short report we show that a novel and fast machine learning algorithm, the extreme learning machine, can generate better predicted results for the heat loss coefficient, reducing the average RMSE to 0.67 in testing.
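An extreme learning machine is simple to sketch: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form by (ridge-regularized) least squares, which is what makes it fast. The data below are hypothetical stand-ins for the heater measurements, not the study's data set.

```python
import numpy as np

def elm_fit(X, y, n_hidden=64, ridge=1e-6, seed=0):
    """ELM: random input weights, sigmoid hidden layer, output weights
    solved by regularized least squares (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # random hidden features
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Hypothetical features playing the role of portable-instrument readings,
# with a smooth nonlinear target standing in for the heat loss coefficient.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=400)
W, b, beta = elm_fit(X[:300], y[:300])
rmse = float(np.sqrt(np.mean((elm_predict(X[300:], W, b, beta) - y[300:]) ** 2)))
```

Because the only fitted quantity is `beta`, training cost is a single linear solve, which is the speed advantage the abstract emphasizes.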

  3. Data from selected U.S. Geological Survey National Stream Water Quality Monitoring Networks

    USGS Publications Warehouse

    Alexander, Richard B.; Slack, James R.; Ludtke, Amy S.; Fitzgerald, Kathleen K.; Schertz, Terry L.

    1998-01-01

    A nationally consistent and well-documented collection of water quality and quantity data compiled during the past 30 years for streams and rivers in the United States is now available on CD-ROM and accessible over the World Wide Web. The data include measurements from two U.S. Geological Survey (USGS) national networks for 122 physical, chemical, and biological properties of water collected at 680 monitoring stations from 1962 to 1995, quality assurance information that describes the sample collection agencies, laboratories, analytical methods, and estimates of laboratory measurement error (bias and variance), and information on selected cultural and natural characteristics of the station watersheds. The data are easily accessed via user-supplied software including Web browser, spreadsheet, and word processor, or may be queried and printed according to user-specified criteria using the supplied retrieval software on CD-ROM. The water quality data serve a variety of scientific uses including research and educational applications related to trend detection, flux estimation, investigations of the effects of the natural environment and cultural sources on water quality, and the development of statistical methods for designing efficient monitoring networks and interpreting water resources data.

  4. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
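The published form of this parameterization (Stockdon et al., 2006) combines the setup and swash terms as R2% = 1.1(0.35 βf √(H0 L0) + √(H0 L0 (0.563 βf² + 0.004)) / 2), with L0 the deep-water wavelength. A direct transcription; the example wave conditions are chosen arbitrarily for illustration:

```python
import math

def runup_2pct(H0, T, beta_f, g=9.81):
    """2% exceedence runup from the Stockdon et al. (2006) general
    parameterization (non-dissipative conditions).
    H0: deep-water wave height (m), T: peak period (s),
    beta_f: foreshore beach slope (dimensionless)."""
    L0 = g * T ** 2 / (2 * math.pi)                 # deep-water wavelength
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)      # wave setup term
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)

# Example: 2 m waves, 10 s period, foreshore slope 0.08.
r2 = runup_2pct(2.0, 10.0, 0.08)
```

The swash radical carries both the incident (slope-dependent) and infragravity (slope-independent) contributions described in the abstract.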

  5. Regression models of discharge and mean velocity associated with near-median streamflow conditions in Texas: utility of the U.S. Geological Survey discharge measurement database

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    A database containing more than 16,300 discharge values and ancillary hydraulic attributes was assembled from summaries of discharge measurement records for 391 USGS streamflow-gauging stations (streamgauges) in Texas. Each discharge is between the 40th- and 60th-percentile daily mean streamflow as determined by period-of-record, streamgauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made for near-median streamflow conditions, and such conditions are conceptualized as representative of midrange to baseflow conditions in much of the state. The hydraulic attributes of each discharge measurement included concomitant cross-section flow area, water-surface top width, and reported mean velocity. Two regression equations are presented: (1) an expression for discharge and (2) an expression for mean velocity, both as functions of selected hydraulic attributes and watershed characteristics. Specifically, the discharge equation uses cross-sectional area, water-surface top width, contributing drainage area of the watershed, and mean annual precipitation of the location; the equation has an adjusted R-squared of approximately 0.95 and residual standard error of approximately 0.23 base-10 logarithm (cubic meters per second). The mean velocity equation uses discharge, water-surface top width, contributing drainage area, and mean annual precipitation; the equation has an adjusted R-squared of approximately 0.50 and residual standard error of approximately 0.087 third root (meters per second). Residual plots from both equations indicate that reliable estimates of discharge and mean velocity at ungauged stream sites are possible. Further, the relation between contributing drainage area and main-channel slope (a measure of whole-watershed slope) is depicted to aid analyst judgment of equation applicability for ungauged sites. 
Example applications and computations are provided and discussed within a real-world, discharge-measurement scenario, and an illustration of the development of a preliminary stage-discharge relation using the discharge equation is given.

  6. On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification

    PubMed Central

    Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.

    2014-01-01

    Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362

  7. Application of LA-MC-ICP-MS for analysis of Sr isotope ratios in speleothems

    NASA Astrophysics Data System (ADS)

    Weber, Michael; Scholz, Denis; Wassenburg, Jasper A.; Jochum, Klaus Peter; Breitenbach, Sebastian

    2017-04-01

    Speleothems are well established climate archives. In order to reconstruct past climate variability, several geochemical proxies, such as δ13C and δ18O as well as trace elements, are available. Since several factors influence each individual proxy, robust interpretation is often hampered. This calls for multi-proxy approaches involving additional isotope systems that can help to delineate the role of different sources of water within the epikarst and changes in soil composition. Sr isotope ratios (87Sr/86Sr) have been shown to provide useful information about water residence time and water mixing in the host rock. Furthermore, Sr isotopes are not fractionated during calcite precipitation, implying that the 87Sr/86Sr ratio of the speleothem provides a direct record of the drip water. While most speleothem studies applying Sr isotopes have used the TIMS methodology, LA-MC-ICP-MS has been utilized for several other archives, such as otoliths and teeth. This method provides the advantages of faster data acquisition, higher spatial resolution, larger sample throughput, and the absence of chemical treatment prior to analysis. Here we present the first LA-MC-ICP-MS Sr isotope data for speleothems. The analytical uncertainty of our LA-MC-ICP-MS Sr data is in a similar range to that for other carbonate materials. The results of different ablation techniques (i.e., line scans and spots) are reproducible within error, implying that the application of this technique to speleothems is possible. In addition, several comparative measurements of different carbonate reference materials (i.e., MACS-3, JCt-1, JCp-1), such as tests with standard bracketing and comparison of the 87Sr/86Sr ratios between a nanosecond laser ablation system and a state-of-the-art femtosecond laser ablation system, show the robustness of the method. We applied the method to samples from Morocco (Grotte de Piste) and India (Mawmluh Cave). Our results show only very small changes in the 87Sr/86Sr ratios of both speleothems.
However, one speleothem from Mawmluh Cave shows a slight increase of 87Sr/86Sr within the error, which is reproducible with line scans and spots.

  8. Spatial modeling on the upperstream of the Citarum watershed: An application of geoinformatics

    NASA Astrophysics Data System (ADS)

    Ningrum, Windy Setia; Widyaningsih, Yekti; Indra, Tito Latif

    2017-03-01

    The Citarum watershed is the longest and largest watershed in West Java, Indonesia, located at 106°51'36''-107°51'E and 7°19'-6°24'S across 10 districts, and serves as the water supply for over 15 million people. In this area, the water criticality index is used to assess the balance between water supply and water demand, so that even in the dry season the watershed can still meet the water needs of the communities along the Citarum river. The objective of this research is to evaluate the water criticality index of the Citarum watershed area using spatial models to account for the spatial dependencies in the data. Lagrange multiplier diagnostics for spatial dependence yield LM-err = 34.6 (p-value = 4.1e-09) and LM-lag = 8.05 (p-value = 0.005), so modeling with a Spatial Lag Model (SLM) and a Spatial Error Model (SEM) was conducted. Likelihood ratio tests show that both the SLM and the SEM are better than the OLS model for modeling the water criticality index in the Citarum watershed. The AIC values of the SLM and SEM are 78.9 and 51.4, respectively, so the SEM is better than the SLM at predicting the water criticality index in the Citarum watershed.
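The model selection step above uses the Akaike information criterion, AIC = 2k - 2 ln(L), with the smallest value preferred. In the sketch below the log-likelihoods and parameter count (k = 5) are back-computed assumptions chosen so that the SLM and SEM scores reproduce the reported 78.9 and 51.4; the OLS entry is purely hypothetical.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods (ln L, k) for the competing models;
# SLM and SEM values are chosen to reproduce the AICs in the abstract.
fits = {"OLS": (-42.0, 4), "SLM": (-34.45, 5), "SEM": (-20.7, 5)}
scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
```

Because the SLM and SEM have the same number of parameters here, the AIC comparison between them reduces to comparing their likelihoods.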

  9. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and tooth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares (LSM) regression model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. Direct comparison of paleotemperature results from different laboratories and studies can only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016.
Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
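The inverted-versus-transposed contrast above can be reproduced on synthetic data: regressing temperature directly on δ18O (the transposed, predictive fit) gives smaller in-sample prediction errors than algebraically inverting the δ18O-on-temperature (causal) fit. All numbers below are simulated for illustration, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set standing in for (air temperature, phosphate
# d18O) pairs: a linear relation with realistic scatter.
t = rng.uniform(-5.0, 25.0, size=300)
d18o = 0.5 * t + 10.0 + rng.normal(scale=3.0, size=300)

# Causal (inverted) model: fit d18O on temperature, then invert it.
b1, a1 = np.polyfit(t, d18o, 1)
t_inverted = (d18o - a1) / b1

# Predictive (transposed) model: fit temperature on d18O directly.
b2, a2 = np.polyfit(d18o, t, 1)
t_transposed = a2 + b2 * d18o

rmse_inv = float(np.sqrt(np.mean((t_inverted - t) ** 2)))
rmse_tra = float(np.sqrt(np.mean((t_transposed - t) ** 2)))
mae_inv = float(np.mean(np.abs(t_inverted - t)))
mae_tra = float(np.mean(np.abs(t_transposed - t)))
```

The transposed fit can never do worse in squared error here, since ordinary least squares of t on δ18O is optimal among all linear predictors of t, including the inverted one.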

  10. Application of enzyme-linked immunosorbent assay for measurement of polychlorinated biphenyls from hydrophobic solutions: Extracts of fish and dialysates of semipermeable membrane devices: Chapter 26

    USGS Publications Warehouse

    Zajicek, James L.; Tillitt, Donald E.; Huckins, James N.; Petty, Jimmie D.; Potts, Michael E.; Nardone, David A.

    1996-01-01

    Determination of PCBs in biological tissue extracts by enzyme-linked immunosorbent assays (ELISAs) can be problematic, since the hydrophobic solvents used for their extraction and isolation from interfering biochemicals have limited compatibility with the polar solvents (e.g., methanol/water) and the immunochemical reagents used in ELISA. Our studies of these solvent effects indicate that significant errors can occur when microliter volumes of PCB-containing extracts, in hydrophobic solvents, are diluted directly into methanol/water diluents. Errors include low recovery and excess variability among sub-samples taken from the same sample dilution. These errors are associated with inhomogeneity of the dilution, which is readily visualized by the use of a hydrophobic dye, Solvent Blue 35. Solvent Blue 35 is also used to visualize the evaporative removal of hydrophobic solvent and the dissolution of the resulting PCB/dye residue by pure methanol and 50% (v/v) methanol/water, typical ELISA diluents. Evaporative removal of isooctane by an ambient-temperature nitrogen purge with subsequent dissolution in 100% methanol gives near-quantitative recovery of model PCB congeners. We also compare concentrations of total PCBs from ELISA (ePCB) to their corresponding concentrations determined by capillary gas chromatography (GC) in selected fish sample extracts and dialysates of semipermeable membrane device (SPMD) passive samplers using an optimized solvent exchange procedure. Based on Aroclor 1254 calibrations, ePCBs (ng/mL) determined in fish extracts are positively correlated with total PCB concentrations (ng/mL) determined by GC: ePCB = 1.16 * total-cPCB - 5.92. Measured ePCBs (ng/3 SPMDs) were also positively correlated (r2 = 0.999) with PCB totals (ng/3 SPMDs) measured by GC for dialysates of SPMDs: ePCB = 1.52 * total PCB - 212.
Therefore, this ELISA system for PCBs can be a rapid alternative to traditional GC analyses for determination of PCBs in extracts of biota or in SPMD dialysates.
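The calibration lines reported above can be used to sketch how a GC-measured total PCB concentration maps to the expected ELISA response. A minimal illustration (the function names are our own; the slopes and intercepts are the values reported in the abstract):

```python
def epcb_fish_extract(total_pcb_ng_ml: float) -> float:
    """ePCB (ng/mL) predicted from GC total PCBs in fish extracts,
    using the reported Aroclor 1254 calibration line."""
    return 1.16 * total_pcb_ng_ml - 5.92

def epcb_spmd_dialysate(total_pcb_ng_3spmd: float) -> float:
    """ePCB (ng/3 SPMDs) predicted from GC total PCBs in SPMD dialysates."""
    return 1.52 * total_pcb_ng_3spmd - 212.0

if __name__ == "__main__":
    print(epcb_fish_extract(100.0))   # 1.16*100 - 5.92 = 110.08
```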

  11. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watersheds

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of varying concentration dynamics with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions.
For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
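As an illustration of the simplest of the three techniques, the period-weighted approach can be sketched as assigning each sampled concentration to the interval reaching halfway to its neighboring samples and integrating concentration times discharge over those intervals. This is a minimal generic sketch of the idea, not the authors' exact implementation:

```python
import numpy as np

def period_weighted_load(t_s, conc_mg_l, q_m3_s):
    """Period-weighted load estimate.

    t_s: sample times (s); conc_mg_l: concentrations (mg/L);
    q_m3_s: discharge at the sample times (m^3/s).
    Each sample represents the interval reaching halfway to its neighbors.
    mg/L * m^3/s * s = g, so the result is divided by 1000 to return kg.
    """
    t = np.asarray(t_s, dtype=float)
    c = np.asarray(conc_mg_l, dtype=float)
    q = np.asarray(q_m3_s, dtype=float)
    # interval boundaries at midpoints between consecutive samples
    edges = np.concatenate(([t[0]], (t[:-1] + t[1:]) / 2.0, [t[-1]]))
    dt = np.diff(edges)                  # seconds represented by each sample
    return float(np.sum(c * q * dt) / 1000.0)
```

With constant 1 mg/L and 1 m³/s over one day, the load is 86 400 g = 86.4 kg, which is a quick sanity check of the units.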

  12. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    GNSS-based single-antenna pseudo-attitude determination has attracted increasing attention in the field of high-dynamic navigation due to its low cost, low system complexity, and freedom from temporally accumulated errors. Related research indicates that this method can be an important complement, or even an alternative, to traditional sensors for general accuracy requirements (such as small UAV navigation). Application of the single-antenna attitude determination method to low-dynamic carriers has only just begun. Unlike the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth, so it inevitably deviates somewhat from the true attitude angle. In low-dynamic applications these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: measurement error, offset error, and lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but they lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic

  13. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography

    DTIC Science & Technology

    1980-03-01

    interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of

  14. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

    A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values.
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
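The nonlinear least-squares calibration described above can be illustrated with a minimal Gauss-Newton iteration on a toy model. This is a generic sketch of the technique under an invented two-parameter exponential "head" model, not the USGS ground-water flow code:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=50):
    """Minimal Gauss-Newton solver: minimize ||residual(p)||^2 by
    repeatedly linearizing the residual about the current parameters."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)                              # residual vector at p
        J = jacobian(p)                              # Jacobian of the residual
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Toy calibration: fit (amplitude, decay) in h(x) = a * exp(-b * x) to
# synthetic "observed heads" (illustrative only).
x = np.array([0.0, 1.0, 2.0, 3.0])
h_obs = 2.0 * np.exp(-0.5 * x)

def residual(p):
    return p[0] * np.exp(-p[1] * x) - h_obs

def jacobian(p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])       # d/da, d/db

p_fit = gauss_newton(residual, jacobian, p0=[1.9, 0.52])
```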

  15. Effects of rain and fog on the Shuttle Ku-band microwave scanning beam landing system range and accuracy performance

    NASA Technical Reports Server (NTRS)

    Butler, D.

    1981-01-01

    The microwave Scanning Beam Landing System's (MSBLS) performance in fog and rain was studied. The fog and rain effects on the Shuttle Ku-band system were determined. Specifically, microwave attenuation, beam distortion, and coordinate errors resulting from operation of the MSBLS in poor weather conditions were evaluated. The main physical processes giving rise to microwave attenuation were found to be absorption and scattering by water droplets. The general theory of scattering and absorption used is discussed and a listing of applicable computer programs is provided.

  16. Remote estimation of colored dissolved organic matter and chlorophyll-a in Lake Huron using Sentinel-2 measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jiang; Zhu, Weining; Tian, Yong Q.; Yu, Qian; Zheng, Yuhan; Huang, Litong

    2017-07-01

    Colored dissolved organic matter (CDOM) and chlorophyll-a (Chla) are important water quality parameters and play crucial roles in aquatic environments. Remote sensing of CDOM and Chla concentrations in inland lakes is often limited by low spatial resolution. The newly launched Sentinel-2 satellite provides high spatial resolution (10, 20, and 60 m). Empirical band-ratio models were developed to derive CDOM and Chla concentrations in Lake Huron. The leave-one-out cross-validation method was used for model calibration and validation. The best CDOM retrieval algorithm is a B3/B5 model with coefficient of determination (R2) = 0.884, root-mean-squared error (RMSE) = 0.731 m-1, relative root-mean-squared error (RRMSE) = 28.02%, and bias = -0.1 m-1. The best Chla retrieval algorithm is a B5/B4 model with R2 = 0.49, RMSE = 9.972 mg/m3, RRMSE = 48.47%, and bias = -0.116 mg/m3. Neural network models were further implemented to improve inversion accuracy. Application of the two best band-ratio models to Sentinel-2 imagery with 10 m × 10 m pixel size demonstrated the sensor's high potential for monitoring the water quality of inland lakes.
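A minimal sketch of an empirical band-ratio model with leave-one-out cross-validation, the validation scheme named above. The data values here are hypothetical placeholders, not the study's Sentinel-2 reflectances or measured CDOM:

```python
import numpy as np

def loocv_linear(x, y):
    """Leave-one-out cross-validation of a linear model y ≈ a*x + b:
    refit with each sample withheld and predict the withheld sample."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        a, b = np.polyfit(x[keep], y[keep], 1)   # fit without sample i
        preds[i] = a * x[i] + b                  # predict the held-out sample
    return preds

# Hypothetical band-ratio values (e.g., B3/B5) and CDOM absorption (m^-1):
ratio = np.array([0.8, 1.1, 1.6, 2.0, 2.7])
cdom = np.array([4.1, 3.2, 2.0, 1.4, 0.6])
pred = loocv_linear(ratio, cdom)
rmse = float(np.sqrt(np.mean((pred - cdom) ** 2)))
```

The RMSE of the held-out predictions is the cross-validated analogue of the accuracy figures quoted in the abstract.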

  17. The Error Reporting in the ATLAS TDAQ System

    NASA Astrophysics Data System (ADS)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where they can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in the three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each exploits advanced features of its language to simplify end-user programming. For example, because C++ lacks language facilities for declaring rich exception class hierarchies concisely, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate the boilerplate for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue.
    When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one of the available class constructors, and send the instance to ERS. This paper presents the original design solutions exploited in the ERS implementation and describes how it was used during the first ATLAS run period. The cross-system error reporting standardization introduced by ERS was one of the key points for the successful implementation of automated mechanisms for online error recovery.
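The one-line-declaration idea can be illustrated in Python, which can generate classes at run time rather than via compile-time macros. This is our own hypothetical sketch of the pattern, not the ERS API:

```python
def declare_issue(name, base=Exception, fields=()):
    """Generate an exception class with required named parameters from a
    one-line declaration (a hypothetical Python analogue of the ERS C++
    macros, which produce the equivalent boilerplate at compile time)."""
    def __init__(self, message, **kwargs):
        missing = [f for f in fields if f not in kwargs]
        if missing:
            raise TypeError(f"{name} missing parameters: {missing}")
        self.parameters = dict(kwargs)   # static context about the issue
        Exception.__init__(self, message)
    return type(name, (base,), {"__init__": __init__, "fields": tuple(fields)})

# One line per issue type; subclasses extend the parameter list:
FileIssue = declare_issue("FileIssue", fields=("path",))
PermissionDenied = declare_issue("PermissionDenied", base=FileIssue,
                                 fields=("path", "user"))
```

A handler can then catch `FileIssue` and still receive `PermissionDenied` instances, mirroring the hierarchical catching that the ERS exception classes support.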

  18. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
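The sampling scheme at the core of this approach, Latin Hypercube Sampling, can be sketched as follows. This is a generic implementation of the technique, not REPTool's code:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin Hypercube Sample on [0, 1): each variable's range is divided
    into n_samples equal strata, and each stratum is sampled exactly once,
    giving better coverage than plain Monte Carlo for the same sample size."""
    rng = np.random.default_rng(rng)
    # one point per stratum per variable...
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    # ...then decouple the variables by shuffling each column independently
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

# e.g., 100 stratified draws for two uncertain model inputs:
u = latin_hypercube(100, 2, rng=0)
```

Each column of `u` can then be mapped through the inverse CDF of an input's error distribution to propagate that error through the raster model.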

  19. Improved COD Measurements for Organic Content in Flowback Water with High Chloride Concentrations.

    PubMed

    Cardona, Isabel; Park, Ho Il; Lin, Lian-Shin

    2016-03-01

    An improved method was used to determine chemical oxygen demand (COD) as a measure of organic content in water samples containing high chloride content. A contour plot of COD percent error in the Cl(-)–Cl(-):COD domain showed that COD errors increased with Cl(-):COD. Substantial errors (>10%) could occur in low Cl(-):COD regions (<300) for samples with both low (<10 g/L) and high (>25 g/L) chloride concentrations. Applying the method to flowback water samples yielded COD concentrations ranging from 130 to 1060 mg/L, substantially lower than the previously reported values for flowback water samples from the Marcellus Shale (228 to 21 900 mg/L). It is likely that the overestimation of COD in the previous studies occurred as a result of chloride interference. Pretreatment with mercuric sulfate, use of a low-strength digestion solution, and correction of COD measurements using the contour plot are feasible steps to significantly improve the accuracy of COD measurements.

  20. Self-shading associated with a skylight-blocked approach system for the measurement of water-leaving radiance and its correction.

    PubMed

    Shang, Zhehai; Lee, Zhongping; Dong, Qiang; Wei, Jianwei

    2017-09-01

    Self-shading associated with a skylight-blocked approach (SBA) system for the measurement of water-leaving radiance (Lw) and its correction [Appl. Opt. 52, 1693 (2013)] is characterized by Monte Carlo simulations, and this error is found to be in the range of ~1%-20% under most water properties and solar positions. A model for estimating this shading error is further developed, and a scheme to correct this error based on the shaded measurements is proposed and evaluated. The shade-corrected value in the visible domain is within 3% of the true value, which indicates that the SBA scheme can deliver Lw in the field with both high precision and high accuracy.

  1. State-of-the-Art pH Electrode Quality Control for Measurements of Acidic, Low Ionic Strength Waters.

    ERIC Educational Resources Information Center

    Stapanian, Martin A.; Metcalf, Richard C.

    1990-01-01

    Described is the derivation of the relationship between the pH measurement error and the resulting percentage error in hydrogen ion concentration including the use of variable activity coefficients. The relative influence of the ionic strength of the solution on the percentage error is shown. (CW)
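Ignoring the activity-coefficient corrections the article also treats, the core relationship follows directly from [H+] = 10^(-pH): a pH error of delta changes the apparent hydrogen-ion concentration by a factor of 10^(-delta). A small sketch:

```python
def hplus_percent_error(delta_ph: float) -> float:
    """Percentage error in hydrogen-ion concentration produced by a pH
    measurement error delta_ph (measured pH = true pH + delta_ph),
    assuming [H+] = 10**(-pH), i.e. measured/true = 10**(-delta_ph)."""
    return (10.0 ** (-delta_ph) - 1.0) * 100.0
```

For example, reading the pH 0.01 unit too high understates [H+] by about 2.3%, and reading 0.1 unit too low overstates it by about 26%.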

  2. The Use of IMMUs in a Water Environment: Instrument Validation and Application of 3D Multi-Body Kinematic Analysis in Medicine and Sport

    PubMed Central

    Mangia, Anna Lisa; Cortesi, Matteo; Fantozzi, Silvia; Giovanardi, Andrea; Borra, Davide; Gatta, Giorgio

    2017-01-01

    The aims of the present study were the instrumental validation of inertial-magnetic measurement units (IMMUs) in water, and a description of their use in clinical and sports aquatic applications with customized 3D multi-body models. First, several tests were performed to map the magnetic field in the swimming pool and to identify the best volume for experimental acquisition, with a mean dynamic orientation error lower than 5°. Next, gait and swimming were analyzed in terms of spatiotemporal and joint kinematic variables. Extracting only spatiotemporal parameters highlighted several critical issues, and the joint kinematic information proved to be an added value for both rehabilitation and sport training purposes. Furthermore, 3D joint kinematics obtained with the IMMUs provided quantitative information similar to that of more expensive and bulky systems, but with a simpler and faster setup, a less time-consuming processing phase, and the ability to record and analyze more strides/strokes without the limitations imposed by cameras. PMID:28441739

  3. Integrated Path Differential Absorption Lidar Optimizations Based on Pre-Analyzed Atmospheric Data for ASCENDS Mission Applications

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S.

    2012-01-01

    In this paper a modeling method based on data reduction is investigated, using pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimization and weighting-function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that the data reduction methods employed allow analysis of atmospheric effects over annual spans and full Earth coverage. The effectiveness of the proposed simulation approach is demonstrated for mixing-ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature, water vapor interference, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations that reduce the total error in retrieved XCO2 values.

  4. The Use of IMMUs in a Water Environment: Instrument Validation and Application of 3D Multi-Body Kinematic Analysis in Medicine and Sport.

    PubMed

    Mangia, Anna Lisa; Cortesi, Matteo; Fantozzi, Silvia; Giovanardi, Andrea; Borra, Davide; Gatta, Giorgio

    2017-04-22

    The aims of the present study were the instrumental validation of inertial-magnetic measurement units (IMMUs) in water, and a description of their use in clinical and sports aquatic applications with customized 3D multi-body models. First, several tests were performed to map the magnetic field in the swimming pool and to identify the best volume for experimental acquisition, with a mean dynamic orientation error lower than 5°. Next, gait and swimming were analyzed in terms of spatiotemporal and joint kinematic variables. Extracting only spatiotemporal parameters highlighted several critical issues, and the joint kinematic information proved to be an added value for both rehabilitation and sport training purposes. Furthermore, 3D joint kinematics obtained with the IMMUs provided quantitative information similar to that of more expensive and bulky systems, but with a simpler and faster setup, a less time-consuming processing phase, and the ability to record and analyze more strides/strokes without the limitations imposed by cameras.

  5. The current and ideal state of anatomic pathology patient safety.

    PubMed

    Raab, Stephen Spencer

    2014-01-01

    An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in the clinical and/or laboratory domains. For the pathologist's interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist's cognitive diagnostic process involves image pattern recognition, and a slow-thinking error may be caused by different pathologists applying different rationally constructed mental maps of image criteria/patterns. This type of error is partly related to a system failure to standardize the application of these maps. A fast-thinking error involves the flawed leap from image pattern to an incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive causes of error as well as the technical latent factors that lead to error.

  6. Purification of Logic-Qubit Entanglement

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    Recently, logic-qubit entanglement has shown potential applications in future quantum communication and quantum networks. However, such entanglement suffers from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. A phase-flip error in physical-qubit entanglement is equivalent to a bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may find applications in future quantum communication and quantum networks. PMID:27377165

  7. Activity Tracking for Pilot Error Detection from Flight Data

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Ashford, Rose (Technical Monitor)

    2002-01-01

    This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.

  8. A neural network for real-time retrievals of PWV and LWP from Arctic millimeter-wave ground-based observations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, M. P.; Turner, D. D.; Liljegren, J. C.

    2009-07-01

    This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error percentage of ~5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of ~2.4 g/m²) and a retrieval error varying from 1 to about 10 g/m² when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m².

  9. Assessment of Global Forecast Ocean Assimilation Model (FOAM) using new satellite SST data

    NASA Astrophysics Data System (ADS)

    Ascione Kenov, Isabella; Sykes, Peter; Fiedler, Emma; McConnell, Niall; Ryan, Andrew; Maksymczuk, Jan

    2016-04-01

    There is an increased demand for accurate ocean weather information for applications in the fields of marine safety and navigation, water quality, offshore commercial operations, and monitoring of oil spills and pollutants, among others. The Met Office, UK, provides ocean forecasts to customers from governmental, commercial and ecological sectors using the Global Forecast Ocean Assimilation Model (FOAM), an operational modelling system which covers the global ocean and runs daily, using the NEMO (Nucleus for European Modelling of the Ocean) ocean model with a horizontal resolution of 1/4° and 75 vertical levels. The system assimilates salinity and temperature profiles, sea surface temperature (SST), sea surface height (SSH), and sea ice concentration observations on a daily basis. In this study, the FOAM system is updated to assimilate Advanced Microwave Scanning Radiometer 2 (AMSR2) and Spinning Enhanced Visible and Infrared Imager (SEVIRI) SST data. Model results from one-month trials are assessed against observations using verification tools which provide a quantitative description of model performance and error, based on statistical metrics including mean error, root mean square error (RMSE), correlation coefficient, and Taylor diagrams. A series of hindcast experiments is used to run the FOAM system with AMSR2 and SEVIRI SST data, with a control run for comparison. Results show that all trials perform well over the global ocean and that the largest SST mean errors were found in the Southern Hemisphere. The geographic distribution of the model error for SST and temperature profiles is discussed using statistical metrics evaluated over sub-regions of the global ocean.
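The verification metrics named above (mean error, RMSE, correlation) can be sketched for a pair of model/observation samples. This is a generic illustration of the statistics, not the Met Office verification tools:

```python
import numpy as np

def verification_stats(model, obs):
    """Mean error (bias), RMSE, and Pearson correlation coefficient
    between collocated model and observed values."""
    m = np.asarray(model, dtype=float)
    o = np.asarray(obs, dtype=float)
    err = m - o
    return {
        "mean_error": float(err.mean()),               # signed bias
        "rmse": float(np.sqrt(np.mean(err ** 2))),     # overall error magnitude
        "corr": float(np.corrcoef(m, o)[0, 1]),        # pattern agreement
    }
```

These are also the three quantities a Taylor diagram summarizes graphically (via the centered RMSE, standard deviations, and correlation).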

  10. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. A monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random errors than to systematic errors. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
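The two corruption schemes can be sketched as follows. This is our own minimal reading of the setup, with both knobs in one hypothetical helper: a systematic offset proportional to the mean monthly precipitation, plus independent Gaussian noise scaled to the monthly standard deviation:

```python
import numpy as np

def corrupt_precipitation(p_monthly, systematic_frac=0.0, random_frac=0.0, rng=None):
    """Return a corrupted precipitation scenario.

    systematic_frac: fraction of the mean monthly precipitation added to
        every month (e.g., 0.05-0.15 as in the study's systematic errors).
    random_frac: the Gaussian noise standard deviation as a fraction of the
        monthly standard deviation of precipitation (independent by month).
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(p_monthly, dtype=float)
    corrupted = p + systematic_frac * p.mean()
    corrupted = corrupted + rng.normal(0.0, random_frac * p.std(), size=p.shape)
    return np.clip(corrupted, 0.0, None)   # precipitation cannot be negative
```

Running the calibrated model on many such scenarios, and recalibrating on the corrupted inputs, reproduces the kind of sensitivity experiment described above.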

  11. Ensemble streamflow assimilation with the National Water Model.

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, A.; McCreight, J. L.; Noh, S.; Seo, D. J.; Gochis, D.

    2017-12-01

    Through case studies of flooding across the US, we compare the performance of the National Water Model (NWM) data assimilation (DA) scheme to that of a newly implemented ensemble Kalman filter approach. The NOAA National Water Model (NWM) is an operational implementation of the community WRF-Hydro modeling system. Since August 2016, NWM forecasts of distributed hydrologic states and fluxes (including soil moisture, snowpack, ET, and ponded water) over the contiguous United States have been publicly disseminated by the National Center for Environmental Prediction (NCEP). The NWM also provides streamflow forecasts at more than 2.7 million river reaches up to 30 days in advance. The NWM employs a nudging scheme to assimilate more than 6,000 USGS streamflow observations and provide initial conditions for its forecasts. A problem with nudging is that the forecasts relax quickly back to the open-loop (biased) model state. This has been partially addressed by an experimental bias correction approach, which was found to have issues with phase errors during flooding events. In this work, we present an ensemble streamflow data assimilation approach combining the new channel-only capabilities of the NWM and HydroDART (a coupling of the offline WRF-Hydro model and NCAR's Data Assimilation Research Testbed; DART). Our approach focuses on the single model state of discharge and incorporates error distributions on channel influxes (overland and groundwater) in the assimilation via an ensemble Kalman filter (EnKF). To avoid the filter degeneracy associated with a limited ensemble size at large scale, DART's covariance inflation (Anderson, 2009) and localization capabilities are implemented and evaluated. The current NWM data assimilation scheme is compared to preliminary results from the EnKF application for several flooding case studies across the US.
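The EnKF analysis step for a single gauged reach can be sketched as follows. This is a generic stochastic (perturbed-observation) EnKF update, not the HydroDART implementation, and it omits the inflation and localization mentioned above:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, h_index, rng=None):
    """Stochastic EnKF analysis for one observed state (e.g., discharge at a
    USGS gauge). ensemble: (n_members, n_states) array of discharge states;
    obs: the gauge observation, taken at state index h_index."""
    rng = np.random.default_rng(rng)
    X = np.array(ensemble, dtype=float)
    n = X.shape[0]
    hx = X[:, h_index]                          # predicted observation per member
    x_anom = X - X.mean(axis=0)
    h_anom = hx - hx.mean()
    cov_xh = x_anom.T @ h_anom / (n - 1)        # cov(each state, predicted obs)
    gain = cov_xh / (h_anom @ h_anom / (n - 1) + obs_err_var)   # Kalman gain
    y = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=n)     # perturbed obs
    return X + np.outer(y - hx, gain)           # analysis ensemble

# Toy prior: 50 members, 3 reaches; the gauge on reach 0 observes 15.0 m^3/s.
prior = np.random.default_rng(42).normal(10.0, 2.0, size=(50, 3))
posterior = enkf_update(prior, obs=15.0, obs_err_var=0.01, h_index=0, rng=1)
```

The same gain vector spreads the gauge innovation to the ungauged reaches in proportion to their sampled covariance with the gauged one; localization would taper that spread with distance.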

  12. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    PubMed Central

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. These methods therefore require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through "magnitude-based" methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
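The combination logic can be illustrated with a toy sketch (not the authors' reconstruction pipeline): a magnitude-only fit is ambiguous between a fat-fraction f and 1 − f, and the phase-sensitive complex estimate selects the correct branch while the magnitude estimate supplies the phase-error-free value.

```python
def hybrid_fat_fraction(ff_complex, ff_magnitude):
    """Combine a complex-based and a magnitude-based fat-fraction estimate.

    ff_complex   : unambiguous 0-1 estimate, but biased by phase errors
    ff_magnitude : phase-error-free estimate, folded into 0-0.5
                   (a magnitude fit cannot tell f from 1 - f)
    """
    if ff_complex > 0.5:
        return 1.0 - ff_magnitude   # fat-dominant voxel: take the upper branch
    return ff_magnitude             # water-dominant voxel: take the lower branch
```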

  13. System reliability and recovery.

    DOT National Transportation Integrated Search

    1971-06-01

    The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...

  14. Characterization of Turbulent Processes by the Raman Lidar System Basil in the Frame of the HD(CP)2 Observational Prototype Experiment - Hope

    NASA Astrophysics Data System (ADS)

    Di Girolamo, Paolo; Summa, Donato; Stelitano, Dario; Cacciani, Marco; Scoccione, Andrea; Behrendt, Andreas; Wulfmeyer, Volker

    2016-06-01

Measurements carried out by the Raman lidar system BASIL are reported to demonstrate the capability of this instrument to characterize turbulent processes within the Convective Boundary Layer (CBL). In order to resolve the vertical profiles of turbulent variables, high resolution water vapour and temperature measurements, with a temporal resolution of 10 s and vertical resolutions of 90 and 210 m, respectively, are considered. Measurements of higher-order moments of the turbulent fluctuations of water vapour mixing ratio and temperature are obtained based on the application of spectral and auto-covariance analyses to the water vapour mixing ratio and temperature time series. The algorithms are applied to a case study (IOP 5, 20 April 2013) from the HD(CP)2 Observational Prototype Experiment (HOPE), held in Central Germany in the spring of 2013. The noise errors are demonstrated to be small enough to allow the derivation of up to fourth-order moments for both water vapour mixing ratio and temperature fluctuations with sufficient accuracy.

  15. Mapping global surface water inundation dynamics using synergistic information from SMAP, AMSR2 and Landsat

    NASA Astrophysics Data System (ADS)

    Du, J.; Kimball, J. S.; Galantowicz, J. F.; Kim, S.; Chan, S.; Reichle, R. H.; Jones, L. A.; Watts, J. D.

    2017-12-01

A method to monitor global land surface water (fw) inundation dynamics was developed by exploiting the enhanced fw sensitivity of L-band (1.4 GHz) passive microwave observations from the Soil Moisture Active Passive (SMAP) mission. The L-band fw (fwLBand) retrievals were derived using SMAP H-polarization brightness temperature (Tb) observations and predefined L-band reference microwave emissivities for water and land endmembers. Potential soil moisture and vegetation contributions to the microwave signal were represented using overlapping higher frequency Tb observations from AMSR2. The resulting fwLBand global record has high temporal sampling (1-3 days) and 36-km spatial resolution. The fwLBand annual averages corresponded favourably (R=0.84, p<0.001) with a 250-m resolution static global water map (MOD44W) aggregated at the same spatial scale, while capturing significant inundation variations worldwide. The monthly fwLBand averages also showed seasonal inundation changes consistent with river discharge records within six major US river basins. An uncertainty analysis indicated generally reliable fwLBand performance for major land cover areas and under low to moderate vegetation cover, but lower accuracy for detecting water bodies covered by dense vegetation. Finer resolution (30-m) fwLBand results were obtained for three sub-regions in North America using an empirical downscaling approach and the ancillary global Water Occurrence Dataset (WOD) derived from the historical Landsat record. The resulting 30-m fwLBand retrievals showed favourable classification accuracy for water (commission error 31.84%; omission error 28.08%) and land (commission error 0.82%; omission error 0.99%) during seasonal wet and dry periods when compared to independent water maps derived from Landsat-8 imagery. 
The new fwLBand algorithms and continuing SMAP and AMSR2 operations provide for near real-time, multi-scale monitoring of global surface water inundation dynamics, potentially benefiting hydrological monitoring, flood assessments, and global climate and carbon modeling.
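The endmember retrieval described above amounts to a two-endmember linear mixing model applied to the observed effective emissivity. The sketch below assumes that form; the endmember emissivity values are illustrative placeholders, not the calibrated L-band references used in the SMAP fw algorithm.

```python
def water_fraction(tb, t_eff, e_land=0.95, e_water=0.55):
    """Fractional open-water cover from a two-endmember linear mixing model.

    tb    : observed H-pol brightness temperature (K)
    t_eff : effective surface temperature (K)
    The default emissivities are illustrative, not SMAP-calibrated values.
    """
    e_obs = tb / t_eff                          # observed effective emissivity
    fw = (e_land - e_obs) / (e_land - e_water)  # linear unmixing of the two endmembers
    return min(max(fw, 0.0), 1.0)               # clamp to the physical range
```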

  16. Total energy expenditure in burned children using the doubly labeled water technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goran, M.I.; Peters, E.J.; Herndon, D.N.

Total energy expenditure (TEE) was measured in 15 burned children with the doubly labeled water technique. Application of the technique in burned children required evaluation of potential errors resulting from nutritional intake altering background enrichments during studies and from the high rate of water turnover relative to CO2 production. Five studies were discarded because of these potential problems. TEE was 1.33 +/- 0.27 times predicted basal energy expenditure (BEE), and in studies where resting energy expenditure (REE) was simultaneously measured, TEE was 1.18 +/- 0.17 times REE, which in turn was 1.16 +/- 0.10 times predicted BEE. TEE was significantly correlated with measured REE (r2 = 0.92) but not with predicted BEE. These studies substantiate the advantage of measuring REE to predict TEE in severely burned patients as opposed to relying on standardized equations. Therefore we recommend that optimal nutritional support will be achieved in convalescent burned children by multiplying REE by an activity factor of 1.2.

  17. Acoustothermometric study of the human hand under hyperthermia and hypothermia

    NASA Astrophysics Data System (ADS)

    Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Dvornikova, M. V.; Dvornikova, V. V.; Kazanskii, A. S.; Kuryatnikova, N. A.; Mansfel'd, A. D.

    2013-01-01

    The results of an acoustothermometric study of the human hand under local hyperthermia and hypothermia are presented. The test subjects immersed their hands in hot or cold water for several minutes. Thermal acoustic radiation was detected by two sensors placed near the palm and near the back of the tested hand. The internal temperature profiles of the hand were reconstructed. The indirect estimate of the reconstruction error was 0.6°C, which is acceptable for medical applications. Hyperthermia was achieved by placing the hand in water with a maximal temperature of 44°C for 2 min. In this case, the internal temperature was 35.4 ± 0.6°C. Hypothermia was achieved by placing the hand in water with a temperature of 17.8°C for 15 min. In this case, the internal temperature decreased from 26 to 24°C. The use of a four-sensor planar receiving array allowed dynamic mapping of the acoustic brightness temperature of the hand.

  18. A controlled statistical study to assess measurement variability as a function of test object position and configuration for automated surveillance in a multicenter longitudinal COPD study (SPIROMICS).

    PubMed

    Guo, Junfeng; Wang, Chao; Chan, Kung-Sik; Jin, Dakai; Saha, Punam K; Sieren, Jered P; Barr, R G; Han, MeiLan K; Kazerooni, Ella; Cooper, Christopher B; Couper, David; Newell, John D; Hoffman, Eric A

    2016-05-01

    A test object (phantom) is an important tool to evaluate comparability and stability of CT scanners used in multicenter and longitudinal studies. However, there are many sources of error that can interfere with the test object-derived quantitative measurements. Here the authors investigated three major possible sources of operator error in the use of a test object employed to assess pulmonary density-related as well as airway-related metrics. Two kinds of experiments were carried out to assess measurement variability caused by imperfect scanning status. The first one consisted of three experiments. A COPDGene test object was scanned using a dual source multidetector computed tomographic scanner (Siemens Somatom Flash) with the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS) inspiration protocol (120 kV, 110 mAs, pitch = 1, slice thickness = 0.75 mm, slice spacing = 0.5 mm) to evaluate the effects of tilt angle, water bottle offset, and air bubble size. After analysis of these results, a guideline was reached in order to achieve more reliable results for this test object. Next the authors applied the above findings to 2272 test object scans collected over 4 years as part of the SPIROMICS study. The authors compared changes of the data consistency before and after excluding the scans that failed to pass the guideline. This study established the following limits for the test object: tilt index ≤0.3, water bottle offset limits of [-6.6 mm, 7.4 mm], and no air bubble within the water bottle, where tilt index is a measure incorporating two tilt angles around x- and y-axis. With 95% confidence, the density measurement variation for all five interested materials in the test object (acrylic, water, lung, inside air, and outside air) resulting from all three error sources can be limited to ±0.9 HU (summed in quadrature), when all the requirements are satisfied. 
The authors applied these criteria to 2272 SPIROMICS scans and demonstrated a significant reduction in measurement variation associated with the test object. Three operator errors were identified which significantly affected the usability of the acquired scan images of the test object used for monitoring scanner stability in a multicenter study. The authors' results demonstrated that at the time of test object scan receipt at a radiology core laboratory, quality control procedures should include an assessment of tilt index, water bottle offset, and air bubble size within the water bottle. Application of this methodology to 2272 SPIROMICS scans indicated that their findings were not limited to the scanner make and model used for the initial test but were generalizable to both Siemens and GE scanners, which comprise the scanner types used within the SPIROMICS study.
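The "summed in quadrature" combination of the three independent error sources mentioned above is the standard root-sum-square. In the sketch below, the per-source HU contributions are hypothetical, since the abstract reports only the combined ±0.9 HU bound:

```python
import math

def quadrature_sum(errors):
    """Root-sum-square combination of independent error contributions."""
    return math.sqrt(sum(e * e for e in errors))
```

For example, hypothetical per-source contributions of 0.6, 0.5, and 0.4 HU combine to roughly 0.88 HU, inside the reported ±0.9 HU limit.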

  19. A controlled statistical study to assess measurement variability as a function of test object position and configuration for automated surveillance in a multicenter longitudinal COPD study (SPIROMICS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Junfeng; Newell, John D.; Wang, Chao

    Purpose: A test object (phantom) is an important tool to evaluate comparability and stability of CT scanners used in multicenter and longitudinal studies. However, there are many sources of error that can interfere with the test object-derived quantitative measurements. Here the authors investigated three major possible sources of operator error in the use of a test object employed to assess pulmonary density-related as well as airway-related metrics. Methods: Two kinds of experiments were carried out to assess measurement variability caused by imperfect scanning status. The first one consisted of three experiments. A COPDGene test object was scanned using a dual source multidetector computed tomographic scanner (Siemens Somatom Flash) with the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS) inspiration protocol (120 kV, 110 mAs, pitch = 1, slice thickness = 0.75 mm, slice spacing = 0.5 mm) to evaluate the effects of tilt angle, water bottle offset, and air bubble size. After analysis of these results, a guideline was reached in order to achieve more reliable results for this test object. Next the authors applied the above findings to 2272 test object scans collected over 4 years as part of the SPIROMICS study. The authors compared changes of the data consistency before and after excluding the scans that failed to pass the guideline. Results: This study established the following limits for the test object: tilt index ≤0.3, water bottle offset limits of [−6.6 mm, 7.4 mm], and no air bubble within the water bottle, where tilt index is a measure incorporating two tilt angles around x- and y-axis. With 95% confidence, the density measurement variation for all five interested materials in the test object (acrylic, water, lung, inside air, and outside air) resulting from all three error sources can be limited to ±0.9 HU (summed in quadrature), when all the requirements are satisfied. 
The authors applied these criteria to 2272 SPIROMICS scans and demonstrated a significant reduction in measurement variation associated with the test object. Conclusions: Three operator errors were identified which significantly affected the usability of the acquired scan images of the test object used for monitoring scanner stability in a multicenter study. The authors' results demonstrated that at the time of test object scan receipt at a radiology core laboratory, quality control procedures should include an assessment of tilt index, water bottle offset, and air bubble size within the water bottle. Application of this methodology to 2272 SPIROMICS scans indicated that their findings were not limited to the scanner make and model used for the initial test but were generalizable to both Siemens and GE scanners, which comprise the scanner types used within the SPIROMICS study.

  20. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method locates the regions where truncation error is being created by insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement, and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  1. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  2. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    NASA Astrophysics Data System (ADS)

    Brian Leen, J.; Berman, Elena S. F.; Liebson, Lindsay; Gupta, Manish

    2012-04-01

    Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to that absorbed by the water isotopomers, leading to δ2H and δ18O measurement errors (Δδ2H and Δδ18O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, mBB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, mNB. These metrics are used to correct for Δδ2H and Δδ18O. The method was tested on 14 instruments and Δδ18O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ2H scaled linearly with narrowband and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with mNB. Using the isotope error versus mNB and mBB curves, Δδ2H and Δδ18O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93‰ and 0.25‰, respectively, while Δδ2H and Δδ18O from ethanol contamination were corrected to a maximum mean absolute error of 1.22‰ and 0.22‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. 
The general technique may also be extended to other laser-based analyzers including methane and carbon dioxide isotope sensors.

  3. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  4. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The MARS model decreased the average root mean square error (RMSE) of the LSSVM and M5Tree models by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.
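The percentage improvements quoted above follow directly from average RMSE values. A minimal sketch of the two metrics (the data in the test are placeholders, not the study's observations):

```python
import math

def rmse(observed, simulated):
    """Root mean square error between paired series."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

def pct_decrease(rmse_ref, rmse_new):
    """Percentage by which rmse_new improves on rmse_ref."""
    return 100.0 * (rmse_ref - rmse_new) / rmse_ref
```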

  5. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  6. Determination of Tide Heights from Airborne Bathymetric Data

    DTIC Science & Technology

    1989-12-01

    …soundings made to a chart datum. The chart datum is a "tide based" plane which usually corresponds to some mean of the low waters for the local tidal regime. A low water plane is used so depths published on a nautical chart are shown in their least favorable aspect. If the chart datum is very

  7. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, so error measurement is an important problem in both the production and the application of the device. At present, the error is mainly obtained by manual measurement, which suffers from high labour intensity for the operator, easily introduced operator errors and poor repeatability. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. This method eliminates the measuring and calculating errors caused by human factors and makes the measuring process quicker, more exact and more reliable. Experiments show that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).

  8. New Examination of the Traditional Raman Lidar Technique II: Evaluating the Ratios for Water Vapor and Aerosols

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.

    2003-01-01

    In a companion paper, the temperature dependence of Raman scattering and its influence on the Raman and Rayleigh-Mie lidar equations was examined. New forms of the lidar equation were developed to account for this temperature sensitivity. Here those results are used to derive the temperature dependent forms of the equations for the water vapor mixing ratio, aerosol scattering ratio, aerosol backscatter coefficient, and extinction to backscatter ratio (Sa). The error equations are developed, the influence of differential transmission is studied and different laser sources are considered in the analysis. The results indicate that the temperature functions become significant when using narrowband detection. Errors of 5% and more can be introduced in the water vapor mixing ratio calculation at high altitudes and errors larger than 10% are possible for calculations of aerosol scattering ratio and thus aerosol backscatter coefficient and extinction to backscatter ratio.

  9. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
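The reported ~30% reduction in the time-averaging error when the number of transits is doubled is consistent with a simple 1/√n scaling of a random error. The sketch below assumes that scaling, which the abstract does not state explicitly:

```python
import math

def time_averaging_error(err_one_transit, n_transits):
    """Random time-averaging error under an assumed 1/sqrt(n) scaling
    with the number of transits collected at a vertical."""
    return err_one_transit / math.sqrt(n_transits)
```

Under this assumption, doubling transits reduces the error by 1 − 1/√2 ≈ 29%, matching the ~30% figure above.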

  10. A comparison of two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund

    NASA Astrophysics Data System (ADS)

    Luks, B.; Osuch, M.; Romanowicz, R. J.

    2012-04-01

    We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically-based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalence and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). These variables are used for physically-based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed that follows the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of uncertainty of both model outputs. In the time series approach, the applied techniques also provide estimates of the modelling errors and the uncertainty of the model parameters. The physically-based UEB model, in contrast, is deterministic: it assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take into account the model and observation errors, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique, which also provides estimates of the modelling errors and the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by the National Science Centre of Poland (grant no. 
7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Ed.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/). 64 pp.
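
    The GLUE procedure described above can be sketched in a few lines. Everything below is illustrative, not the authors' actual setup: the degree-day style snowmelt surrogate (not UEB), the Nash-Sutcliffe likelihood, the 0.7 behavioral threshold, and all variable names are assumptions. The pattern is: sample parameter sets, score each against observations, keep the "behavioral" sets, and form likelihood-weighted 95% bounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical degree-day style snowmelt surrogate (NOT the UEB model):
    # SWE accumulates from snowfall and melts in proportion to positive temperature.
    def swe_model(melt_factor, temp, precip):
        swe = np.zeros(temp.size)
        for t in range(1, temp.size):
            melt = melt_factor * max(temp[t], 0.0)
            swe[t] = max(swe[t - 1] + precip[t] - melt, 0.0)
        return swe

    days = 120
    temp = 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, days)) - 1.0
    precip = rng.exponential(2.0, days) * (temp < 0)        # snowfall on cold days
    obs = swe_model(1.8, temp, precip) + rng.normal(0.0, 2.0, days)  # noisy "observations"

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency, used here as an informal GLUE likelihood."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # GLUE: Monte Carlo sampling of the parameter, retain "behavioral" sets
    samples = rng.uniform(0.5, 4.0, 2000)
    sims = np.array([swe_model(m, temp, precip) for m in samples])
    scores = np.array([nse(s, obs) for s in sims])
    behavioral = scores > 0.7
    bsims = sims[behavioral]
    weights = scores[behavioral] / scores[behavioral].sum()

    # Likelihood-weighted 2.5% and 97.5% bounds at each time step
    lower = np.empty(days)
    upper = np.empty(days)
    for t in range(days):
        order = np.argsort(bsims[:, t])
        cum = np.cumsum(weights[order])
        vals = bsims[order, t]
        lower[t] = vals[np.searchsorted(cum, 0.025)]
        upper[t] = vals[min(np.searchsorted(cum, 0.975), vals.size - 1)]
    ```

    The bounds widen where the simulations disagree most (typically around peak SWE and melt-out), which is exactly where the abstract reports the largest uncertainty.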

  11. Application of artificial neural networks for the prediction of volume fraction using spectra of gamma rays backscattered by three-phase flows

    NASA Astrophysics Data System (ADS)

    Gholipour Peyvandi, R.; Islami Rad, S. Z.

    2017-12-01

    The determination of the volume fraction percentage of the different phases flowing in vessels using transmission gamma rays is a conventional method in the petroleum and oil industries. In some cases, with access to only one side of the vessel, backscattered gamma rays become a desirable alternative. In this research, the volume fraction percentage was measured precisely in water-gasoil-air three-phase flows by using the backscatter gamma-ray technique and the multilayer perceptron (MLP) neural network. Volume fraction determination in three-phase flows normally requires two gamma radioactive sources or a dual-energy source (with different energies), while in this study we used just a single-energy 137Cs source and a NaI detector to analyze the backscattered gamma rays. The experimental set-up provided the data required for training and testing the network. Using the presented method, the volume fraction was predicted with a mean relative error of less than 6.47%, and the root mean square error was calculated as 1.60. The presented set-up is applicable in industries with limited access, and using this technique the cost, radiation safety and shielding requirements are reduced relative to other proposed methods.
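
    The regression step of such a method can be illustrated with a small MLP written from scratch. The synthetic "spectra" below (three count windows that depend linearly on the water and gasoil fractions) and all names are invented stand-ins for the paper's measured backscatter data; only the idea, mapping detector counts to phase fractions with a trained perceptron, is taken from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for backscatter spectra: 3 "count windows" whose mix
    # depends on the water and gasoil fractions (air = remainder).
    n = 400
    water = rng.uniform(0.0, 1.0, n)
    gasoil = rng.uniform(0.0, 1.0 - water)
    X = np.column_stack([
        50 + 30 * water + rng.normal(0, 1, n),
        40 + 25 * gasoil + rng.normal(0, 1, n),
        60 - 20 * (water + gasoil) + rng.normal(0, 1, n),
    ])
    y = np.column_stack([water, gasoil])

    # One-hidden-layer MLP trained by full-batch gradient descent on MSE loss
    X = (X - X.mean(0)) / X.std(0)          # standardize the inputs
    W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
    lr = 0.05

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, h @ W2 + b2

    _, pred0 = forward(X)
    loss0 = np.mean((pred0 - y) ** 2)       # loss before training
    for _ in range(500):
        h, pred = forward(X)
        g = 2 * (pred - y) / n              # dLoss/dpred
        gW2 = h.T @ g; gb2 = g.sum(0)
        gh = g @ W2.T * (1 - h ** 2)        # backprop through tanh
        gW1 = X.T @ gh; gb1 = gh.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    loss = np.mean((pred - y) ** 2)         # loss after training
    ```

    In the paper's setting, the inputs would be features of the measured backscatter spectrum and the targets the gravimetrically known volume fractions from the calibration rig.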

  12. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.
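
    One of the simpler techniques in the suite above, empirical quantile mapping, can be sketched on synthetic flows. The distributions and variable names below are invented for the illustration (they are not SHARP output): a new forecast value is assigned its quantile in the forecast hindcast climatology, then replaced by the same quantile of the observed climatology.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hindcast climatologies: forecasts are biased high at low flows and under-dispersed
    obs = rng.gamma(2.0, 50.0, 5000)                          # "observed" flows
    fcst_hindcast = 0.6 * rng.gamma(2.0, 50.0, 5000) + 20.0   # biased forecasts

    # Build the empirical quantile-to-quantile transfer function
    q = np.linspace(0.0, 1.0, 1001)
    fcst_q = np.quantile(fcst_hindcast, q)
    obs_q = np.quantile(obs, q)

    def quantile_map(x):
        """Map forecast values onto the observed climatology, quantile by quantile."""
        return np.interp(x, fcst_q, obs_q)

    # Apply to a fresh batch of (equally biased) forecasts
    new_fcst = 0.6 * rng.gamma(2.0, 50.0, 2000) + 20.0
    corrected = quantile_map(new_fcst)
    ```

    The mapping removes the systematic bias and restores the observed spread, which is the sense in which post-processing improves reliability; it cannot, by itself, improve discrimination.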

  13. Application of Biologically-Based Lumping To Investigate the ...

    EPA Pesticide Factsheets

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic

  14. Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…

  15. Fault-Tolerant Computing: An Overview

    DTIC Science & Technology

    1991-06-01

    Addison Wesley, Reading, MA, 1984. [8] J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications, (Elsevier North Holland, Inc., New York)... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [11]. Contrary to

  16. Application of a Geographic Information System for regridding a ground-water flow model of the Columbia Plateau Regional Aquifer System, Walla Walla River basin, Oregon-Washington

    USGS Publications Warehouse

    Darling, M.E.; Hubbard, L.E.

    1994-01-01

    Computerized Geographic Information Systems (GIS) have become viable and valuable tools for managing, analyzing, creating, and displaying data for three-dimensional finite-difference ground-water flow models. Three GIS applications demonstrated in this study are: (1) regridding of data arrays from an existing large-area, low-resolution ground-water model to a smaller, high-resolution grid; (2) use of GIS techniques for assembly of data-input arrays for a ground-water model; and (3) use of GIS for rapid display of data for verification, for checking of ground-water model output, and for the creation of customized maps for use in reports. The Walla Walla River Basin was selected as the location for the demonstration because (1) data from a low-resolution ground-water model (Columbia Plateau Regional Aquifer System Analysis [RASA]) were available, and (2) there was concern about long-term use of water resources for irrigation in the basin. The principal advantage of regridding is that it may provide the ability to calibrate a model more precisely, assuming that a more detailed coverage of data is available, and to evaluate the numerical errors associated with a particular grid design. Regridding gave about an 8-fold increase in grid-node density. Several FORTRAN programs were developed to load the regridded ground-water data into a finite-difference modular model as model-compatible input files for use in a steady-state model run. To facilitate the checking and validating of the GIS regridding process, maps and tabular reports were produced for each of eight ground-water parameters by model layer. Also, an automated subroutine that was developed to view the model-calculated water levels in cross-section will aid in the synthesis and interpretation of model results.

  17. An International Survey of Industrial Applications of Formal Methods. Volume 2. Case Studies

    DTIC Science & Technology

    1993-09-30

    impact of the product on IBM revenues. 4. Error rates were claimed to be below the industrial average and errors were minimal to fix. Formal methods, as... critical applications. These include: i) "Software failures, particularly under first use, seem... project to add improved modelling capability. ... Design and Implementation: These products are being

  18. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    NASA Astrophysics Data System (ADS)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

    We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in western Oregon, which included six days of AirSWOT flights over a 75 km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacy of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially averaged time series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested into a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90 m).

  19. Assessing and measuring wetland hydrology

    USGS Publications Warehouse

    Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.

    2013-01-01

    Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.

  20. Uncertainty Propagation of Non-Parametric-Derived Precipitation Estimates into Multi-Hydrologic Model Simulations

    NASA Astrophysics Data System (ADS)

    Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.

    2017-12-01

    Quantifying the uncertainty of global precipitation datasets is beneficial when using these products in hydrological applications, because precipitation uncertainty propagating through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research, the Iberian Peninsula was used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. The QRF technique utilizes three satellite precipitation products (CMORPH, PERSIANN, and 3B42 V7), an atmospheric reanalysis precipitation and air temperature dataset, satellite-derived near-surface daily soil moisture data, and a terrain elevation dataset. A high-resolution precipitation dataset driven by ground-based observations (SAFRAN), available at 5 km/1 h resolution, is used as the reference. Through the QRF blending framework, the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures and manifests as a reduction of error in the hydrologic variables.

  1. Extending high-order flux operators on spherical icosahedral grids and their application in a Shallow Water Model for transporting the Potential Vorticity

    NASA Astrophysics Data System (ADS)

    Zhang, Y.

    2017-12-01

    The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme, and the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than the second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present optimal results.
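
    The flavor of these upwind-biased flux operators can be seen in one dimension. The sketch below uses a nominally fifth-order operator of the Wicker-Skamarock family (the sixth-order centered flux plus an upwind dissipation term) with WRF-style three-stage Runge-Kutta stepping on a periodic grid; the grid size, Courant number, and test profile are illustrative choices, not the paper's spherical icosahedral configuration.

    ```python
    import numpy as np

    def flux5(q, u):
        """Upwind-biased 5th-order flux at interface i+1/2 (periodic, u constant)."""
        qm2, qm1, qp1, qp2, qp3 = (np.roll(q, s) for s in (2, 1, -1, -2, -3))
        # 6th-order centered flux ...
        f6 = u / 60.0 * (37.0 * (q + qp1) - 8.0 * (qm1 + qp2) + (qm2 + qp3))
        # ... plus an upwind dissipation term that makes it nominally 5th order
        return f6 - abs(u) / 60.0 * (10.0 * (qp1 - q) - 5.0 * (qp2 - qm1) + (qp3 - qm2))

    def tendency(q, u, dx):
        f = flux5(q, u)                      # f[i] holds the flux at i+1/2
        return -(f - np.roll(f, 1)) / dx     # flux-form divergence conserves mass

    nx, dx, u, dt = 100, 1.0, 1.0, 0.5       # Courant number 0.5
    x = np.arange(nx) * dx
    q = np.sin(2.0 * np.pi * x / (nx * dx))
    q0 = q.copy()

    # Three-stage Runge-Kutta time stepping, one full revolution of the domain
    for _ in range(int(nx * dx / (u * dt))):
        q1 = q + dt / 3.0 * tendency(q, u, dx)
        q2 = q + dt / 2.0 * tendency(q1, u, dx)
        q = q + dt * tendency(q2, u, dx)

    err = np.max(np.abs(q - q0))             # error after one revolution
    mass_drift = abs(q.sum() - q0.sum())     # flux form: conserved to roundoff
    ```

    The dissipation term plays the damping role discussed in the abstract: raising its coefficient smooths grid-scale noise at the cost of extra diffusion of the transported field.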

  2. Atmospheric Phase Delay in Sentinel SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Krishnakumar, V.; Monserrat, O.; Crosetto, M.; Crippa, B.

    2018-04-01

    The repeat-pass Synthetic Aperture Radio Detection and Ranging (RADAR) Interferometry (InSAR) has been a widely used geodetic technique for observing the Earth's surface, especially for mapping the Earth's topography and deformations. However, InSAR measurements are prone to atmospheric errors. RADAR waves traverse the Earth's atmosphere twice and experience a delay due to atmospheric refraction. The two major layers of the atmosphere (troposphere and ionosphere) are mainly responsible for this delay in the propagating RADAR wave. Previous studies have shown that water vapour and clouds present in the troposphere and the Total Electron Content (TEC) of the ionosphere are responsible for the additional path delay in the RADAR wave. The tropospheric refractivity depends mainly on pressure, temperature and the partial pressure of water vapour, and it leads to an increase in the observed range. These induced propagation delays affect the quality of the phase measurement and introduce errors in the topography and deformation fields. The effect of this delay was studied on a differential interferogram (DInSAR). To calculate the tropospheric delay, meteorological data collected from the Spanish Agencia Estatal de Meteorología (AEMET) and MODIS were used. Interferograms were generated from Sentinel-1 C-band Synthetic Aperture RADAR Single Look Complex (SLC) images acquired over the study area, which consists of different types of scatterers exhibiting different coherence. The existing Saastamoinen model was used to perform a quantitative evaluation of the phase changes caused by the pressure, temperature and humidity of the troposphere during the study. Unless the phase values affected by atmospheric disturbances are corrected, it is difficult to obtain accurate measurements. 
Thus, atmospheric error correction is essential for all practical applications of DInSAR to avoid inaccurate height and deformation measurements.
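
    The Saastamoinen model referenced above has a compact closed form for the tropospheric delay. The sketch below implements the simple version, zenith delay scaled by sec(z), and omits the model's small height/latitude correction terms; the input values are illustrative standard sea-level conditions, not the AEMET data used in the study.

    ```python
    import math

    def saastamoinen_delay(pressure_hpa, temp_k, e_hpa, zenith_rad=0.0):
        """Saastamoinen tropospheric delay in metres (one-way).

        pressure_hpa: total surface pressure; temp_k: surface temperature;
        e_hpa: partial pressure of water vapour. This simple form drops the
        height/latitude correction terms of the full model.
        """
        return 0.002277 / math.cos(zenith_rad) * (
            pressure_hpa + (1255.0 / temp_k + 0.05) * e_hpa
        )

    # Standard-ish sea-level conditions: ~2.3 m dry + ~0.1 m wet delay at zenith
    ztd = saastamoinen_delay(1013.25, 288.15, 10.0)
    slant = saastamoinen_delay(1013.25, 288.15, 10.0, zenith_rad=math.radians(30.0))
    ```

    Since the RADAR wave traverses the atmosphere twice, the two-way path delay entering the interferometric phase is twice this value, converted to phase through the radar wavelength.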

  3. Validation of Globsnow-2 Snow Water Equivalent Over Eastern Canada

    NASA Technical Reports Server (NTRS)

    Larue, Fanny; Royer, Alain; De Seve, Danielle; Langlois, Alexandre; Roy, Alexandre R.; Brucker, Ludovic

    2017-01-01

    In Québec, Eastern Canada, snowmelt runoff contributes more than 30% of the annual energy reserve for hydroelectricity production, and uncertainties in annual maximum snow water equivalent (SWE) over the region are one of the main constraints for improved hydrological forecasting. Current satellite-based methods for mapping SWE over Québec's main hydropower basins do not meet Hydro-Québec operational requirements for SWE accuracies with less than 15% error. This paper assesses the accuracy of the GlobSnow-2 (GS-2) SWE product, which combines microwave satellite data and in situ measurements, for hydrological applications in Québec. GS-2 SWE values for a 30-year period (1980 to 2009) were compared with space- and time-matched values from a comprehensive dataset of in situ SWE measurements (a total of 38,990 observations in Eastern Canada). The root mean square error (RMSE) of the GS-2 SWE product is 94.1 ± 20.3 mm, corresponding to an overall relative percentage error (RPE) of 35.9%. The main sources of uncertainty are wet and deep snow conditions (when SWE is higher than 150 mm), and forest cover type. However, compared to a typical stand-alone brightness temperature channel difference algorithm, the assimilation of surface information in the GS-2 algorithm clearly improves SWE accuracy by reducing the RPE by about 30%. Comparison of trends in annual mean and maximum SWE between surface observations and GS-2 over 1980-2009 showed agreement for increasing trends over southern Québec, but less agreement on the sign and magnitude of trends over northern Québec. Extended to a continental scale, the GS-2 SWE trends highlight a strong regional variability.
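
    The two validation scores quoted above are straightforward to compute. The sketch below assumes RPE is RMSE normalized by the mean observed SWE (consistent with the quoted numbers, though the paper's exact definition may differ); the SWE values are made-up illustrations, not GlobSnow-2 data.

    ```python
    import math

    def rmse(sim, obs):
        """Root mean square error between simulated and observed values."""
        return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

    def rpe(sim, obs):
        """Relative percentage error: RMSE normalized by the observed mean."""
        return 100.0 * rmse(sim, obs) / (sum(obs) / len(obs))

    obs = [210.0, 180.0, 250.0, 160.0, 200.0]   # observed SWE, mm (invented)
    sim = [170.0, 150.0, 300.0, 140.0, 260.0]   # retrieved SWE, mm (invented)
    ```

    On these numbers, `rmse` is about 42.4 mm and `rpe` about 21%; the paper's 94.1 mm RMSE against a 35.9% RPE implies a mean observed SWE of roughly 260 mm over the validation sites.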

  4. Neural Networks for Hydrological Modeling Tool for Operational Purposes

    NASA Astrophysics Data System (ADS)

    Bhatt, Divya; Jain, Ashu

    2010-05-01

    Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydropower generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. Runoff is generally computed using rainfall-runoff models, and computer-based hydrologic models have become popular for obtaining hydrological forecasts and for managing water systems. The Rainfall-Runoff Library (RRL) is computer software developed by the Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia, consisting of five different conceptual rainfall-runoff models, and has been in operation in many water resources applications in Australia. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been adopted in operational hydrological forecasting. There is a strong need to develop ANN models based on real catchment data and compare them with the conceptual models actually in use in real catchments. In this paper, the results from an investigation on the use of RRL and ANNs are presented. Out of the five conceptual models in the RRL toolkit, the SimHyd model has been used. A Genetic Algorithm (GA) has been used as an optimizer in the RRL to calibrate the SimHyd model, and trial-and-error procedures were employed to arrive at the best values of the various parameters involved in the GA optimizer. The results obtained from the best configuration of the SimHyd model are presented here. A feed-forward neural network structure trained by the back-propagation algorithm has been adopted to develop the ANN models. The daily rainfall and runoff data derived from the Bird Creek Basin, Oklahoma, USA have been employed to develop all the models included here. 
A wide range of error statistics has been used to evaluate the performance of all the models developed in this study. The ANN models consistently outperformed the conceptual model, indicating that ANNs can be extremely useful tools for modeling the complex rainfall-runoff process in real catchments and should be adopted there for hydrological modeling and forecasting. It is hoped that more research will be carried out to compare the performance of ANN models with the conceptual models actually in use at catchment scales, and that such efforts may go a long way toward making ANNs more acceptable to policy makers, water resources decision makers, and traditional hydrologists.

  5. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028

  6. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  7. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
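
    The Taylor-series idea can be illustrated on the Manning slope-area formula. The sketch below uses only the diagonal (independent-errors) case; the paper's analysis retains the full covariance terms. All numeric inputs are invented for the example.

    ```python
    import math

    # Manning slope-area discharge, SI units: Q = (1/n) * A * R^(2/3) * S^(1/2)
    def discharge(n, A, R, S):
        return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

    # First-order (Taylor-series) relative standard deviation of Q, assuming
    # independent errors; the exponents of the formula become the weights:
    # (sigQ/Q)^2 = (sign/n)^2 + (sigA/A)^2 + (2/3)^2 (sigR/R)^2 + (1/2)^2 (sigS/S)^2
    def rel_std_Q(rel_n, rel_A, rel_R, rel_S):
        return math.sqrt(rel_n ** 2 + rel_A ** 2
                         + (2.0 / 3.0) ** 2 * rel_R ** 2
                         + 0.25 * rel_S ** 2)

    # Hypothetical channel: note how the 1/2 exponent halves the slope (fall) error
    Q = discharge(n=0.035, A=120.0, R=2.5, S=0.002)
    relQ = rel_std_Q(rel_n=0.15, rel_A=0.05, rel_R=0.05, rel_S=0.20)
    ```

    The weights make the paper's point concrete: a 20% error in slope contributes only 10% to discharge, while Manning's n passes through at full strength, which is why its estimation dominates the error budget.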

  8. MODIS-derived spatiotemporal water clarity patterns in optically shallow FloridaKeys waters: A new approach to remove bottom contamination

    EPA Science Inventory

    Retrievals of water quality parameters from satellite measurements over optically shallow waters have been problematic due to bottom contamination of the signals. As a result, large errors are associated with derived water column properties. These deficiencies greatly reduce the ...

  9. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    NASA Astrophysics Data System (ADS)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of remote sensing instruments, the harder they are to calibrate, and this is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and to identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors. Hence, calibrating for variations of the speed of sound in the water column, which are natural in origin, is not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. This paper aims to establish a model for the specific area that can calibrate the errors due to the instruments: we construct a patch-test procedure, identify the possible sources of error in the sounding data, and calculate the error values needed to compensate for them. In general, the problems to be solved are the four patch-test corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch, and (4) yaw. 
Because these four corrections affect each other, each survey line is run to calibrate them; the GPS latency correction synchronizes the GPS with the echo sounder. Using this procedure, future studies of any shallower portion of an area can obtain more accurate sounding values and support more detailed research.

  10. Residual volume on land and when immersed in water: effect on percent body fat.

    PubMed

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. It has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volumes measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in both conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat computed using residual volume measured in both conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
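
    The limits-of-agreement statistic used above is simple to reproduce. The sketch below follows the standard Bland-Altman recipe (bias ± 1.96 standard deviations of the paired differences); the residual-volume values are hypothetical, not the study's data.

    ```python
    import statistics

    def bland_altman_limits(a, b):
        """Mean difference (bias) and 95% limits of agreement between two methods."""
        diffs = [x - y for x, y in zip(a, b)]
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical residual volumes (litres) measured on land vs. in water
    rv_land = [1.21, 1.35, 1.10, 1.48, 1.30, 1.25, 1.40, 1.18]
    rv_water = [1.25, 1.30, 1.18, 1.45, 1.38, 1.22, 1.47, 1.15]

    bias, (lo, hi) = bland_altman_limits(rv_land, rv_water)
    ```

    The study's comparison amounts to exactly this: the land-vs-water interval (-0.430 to 0.508 L) being wider than the trial-to-trial interval on land (-0.260 to 0.304 L) is what shows the two conditions are not interchangeable without cost.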

  11. Understanding diagnostic errors in medicine: a lesson from aviation

    PubMed Central

    Singh, H; Petersen, L A; Thomas, E J

    2006-01-01

    The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463

  12. Optimizations of packed sorbent and inlet temperature for large volume-direct aqueous injection-gas chromatography to determine high boiling volatile organic compounds in water.

    PubMed

    Yu, Bofan; Song, Yonghui; Han, Lu; Yu, Huibin; Liu, Yang; Liu, Hongliang

    2014-08-22

    To expand the application area of fast trace analysis to certain high boiling point (i.e., 150-250 °C) volatile organic compounds (HVOCs) in water, a large volume-direct aqueous injection-gas chromatography (LV-DAI-GC) method was optimized for the following parameters: packed sorbent for on-line sample pretreatment, inlet temperature and detector configuration. Using a composite packed sorbent prepared in-house from lithium chloride and a type of diatomite, the method enabled safe injection of an approximately 50-100 μL sample at an inlet temperature of 150 °C in the splitless mode and separated HVOCs from the water matrix in 2 min. Coupled with a flame ionization detector (FID), an electron capture detector (ECD) and a flame photometric detector (FPD), the method could simultaneously quantify 27 HVOCs belonging to seven subclasses (i.e., halogenated aliphatic hydrocarbons, chlorobenzenes, nitrobenzenes, anilines, phenols, polycyclic aromatic hydrocarbons and organic sulfides) in 26 min. Injecting a 50 μL sample without any enrichment step, such as cryotrap focusing, the limits of quantification (LOQs) for the 27 HVOCs were 0.01-3 μg/L. Replicate analyses of source and river water samples spiked with the 27 HVOCs exhibited good precision (relative standard deviations ≤ 11.3%) and accuracy (relative errors ≤ 17.6%). The optimized LV-DAI-GC was robust and applicable for fast determination and automated continuous monitoring of HVOCs in surface water. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Characteristic study of flat spray nozzle by using particle image velocimetry (PIV) and ANSYS simulation method

    NASA Astrophysics Data System (ADS)

    Pairan, M. Rasidi; Asmuin, Norzelawati; Isa, Nurasikin Mat; Sies, Farid

    2017-04-01

    Water mist sprays are used in a wide range of applications; however, the spray characteristics must suit the particular application. This project studies the water droplet velocity and penetration angle generated by a newly developed mist nozzle with a flat spray pattern. The research was conducted in two parts: experiment and simulation. The experiments used the particle image velocimetry (PIV) method, the simulations were run in ANSYS, and ImageJ software was used to measure the penetration angle. Three combinations of air and water pressure were tested: 1 bar (case A), 2 bar (case B) and 3 bar (case C). The flat spray generated by the new nozzle was examined along a 9 cm vertical line located 8 cm from the nozzle orifice. The detailed analysis shows that the trend of the velocity-versus-distance curves is in good agreement between simulation and experiment for all pressure combinations. As the water and air pressure increased from 1 bar to 2 bar, both the velocity and the penetration angle increased; however, for case C, run at 3 bar, the water droplet velocity increased while the penetration angle decreased. All data were then validated by calculating the error between experiment and simulation. Comparing the simulation data to the experimental data, the standard deviations for cases A, B and C are relatively small: 5.444, 0.8242 and 6.4023, respectively.

  14. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values--for example, checksums--to identify and isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves the commutative error detection values associated with each node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
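
    The comparison step can be illustrated with a small sketch. This is not the patented apparatus; it only assumes that each node produces an order-independent (commutative) detection value, here a simple additive checksum over hypothetical packet payloads, and that values from two runs of the same reproducible program portion are compared node by node:

```python
def commutative_checksum(values):
    # An additive checksum is commutative: the order in which a node's
    # packets traverse the network does not change the detection value.
    return sum(values) % (1 << 32)

def faulty_nodes(run_a, run_b):
    """Compare per-node detection values from two runs of the same
    reproducible program portion; mismatches flag suspect nodes."""
    return sorted(node for node in run_a if run_a[node] != run_b[node])

# Hypothetical per-node packet payloads (as integers) from two runs;
# node 1's traffic differs between runs, suggesting a fault.
run1 = {n: commutative_checksum(p) for n, p in
        {0: [7, 1, 9], 1: [4, 4], 2: [5, 2, 2]}.items()}
run2 = {n: commutative_checksum(p) for n, p in
        {0: [9, 1, 7], 1: [4, 5], 2: [2, 5, 2]}.items()}
```

    Note that nodes 0 and 2 deliver the same payloads in a different order between runs, yet their detection values agree; that order-insensitivity is why a commutative value is used.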

  15. Methods for estimating magnitude and frequency of floods in Arizona, developed with unregulated and rural peak-flow data through water year 2010

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.

    2014-01-01

    The regional regression equations were integrated into the U.S. Geological Survey’s StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means for retrieving the most up-to-date flood frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.

  16. Application of Fourier transform infrared spectroscopy for monitoring short-chain free fatty acids in Swiss cheese.

    PubMed

    Koca, N; Rodriguez-Saona, L E; Harper, W J; Alvarez, V B

    2007-08-01

    Short-chain free fatty acids (FFA) are important sources of cheese flavor and have been reported to be indicators for assessing quality. The objective of this research was to develop a simple and rapid screening tool for monitoring the short-chain FFA contents in Swiss cheese by using Fourier transform infrared spectroscopy (FTIR). Forty-four Swiss cheese samples were evaluated by using a MIRacle three-reflection diamond attenuated total reflectance (ATR) accessory. Two different sampling techniques were used for FTIR/ATR measurement: direct measurement of Swiss cheese slices (approximately 0.5 g) and measurement of a water-soluble fraction of cheese. The amounts of FFA (propionic, acetic, and butyric acids) in the water-soluble fraction of samples were analyzed by gas chromatography-flame ionization detection as a reference method. Calibration models for both direct measurement and the water-soluble fraction of cheese were developed based on a cross-validated (leave-one-out approach) partial least squares regression by using the regions of 3,000 to 2,800, 1,775 to 1,680, and 1,500 to 900 cm(-1) for short-chain FFA in cheese. Promising performance statistics were obtained for the calibration models of both direct measurement and the water-soluble fraction, with improved performance statistics obtained from the water-soluble extract, particularly for propionic acid. Partial least squares models generated from FTIR/ATR spectra by direct measurement of cheeses gave standard errors of cross-validation of 9.7 mg/100 g of cheese for propionic acid, 9.3 mg/100 g of cheese for acetic acid, and 5.5 mg/100 g of cheese for butyric acid, and correlation coefficients >0.9. Standard error of cross-validation values for the water-soluble fraction were 4.4 mg/100 g of cheese for propionic acid, 9.2 mg/100 g of cheese for acetic acid, and 5.2 mg/100 g of cheese for butyric acid with correlation coefficients of 0.98, 0.95, and 0.92, respectively.
Infrared spectroscopy and chemometrics accurately and precisely predicted the short-chain FFA content in Swiss cheeses and in the water-soluble fraction of the cheese.
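
    The standard error of cross-validation (SECV) quoted above is the root mean square of the residuals obtained when each sample is predicted by a model fitted to all the other samples. A minimal sketch of the leave-one-out scheme, with a one-predictor least-squares fit standing in for the multivariate PLS models and entirely hypothetical absorbance/acid data:

```python
import numpy as np

def secv_loo(x, y):
    """Standard error of cross-validation under a leave-one-out scheme:
    each point is predicted from a fit to the remaining points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    resid = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        resid.append(y[i] - (slope * x[i] + intercept))
    return float(np.sqrt(np.mean(np.square(resid))))

# Hypothetical peak absorbances vs. propionic acid (mg/100 g cheese)
absorbance = [0.11, 0.15, 0.22, 0.27, 0.33, 0.41, 0.46, 0.52]
acid       = [21.0, 29.0, 43.0, 55.0, 64.0, 80.0, 92.0, 103.0]
secv = secv_loo(absorbance, acid)
```

    Because each prediction excludes the sample being predicted, SECV is a more honest estimate of prediction error than the residuals of a single fit to all samples.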

  17. Application of empirical predictive modeling using conventional and alternative fecal indicator bacteria in eastern North Carolina waters

    USGS Publications Warehouse

    Gonzalez, Raul; Conn, Kathleen E.; Crosswell, Joey; Noble, Rachel

    2012-01-01

    Coastal and estuarine waters are the site of intense anthropogenic influence with concomitant use for recreation and seafood harvesting. Therefore, coastal and estuarine water quality has a direct impact on human health. In eastern North Carolina (NC) there are over 240 recreational and 1025 shellfish harvesting water quality monitoring sites that are regularly assessed. Because of the large number of sites, sampling frequency is often only on a weekly basis. This frequency, along with an 18–24 h incubation time for fecal indicator bacteria (FIB) enumeration via culture-based methods, reduces the efficiency of the public notification process. In states like NC where beach monitoring resources are limited but historical data are plentiful, predictive models may offer an improvement for monitoring and notification by providing real-time FIB estimates. In this study, water samples were collected during 12 dry (n = 88) and 13 wet (n = 66) weather events at up to 10 sites. Statistical predictive models for Escherichia coli (EC), enterococci (ENT), and members of the Bacteroidales group were created and subsequently validated. Our results showed that models for EC and ENT (adjusted R2 were 0.61 and 0.64, respectively) incorporated a range of antecedent rainfall, climate, and environmental variables. The most important variables for EC and ENT models were 5-day antecedent rainfall, dissolved oxygen, and salinity. These models successfully predicted FIB levels over a wide range of conditions with a 3% (EC model) and 9% (ENT model) overall error rate for recreational threshold values and a 0% (EC model) overall error rate for shellfish threshold values. Though modeling of members of the Bacteroidales group had less predictive ability (adjusted R2 were 0.56 and 0.53 for fecal Bacteroides spp. and human Bacteroides spp., respectively), the modeling approach and testing provided information on Bacteroidales ecology.
This is the first example of a set of successful statistical predictive models appropriate for assessment of both recreational and shellfish harvesting water quality in estuarine waters.
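
    The adjusted R² reported for these models penalizes ordinary R² for the number of predictors, guarding against inflation as rainfall, climate, and environmental variables are added. A small sketch of the statistic with hypothetical observed/predicted values:

```python
def adjusted_r2(y, yhat, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    where n is the sample size and p the number of predictors."""
    n = len(y)
    ybar = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

# Hypothetical log-FIB observations and model predictions
obs  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.1, 6.2]
adj = adjusted_r2(obs, pred, 2)
```

    With a perfect fit the statistic equals 1; otherwise it is strictly below the plain R² for the same fit.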

  18. Impact of Exposure Uncertainty on the Association between Perfluorooctanoate and Preeclampsia in the C8 Health Project Population.

    PubMed

    Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Verónica M; Savitz, David A; Bartell, Scott M

    2016-01-01

    Uncertainty in exposure estimates from models can result in exposure measurement error and can potentially affect the validity of epidemiological studies. We recently used a suite of environmental models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations and assess the association with preeclampsia from 1990 through 2006 for the C8 Health Project participants. The aims of the current study are to evaluate the impact of uncertainty in estimated PFOA drinking-water concentrations on estimated serum concentrations and their reported epidemiological association with preeclampsia. For each individual public water district, we used Monte Carlo simulations to vary the year-by-year PFOA drinking-water concentration by randomly sampling from lognormal distributions for random error in the yearly public water district PFOA concentrations, systematic error specific to each water district, and global systematic error in the release assessment (using the estimated concentrations from the original fate and transport model as medians and a range of 2-, 5-, and 10-fold uncertainty). Uncertainty in PFOA water concentrations could cause major changes in estimated serum PFOA concentrations among participants. However, there is relatively little impact on the resulting epidemiological association in our simulations. The contribution of exposure uncertainty to the total uncertainty (including regression parameter variance) ranged from 5% to 31%, and bias was negligible. We found that correlated exposure uncertainty can substantially change estimated PFOA serum concentrations, but results in only minor impacts on the epidemiological association between PFOA and preeclampsia. Avanasi R, Shin HM, Vieira VM, Savitz DA, Bartell SM. 2016. Impact of exposure uncertainty on the association between perfluorooctanoate and preeclampsia in the C8 Health Project population.
Environ Health Perspect 124:126-132; http://dx.doi.org/10.1289/ehp.1409044.
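
    The core of the sampling step described above is multiplying each modeled median concentration by a lognormal error factor. A hedged sketch, not the authors' code: the mapping from the 2-, 5-, or 10-fold uncertainty range to the lognormal sigma (via exp(1.96·sigma) = fold) is an assumption made here for illustration:

```python
import math
import random

def perturb_concentrations(medians, fold_uncertainty, seed=None):
    """One Monte Carlo realization of yearly drinking-water
    concentrations: each modeled median is scaled by a lognormal factor
    whose 95% interval spans `fold_uncertainty` times the median in
    either direction (an assumed parameterization)."""
    rng = random.Random(seed)
    sigma = math.log(fold_uncertainty) / 1.96
    return [m * rng.lognormvariate(0.0, sigma) for m in medians]

# Hypothetical yearly median PFOA concentrations (ug/L), 5-fold uncertainty
realization = perturb_concentrations([0.5, 1.2, 3.0], 5.0, seed=42)
```

    Repeating the draw many times and re-running the pharmacokinetic and regression steps on each realization propagates the water-concentration uncertainty into the epidemiological estimate.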

  19. A new moisture tagging capability in the Weather Research and Forecasting model: formulation, validation and application to the 2014 Great Lake-effect snowstorm

    NASA Astrophysics Data System (ADS)

    Insua-Costa, Damián; Miguez-Macho, Gonzalo

    2018-02-01

    A new moisture tagging tool, usually known as the water vapor tracer (WVT) method or online Eulerian method, has been implemented into the Weather Research and Forecasting (WRF) regional meteorological model, enabling it for precise studies on atmospheric moisture sources and pathways. We present here the method and its formulation, along with details of the implementation into WRF. We perform an in-depth validation with a 1-month-long simulation over North America at 20 km resolution, tagging all possible moisture sources: lateral boundaries, continental, maritime or lake surfaces and initial atmospheric conditions. We estimate errors as the moisture or precipitation amounts that cannot be traced back to any source. Validation results indicate that the method exhibits high precision, with errors considerably lower than 1 % during the entire simulation period, for both precipitation and total precipitable water. We apply the method to the Great Lake-effect snowstorm of November 2014, aiming at quantifying the contribution of lake evaporation to the large snow accumulations observed in the event. We perform simulations in a nested domain at 5 km resolution with the tagging technique, demonstrating that about 30-50 % of precipitation in the regions immediately downwind originated from evaporated moisture in the Great Lakes. This contribution increases to between 50 and 60 % of the snow water equivalent in the most severely affected areas, which suggests that evaporative fluxes from the lakes have a fundamental role in producing the most extreme accumulations in these episodes, resulting in the highest socioeconomic impacts.

  20. An Improved GRACE Terrestrial Water Storage Assimilation System For Estimating Large-Scale Soil Moisture and Shallow Groundwater

    NASA Astrophysics Data System (ADS)

    Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.

    2015-12-01

    The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model, in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even if they are characterized by a coarse spatial and temporal resolution, the observed column integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.

  1. Quantitative retrieval of aerosol optical properties by means of ceilometers

    NASA Astrophysics Data System (ADS)

    Wiegner, Matthias; Gasteiger, Josef; Geiß, Alexander

    2016-04-01

    In the last few years extended networks of ceilometers have been established by several national weather services. Based on improvements in the hardware performance of these single-wavelength backscatter lidars and their 24/7 availability, they are increasingly used to monitor mixing layer heights and to derive profiles of the particle backscatter coefficient. As a consequence they are used for a wide range of applications including the dispersion of volcanic ash plumes, validation of chemistry transport models and air quality studies. In this context the development of automated schemes to detect aerosol layers and to identify the mixing layer is essential, in particular as the latter is often used as a proxy for air quality. Of equal importance is the calibration of ceilometer signals as a pre-requisite to derive quantitative optical properties. Recently, it has been emphasized that the majority of ceilometers are influenced by water vapor absorption as they operate in the spectral range of 905-910 nm. If this effect is ignored, errors of the aerosol backscatter coefficient can be as large as 50%, depending on the atmospheric water vapor content and the emitted wavelength spectrum. As a consequence, any other derived quantity, e.g. the extinction coefficient or mass concentration, would suffer from a significant uncertainty in addition to the inherent errors of the inversion of the lidar equation itself. This can be crucial when ceilometer derived profiles shall be used to validate transport models. In this presentation, the methodology proposed by Wiegner and Gasteiger (2015) to correct for water vapor absorption is introduced and discussed.

  2. Performance limitations of temperature-emissivity separation techniques in long-wave infrared hyperspectral imaging applications

    NASA Astrophysics Data System (ADS)

    Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew

    2017-08-01

    Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.

  3. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
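
    The bias described here, attenuation of the autoregressive parameter when white measurement noise is ignored, is easy to demonstrate by simulation. A sketch with a latent AR(1) process observed through noise (hypothetical parameters, not the paper's mood data):

```python
import random

def simulate_observed_ar1(n, phi, noise_sd, seed=1):
    """Latent AR(1) process observed through white measurement noise,
    the situation the AR+WN model is designed to handle."""
    rng = random.Random(seed)
    x, obs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)       # latent dynamics
        obs.append(x + rng.gauss(0.0, noise_sd))  # noisy observation
    return obs

def lag1_autocorrelation(y):
    n, m = len(y), sum(y) / len(y)
    num = sum((y[i] - m) * (y[i + 1] - m) for i in range(n - 1))
    return num / sum((v - m) ** 2 for v in y)

clean = simulate_observed_ar1(5000, 0.5, 0.0)
noisy = simulate_observed_ar1(5000, 0.5, 1.0)
# The naive lag-1 estimate is attenuated once measurement noise is added.
```

    With unit-variance measurement noise the naive estimate shrinks by roughly the ratio of latent to total variance, which is why models that represent the noise explicitly (AR+WN, ARMA) recover the dynamics better.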

  4. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  5. Fault-Tolerant Signal Processing Architectures with Distributed Error Control.

    DTIC Science & Technology

    1985-01-01

    Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications ...However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to...significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and

  6. Optical and Ancillary Measurements at High Latitudes in Support of the MODIS Ocean Validation Program

    NASA Technical Reports Server (NTRS)

    Stramski, Dariusz; Stramska, Malgorzata; Starr, David OC. (Technical Monitor)

    2002-01-01

    The overall goal of this project was to validate and refine ocean color algorithms at high latitudes in the north polar region of the Atlantic. The specific objectives were defined as follows: (1) to identify and quantify errors in the satellite-derived water-leaving radiances and chlorophyll concentration; (2) to develop understanding of these errors; and (3) to improve in-water ocean color algorithms for retrieving chlorophyll concentration in the investigated region.

  7. Filtering Drifter Trajectories Sampled at Submesoscale Resolution

    DTIC Science & Technology

    2015-07-10

    interval 5 min and a positioning error 1.5 m, the acceleration error is 4 10 m/s², a value comparable with the typical Coriolis acceleration of a water...10 m/s², corresponding to the Coriolis acceleration experienced by a water parcel traveling at a speed of 2.2 m/s. This value corresponds to the...computed by integrating the NCOM velocity field contaminated by a random walk process whose effective dispersion coefficient (150 m²/s) was specified as the

  8. Maps and grids of hydrogeologic information created from standardized water-well drillers’ records of the glaciated United States

    USGS Publications Warehouse

    Bayless, E. Randall; Arihood, Leslie D.; Reeves, Howard W.; Sperl, Benjamin J.S.; Qi, Sharon L.; Stipe, Valerie E.; Bunch, Aubrey R.

    2017-01-18

    As part of the National Water Availability and Use Program established by the U.S. Geological Survey (USGS) in 2005, this study took advantage of about 14 million records from State-managed collections of water-well drillers’ records and created a database of hydrogeologic properties for the glaciated United States. The water-well drillers’ records were standardized to be relatively complete and error-free and to provide consistent variables and naming conventions that span all State boundaries. Maps and geospatial grids were developed for (1) total thickness of glacial deposits, (2) total thickness of coarse-grained deposits, (3) specific-capacity based transmissivity and hydraulic conductivity, and (4) texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity. The information included in these maps and grids is required for most assessments of groundwater availability, in addition to having applications to studies of groundwater flow and transport. The texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity were based on an assumed range of hydraulic conductivity values for coarse- and fine-grained deposits and should only be used with complete awareness of the methods used to create them. However, the maps and grids of texture-based estimated equivalent hydraulic conductivity and transmissivity may be useful for application to areas where a range of measured values is available for re-scaling. Maps of hydrogeologic information for some States are presented as examples in this report but maps and grids for all States are available electronically at the project Web site (USGS Glacial Aquifer System Groundwater Availability Study, http://mi.water.usgs.gov/projects/WaterSmart/Map-SIR2015-5105.html) and the Science Base Web site, https://www.sciencebase.gov/catalog/item/58756c7ee4b0a829a3276352.

  9. Estimation of the discharges of the multiple water level stations by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi

    2016-04-01

    This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. The distinctive feature of this work is to simultaneously minimize the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012 and their maximum discharges exceed 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m. Two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm. The discharges are calculated with respect to the parameter values sampled by a simplified version of Latin Hypercube sampling. The observed discharge is surrounded by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters.
    Indeed, the discharge of a water level station can be accurately estimated by setting the parameter values optimized for that station. However, in some cases the discharge calculated with parameter values optimized for one water level station does not match the observed discharge at another water level station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions under the condition that, at every water level station, the error normalized by the minimum error for that station is under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES from the optimization package inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
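
    The selection rule described above, keeping only Pareto solutions whose error at every station stays within a factor of the best error found for that station, can be sketched directly. The error values below are hypothetical, not from the Abe River study:

```python
def select_balanced(pareto_errors, threshold=3.0):
    """Keep Pareto solutions whose error at every station, normalized by
    the minimum error attained for that station, is under the threshold."""
    n_stations = len(pareto_errors[0])
    best = [min(sol[j] for sol in pareto_errors) for j in range(n_stations)]
    return [sol for sol in pareto_errors
            if all(sol[j] / best[j] < threshold for j in range(n_stations))]

# Hypothetical mean squared errors at three water level stations
pareto = [(10.0, 40.0, 12.0), (12.0, 15.0, 11.0), (35.0, 14.0, 30.0)]
balanced = select_balanced(pareto)
```

    The third solution is excellent at station two but 3.5 times worse than the best at station one, so it is discarded; the rule trades a little accuracy at each station for acceptable accuracy at all of them.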

  10. Watershed reliability, resilience and vulnerability analysis under uncertainty using water quality data.

    PubMed

    Hoque, Yamen M; Tripathi, Shivam; Hantush, Mohamed M; Govindaraju, Rao S

    2012-10-30

    A method for assessment of watershed health is developed by employing measures of reliability, resilience and vulnerability (R-R-V) using stream water quality data. Observed water quality data are usually sparse, so that a water quality time-series is often reconstructed using surrogate variables (streamflow). A Bayesian algorithm based on relevance vector machine (RVM) was employed to quantify the error in the reconstructed series, and a probabilistic assessment of watershed status was conducted based on established thresholds for various constituents. As an application example, observed water quality data for several constituents at different monitoring points within the Cedar Creek watershed in north-east Indiana (USA) were utilized. Considering uncertainty in the data for the period 2002-2007, the R-R-V analysis revealed that the Cedar Creek watershed tends to be in compliance with respect to selected pesticides, ammonia and total phosphorus. However, the watershed was found to be prone to violations of sediment standards. Ignoring uncertainty in the water quality time-series led to misleading results especially in the case of sediments. Results indicate that the methods presented in this study may be used for assessing the effects of different stressors over a watershed. The method shows promise as a management tool for assessing watershed health. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Estimation of Scale Deposition in the Water Walls of an Operating Indian Coal Fired Boiler: Predictive Modeling Approach Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Kumari, Amrita; Das, Suchandan Kumar; Srivastava, Prem Kumar

    2016-04-01

Application of computational intelligence to the prediction of industrial processes is in extensive use across industrial sectors, including the power sector. An ANN model based on the multi-layer perceptron is proposed in this paper to predict the deposition behavior of oxide scale on the waterwall tubes of a coal-fired boiler. The input parameters comprise boiler water chemistry and associated operating parameters such as pH, alkalinity, total dissolved solids, specific conductivity, iron and dissolved oxygen concentrations of the feed water, and the local heat flux on the boiler tube. An efficient gradient-based network optimization algorithm was employed to minimize neural prediction errors. The effects of heat flux, iron content, pH, the concentration of total dissolved solids in the feed water, and other operating variables on scale deposition behavior were studied. Heat flux, iron content and pH of the feed water were observed to have the strongest influence on the rate of oxide scale deposition in the water walls of an Indian boiler. Reasonably good agreement between ANN model predictions and measured values of the oxide scale deposition rate was observed, corroborated by the regression fit between these values.

  12. Pedotransfer functions to estimate soil water content at field capacity and permanent wilting point in hot Arid Western India

    NASA Astrophysics Data System (ADS)

    Santra, Priyabrata; Kumar, Mahesh; Kumawat, R. N.; Painuli, D. K.; Hati, K. M.; Heuvelink, G. B. M.; Batjes, N. H.

    2018-04-01

Characterization of soil water retention, e.g., water content at field capacity (FC) and permanent wilting point (PWP) over a landscape plays a key role in efficient utilization of available scarce water resources in dry land agriculture; however, direct measurement thereof for multiple locations in the field is not always feasible. Therefore, pedotransfer functions (PTFs) were developed to estimate soil water retention at FC and PWP for dryland soils of India. A soil database available for Arid Western India (N=370) was used to develop PTFs. The developed PTFs were tested in two independent datasets from arid regions of India (N=36) and an arid region of USA (N=1789). While testing these PTFs using independent data from India, root mean square error (RMSE) was found to be 2.65 and 1.08 for FC and PWP, respectively, whereas for most of the tested 'established' PTFs, the RMSE was >3.41 and >1.15, respectively. Performance of the developed PTFs on the independent dataset from USA was comparable with estimates derived from 'established' PTFs. For wide applicability of the developed PTFs, a user-friendly soil moisture calculator was developed. The PTFs developed in this study may be quite useful to farmers for scheduling irrigation water as per soil type.
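The mechanics of fitting such a PTF — a linear regression of water content at FC on basic soil properties — can be sketched as follows. The tiny dataset and resulting coefficients below are synthetic, purely to show the procedure, and are not the published PTFs:

```python
import numpy as np

# Illustrative PTF fit: regress water content at field capacity on
# sand %, clay % and organic carbon %. Data are synthetic.
texture = np.array([
    # sand %, clay %, organic carbon %
    [85.0, 5.0, 0.2],
    [70.0, 15.0, 0.4],
    [55.0, 25.0, 0.6],
    [40.0, 35.0, 0.8],
])
fc = np.array([8.0, 14.0, 20.0, 26.0])   # water content at FC, vol. %

X = np.column_stack([np.ones(len(fc)), texture])   # intercept + predictors
coef, *_ = np.linalg.lstsq(X, fc, rcond=None)      # least-squares fit
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - fc) ** 2)))   # goodness of fit
```

In practice the candidate predictors, transformations and validation against an independent dataset (as done with the Indian and USA data above) are what distinguish one PTF from another.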

  13. Generic Protocol for the Verification of Ballast Water Treatment Technology. Version 5.1

    DTIC Science & Technology

    2010-09-01

(The record text consists of table-of-contents and glossary fragments from the protocol document rather than an abstract.) Glossary excerpts: "…or persistent distortion of a measurement process that causes errors in one direction." "Challenge Water: Water supplied to a treatment system under …"

  14. Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system

    USGS Publications Warehouse

    Stewart, M.; Langevin, C.

    1999-01-01

A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 × 10⁵ m³/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75-day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12-year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping.
While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75-day simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.

  15. Failure mode and effective analysis ameliorate awareness of medical errors: a 4-year prospective observational study in critically ill children.

    PubMed

    Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela

    2015-12-01

Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for the detection of adverse events is becoming widespread in healthcare. Incident reporting (IR) and failure mode and effective analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and to evaluate the effect of FMEA application on the number and severity of the errors detected. In this prospective observational study, we evaluated the frequency of incidents documented in standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.

  16. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.

    2016-05-04

    It is commonly recognized that suspended-sediment concentrations in rivers can change rapidly in time and independently of water discharge during important sediment‑transporting events (for example, during floods); thus, suspended-sediment measurements at closely spaced time intervals are necessary to characterize suspended‑sediment loads. Because the manual collection of sufficient numbers of suspended-sediment samples required to characterize this variability is often time and cost prohibitive, several “surrogate” techniques have been developed for in situ measurements of properties related to suspended-sediment characteristics (for example, turbidity, laser-diffraction, acoustics). Herein, we present a new physically based method for the simultaneous measurement of suspended-silt-and-clay concentration, suspended-sand concentration, and suspended‑sand median grain size in rivers, using multi‑frequency arrays of single-frequency side‑looking acoustic-Doppler profilers. The method is strongly grounded in the extensive scientific literature on the incoherent scattering of sound by random suspensions of small particles. In particular, the method takes advantage of theory that relates acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We develop the theory and methods, and demonstrate the application of the method at six study sites on the Colorado River and Rio Grande, where large numbers of suspended-sediment samples have been collected concurrently with acoustic attenuation and backscatter measurements over many years. 
The method produces acoustical measurements of suspended-silt-and-clay and suspended-sand concentration (in units of mg/L), and acoustical measurements of suspended-sand median grain size (in units of mm) that are generally in good to excellent agreement with concurrent physical measurements of these quantities in the river cross sections at these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method. Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. 
Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  17. Transforming Surface Water Hydrology Through SWOT Altimetry

    NASA Astrophysics Data System (ADS)

    Alsdorf, Douglas; Mognard, Nelly; Rodriguez, Ernesto

    2013-09-01

SWOT will measure water surface elevations across rivers, lakes, wetlands, and reservoirs in a 120-km-wide swath using decimeter-scale pixels with centimeter-scale height accuracies. Nothing like this "water surface topography" has been collected on a consistent basis by any method; SWOT will thus provide a transformative measurement for global hydrology. Storage change measurements from SWOT are expected to have an error of 10% or better for water bodies of (250 m)² and larger. Discharge estimation is complicated by the lack of channel bathymetric knowledge; nevertheless, two model-based studies of the Ohio River suggest SWOT discharge errors will be about 10%. Important questions will be addressed via SWOT measurements, e.g., (1) What is the water balance of the Congo Basin, and indeed of any basin? (2) Where does a wetland receive its water: from upland runoff or from an adjacent river? (3) What are the implications for transboundary rivers?

  18. Differential absorption lidar observation on small-time-scale features of water vapor in the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Kong, Wei; Li, Jiatang; Liu, Hao; Chen, Tao; Hong, Guanglie; Shu, Rong

    2017-11-01

Observation of small-time-scale features of water vapor density is essential for the study of turbulence, convection and many other fast atmospheric processes. Because of the high signal-to-noise ratio of the elastic signal acquired by differential absorption lidar, the technique has great potential for all-day observation of water vapor turbulence. This paper presents a 935-nm differential absorption lidar developed by the Shanghai Institute of Technical Physics of the Chinese Academy of Sciences for water vapor turbulence observation. A midday case is presented to demonstrate the daytime observation ability of the system. The "autocovariance method" is used to separate the contribution of water vapor fluctuations from random error. The results show that the relative error is less than 10% at a temporal resolution of 10 seconds and a spatial resolution of 60 meters in the ABL, indicating that the system performs well for daytime water vapor turbulence observation.
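The autocovariance method rests on the fact that uncorrelated (white) detection noise contributes only to the lag-0 autocovariance of the time series, while atmospheric fluctuations are correlated across lags; extrapolating the autocovariance from small nonzero lags back to lag 0 therefore estimates the atmospheric variance, and the remainder is noise. A minimal sketch with a synthetic series (the real analysis uses measured lidar profiles):

```python
import numpy as np

def split_variance(x):
    """Separate atmospheric variance from white-noise variance
    by linearly extrapolating the autocovariance at lags 1-2 to lag 0."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # autocovariance at lags 0, 1, 2
    acov = [float(np.dot(x[:n - k], x[k:]) / n) for k in range(3)]
    atm_var = 2.0 * acov[1] - acov[2]   # extrapolated atmospheric part
    noise_var = acov[0] - atm_var       # white noise lives only at lag 0
    return atm_var, noise_var

# synthetic example: slow "atmospheric" sine (variance 0.5)
# plus white noise of variance 0.25
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 20.0 * np.pi, 5000))
atm, noise = split_variance(signal + rng.normal(0.0, 0.5, 5000))
```
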

  19. Swimming and other activities: applied aspects of fish swimming performance

    USGS Publications Warehouse

    Castro-Santos, Theodore R.; Farrell, A.P.

    2011-01-01

    Human activities such as hydropower development, water withdrawals, and commercial fisheries often put fish species at risk. Engineered solutions designed to protect species or their life stages are frequently based on assumptions about swimming performance and behaviors. In many cases, however, the appropriate data to support these designs are either unavailable or misapplied. This article provides an overview of the state of knowledge of fish swimming performance – where the data come from and how they are applied – identifying both gaps in knowledge and common errors in application, with guidance on how to avoid repeating mistakes, as well as suggestions for further study.

  20. Application of Ensemble Kalman Filter in Power System State Tracking and Sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Huang, Zhenyu; Zhou, Ning

    2012-05-01

The Ensemble Kalman Filter (EnKF) is proposed to track the dynamic states of generators. The EnKF algorithm and its application to generator state tracking are presented in detail. The accuracy and sensitivity of the method are analyzed with respect to initial state errors, measurement noise, unknown fault locations, time steps and parameter errors. It is demonstrated through simulation studies that even with some errors in the parameters, the developed EnKF can effectively track generator dynamic states using disturbance data.
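The core of any EnKF is the analysis (measurement-update) step, in which each ensemble member is nudged toward the observations by a Kalman gain computed from ensemble sample covariances. A minimal stochastic-EnKF sketch with a linear observation operator follows; it illustrates the technique generically and is not the paper's power-system implementation:

```python
import numpy as np

def enkf_update(ens, y, H, r, rng):
    """One stochastic EnKF analysis step.
    ens: (n_states, n_members) ensemble; y: observation vector;
    H: linear observation operator; r: observation error variance."""
    m = ens.shape[1]
    X = ens - ens.mean(axis=1, keepdims=True)          # state anomalies
    Y = H @ ens
    Yp = Y - Y.mean(axis=1, keepdims=True)             # predicted-obs anomalies
    P_yy = Yp @ Yp.T / (m - 1) + r * np.eye(len(y))    # innovation covariance
    P_xy = X @ Yp.T / (m - 1)                          # state-obs cross-covariance
    K = P_xy @ np.linalg.inv(P_yy)                     # Kalman gain
    # perturbed observations, one realization per member
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(r), size=(len(y), m))
    return ens + K @ (y_pert - Y)                      # analysis ensemble

# toy example: uncertain scalar state, accurate observation of 1.0
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(1, 500))
post = enkf_update(prior, np.array([1.0]), np.eye(1), 0.01, rng)
```

After the update, the ensemble mean moves close to the observation and the ensemble spread shrinks, which is the behavior exploited for state tracking.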

  1. New Methods for Improved Double Circular-Arc Helical Gears

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Lu, Jian

    1997-01-01

    The authors have extended the application of double circular-arc helical gears for internal gear drives. The geometry of the pinion and gear tooth surfaces has been determined. The influence of errors of alignment on the transmission errors and the shift of the bearing contact have been investigated. Application of a predesigned parabolic function for the reduction of transmission errors was proposed. Methods of grinding of the pinion-gear tooth surfaces by a disk-shaped tool and a grinding worm were proposed.

  2. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, in order to understand their information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and have shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example, due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), the error characteristics measured (e.g. standard error, confidence bounds) and the error magnitude. Our results were primarily split by data type: rainfall uncertainty was controlled most strongly by spatial scale, while flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. 
Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  3. Two errors in enteric epidemiology: the stories of Austin Flint and Max von Pettenkofer.

    PubMed

    Evans, A S

    1985-01-01

    The misconceptions of two physicians, Austin Flint and Max von Pettenkofer, in interpreting epidemiologic data on the water transmission of enteric disease are reviewed. Austin Flint failed to recognize the transmission of typhoid fever from well water in an epidemic he investigated in North Boston, New York, in 1843. He later discovered and freely admitted his error. Max von Pettenkofer, who had studied cholera in the 1854 outbreak and in many subsequent outbreaks, failed to confirm John Snow's observations in England on the water transmission of cholera. Pettenkofer eventually swallowed live cholera bacilli and did not develop cholera. He remained convinced to the end of his life that cholera is not directly transmitted by drinking water.

  4. A numerical model for water and heat transport in freezing soils with nonequilibrium ice-water interfaces

    NASA Astrophysics Data System (ADS)

    Peng, Zhenyang; Tian, Fuqiang; Wu, Jingwei; Huang, Jiesheng; Hu, Hongchang; Darnault, Christophe J. G.

    2016-09-01

A one-dimensional numerical model of heat and water transport in freezing soils is developed by assuming that ice-water interfaces are not necessarily in equilibrium. The Clapeyron equation, which is derived for a static ice-water interface using thermal equilibrium theory, cannot be readily applied to a dynamic system such as freezing soils. Therefore, we handled the redistribution of liquid water with the Richards equation, in which the sink term is replaced by the freezing rate of pore water; this rate is proportional, by a coefficient β, to the extent of supercooling and to the water content available for freezing. Three short-term laboratory column simulations show reasonable agreement with observations, with the standard error of simulated water content ranging between 0.007 and 0.011 cm³ cm⁻³, an improvement in accuracy over models that assume equilibrium ice-water interfaces. Simulation results suggest that when the freezing front is fixed at a specific depth, the deviation of the ice-water interface from equilibrium at that location increases with time. This deviation weakens, however, when the freezing front slowly penetrates to greater depth, leaving a thinner layer of soil with significant deviation. The coefficient β plays an important role in the simulation of heat and water transport: a smaller β results in a larger deviation of the ice-water interface from equilibrium and a lagging estimate of the freezing front. It also leads to an underestimation of water content in soils that were previously frozen at a rapid freezing rate, and an overestimation of water content in the rest of the soil profile.
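The freezing-rate closure described above can be written as a one-line rate law. The sketch below uses illustrative values and hypothetical parameter names; it shows only the proportionality to supercooling and available liquid water, not the full coupled heat-water solver:

```python
# Hypothetical sketch of the nonequilibrium freezing-rate sink term:
# pore water freezes at a rate proportional (coefficient beta) to the
# supercooling below the freezing point and to the liquid water still
# available to freeze. All values are illustrative.

def freezing_rate(temp_c, theta_liquid, theta_residual, beta, t_freeze=0.0):
    supercooling = max(t_freeze - temp_c, 0.0)          # degrees below freezing
    available = max(theta_liquid - theta_residual, 0.0)  # freezable water content
    return beta * supercooling * available

# 2 degrees of supercooling, 0.25 of freezable water, beta = 0.1
print(freezing_rate(-2.0, 0.30, 0.05, beta=0.1))  # ~0.1 * 2.0 * 0.25 = 0.05
```
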

  5. Rainfall-Runoff and Water-Balance Models for Management of the Fena Valley Reservoir, Guam

    USGS Publications Warehouse

    Yeung, Chiu W.

    2005-01-01

    The U.S. Geological Survey's Precipitation-Runoff Modeling System (PRMS) and a generalized water-balance model were calibrated and verified for use in estimating future availability of water in the Fena Valley Reservoir in response to various combinations of water withdrawal rates and rainfall conditions. Application of PRMS provides a physically based method for estimating runoff from the Fena Valley Watershed during the annual dry season, which extends from January through May. Runoff estimates from the PRMS are used as input to the water-balance model to estimate change in water levels and storage in the reservoir. A previously published model was calibrated for the Maulap and Imong River watersheds using rainfall data collected outside of the watershed. That model was applied to the Almagosa River watershed by transferring calibrated parameters and coefficients because information on daily diversions at the Almagosa Springs upstream of the gaging station was not available at the time. Runoff from the ungaged land area was not modeled. For this study, the availability of Almagosa Springs diversion data allowed the calibration of PRMS for the Almagosa River watershed. Rainfall data collected at the Almagosa rain gage since 1992 also provided better estimates of rainfall distribution in the watershed. In addition, the discontinuation of pan-evaporation data collection in 1998 required a change in the evapotranspiration estimation method used in the PRMS model. These reasons prompted the update of the PRMS for the Fena Valley Watershed. Simulated runoff volume from the PRMS compared reasonably with measured values for gaging stations on Maulap, Almagosa, and Imong Rivers, tributaries to the Fena Valley Reservoir. 
On the basis of monthly runoff simulation for the dry seasons included in the entire simulation period (1992-2001), the total volume of runoff can be predicted within -3.66 percent at Maulap River, within 5.37 percent at Almagosa River, and within 10.74 percent at Imong River. Month-end reservoir volumes simulated by the reservoir water-balance model for both calibration and verification periods compared closely with measured reservoir volumes. Errors for the calibration periods ranged from 4.51 percent [208.7 acre-feet (acre-ft) or 68.0 million gallons (Mgal)] to -5.90 percent (-317.8 acre-ft or -103.6 Mgal). For the verification periods, errors ranged from 1.69 percent (103.5 acre-ft or 33.7 Mgal) to -4.60 percent (-178.7 acre-ft or -58.2 Mgal). Monthly simulation bias ranged from -0.19 percent for the calibration period to -0.98 percent for the verification period; relative error ranged from -0.37 to -1.12 percent, respectively. Relatively small bias indicated that the model did not consistently overestimate or underestimate reservoir volume.
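The reservoir water-balance model described above reduces, in its simplest form, to a monthly storage recursion. The sketch below is a generic illustration with invented numbers; the report's model includes additional terms (e.g. rainfall on the reservoir surface, spill) and calibrated parameters:

```python
# Minimal monthly water-balance recursion (illustrative only):
# storage(t+1) = storage(t) + inflow - withdrawal - evaporation,
# bounded between empty and reservoir capacity. Units: acre-feet.

def simulate_storage(initial, inflows, withdrawals, evap, capacity):
    s = initial
    trace = []
    for q_in, q_out, e in zip(inflows, withdrawals, evap):
        s = min(max(s + q_in - q_out - e, 0.0), capacity)
        trace.append(s)
    return trace

print(simulate_storage(1000.0, [200, 50], [120, 120], [30, 30], 1200.0))
# -> [1050.0, 950.0]
```

With runoff from a rainfall-runoff model such as PRMS supplied as the inflow series, a recursion of this kind yields the month-end reservoir volumes that are compared with measured volumes.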

  6. Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that uses ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
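The geometry behind the compass-misalignment error admits a simple back-of-envelope estimate: the platform-velocity vector subtracted from the measurement is rotated by the misalignment angle δ, leaving a residual of magnitude 2·V·sin(δ/2), which scales with platform speed V. This is a generic sketch of that geometry, not a formula quoted from the report:

```python
import math

# Residual velocity error left after removing a boat-velocity vector
# that the misaligned compass has rotated by delta_deg: the difference
# between a vector of length v and the same vector rotated by delta
# has magnitude 2 * v * sin(delta / 2).

def misalignment_error(boat_speed, delta_deg):
    return 2.0 * boat_speed * math.sin(math.radians(delta_deg) / 2.0)

# a 5-degree compass misalignment at 1 m/s boat speed
print(round(misalignment_error(1.0, 5.0), 3))  # -> 0.087 (m/s)
```

This is why the error grows with the ratio of instrument speed to target speed, and why capping platform speed (about 1 m/s above) limits the damage.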

  7. Single Versus Multiple Events Error Potential Detection in a BCI-Controlled Car Game With Continuous and Discrete Feedback.

    PubMed

    Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R

    2016-03-01

    This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
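The multiple-events idea can be reduced to averaging single-event classifier outputs over a trial and thresholding the average, so that individually noisy event classifications become reliable in aggregate. The probabilities and threshold below are illustrative, not the paper's classifier outputs:

```python
# Sketch of the ME method: each event in a motor-imagery trial gets a
# single-event probability of being an error; the trial is flagged as
# erroneous only if the average over its events crosses the threshold.

def trial_is_error(event_probs, threshold=0.5):
    return sum(event_probs) / len(event_probs) > threshold

# individually noisy single-event outputs, decisive in aggregate
print(trial_is_error([0.6, 0.4, 0.7, 0.65]))   # -> True
print(trial_is_error([0.3, 0.45, 0.4, 0.55]))  # -> False
```
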

  8. Validation on MERSI/FY-3A precipitable water vapor product

    NASA Astrophysics Data System (ADS)

    Gong, Shaoqi; Fiifi Hagan, Daniel; Lu, Jing; Wang, Guojie

    2018-01-01

Precipitable water vapor is one of the most active gases in the atmosphere and strongly affects the climate. China's second-generation polar-orbiting meteorological satellite FY-3A, equipped with the Medium Resolution Spectral Imager (MERSI), is able to detect atmospheric water vapor. In this paper, water vapor data from AERONET, radiosondes and MODIS were used to validate the accuracy of the MERSI water vapor product in different seasons and climatic regions of East Asia. The results show that the MERSI water vapor values are relatively lower than those of the other instruments and that its accuracy is generally lower. The mean bias (MB) was -0.8 to -12.7 mm, the root mean square error (RMSE) was 2.2-17.0 mm, and the mean absolute percentage error (MAPE) varied from 31.8% to 44.1%. Spatially, the accuracy of the MERSI water vapor product in descending order was North China, West China, Japan-Korea, East China, and South China, while seasonally the accuracy was best in winter, followed by spring, then autumn, and lowest in summer. The errors of the MERSI water vapor product were found to be mainly due to the low accuracy of the radiometric calibration of the MERSI absorption channel, along with an inaccurate look-up table of apparent reflectance and water vapor within the retrieval algorithm. In addition, surface reflectance, mixed cloud pixels in the imagery, the vertical profiles of atmospheric humidity and temperature, and haze were also found to affect the accuracy of the MERSI water vapor product.
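The three validation statistics quoted above have standard definitions, computed from paired satellite and reference values. A small self-contained example with invented numbers:

```python
import math

# MB, RMSE and MAPE for paired satellite (sat) and reference (ref) values.
# The data here are invented, purely to show the formulas.

def mb(sat, ref):     # mean bias
    return sum(s - r for s, r in zip(sat, ref)) / len(sat)

def rmse(sat, ref):   # root mean square error
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(sat, ref)) / len(sat))

def mape(sat, ref):   # mean absolute percentage error, in percent
    return 100.0 * sum(abs(s - r) / r for s, r in zip(sat, ref)) / len(sat)

sat = [10.0, 18.0, 30.0]   # e.g. water vapor, mm
ref = [12.0, 20.0, 28.0]
print(round(mb(sat, ref), 3), round(rmse(sat, ref), 2), round(mape(sat, ref), 1))
# -> -0.667 2.0 11.3
```
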

  9. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
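
    Regional peak-flow regressions on drainage area are conventionally of power-law form, Q = a * A**b. A sketch of how such an equation would be applied, with placeholder coefficients (a and b here are illustrative, not the published Oklahoma Panhandle values) and the report's stated applicability limit:

```python
def peak_flow_estimate(drainage_area_sq_mi, a, b):
    """Apply a regional peak-flow regression of the usual log-linear
    (power-law) form Q = a * A**b, in cubic feet per second.
    Coefficients a and b must come from the published equations for the
    desired annual exceedance probability."""
    if drainage_area_sq_mi > 2060:
        # The report limits use to contributing drainage areas <= ~2,060 sq mi.
        raise ValueError("outside the calibrated range of the equations")
    return a * drainage_area_sq_mi ** b
```

    Fitting in log space (hence "log-linear") is what makes generalized-least-squares regression applicable to these strongly skewed streamflow statistics.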

  10. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications affected by the inter-symbol interference (ISI) caused by multipath propagation, especially in shallow-water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE), with symbol-rate and fractional-rate sampling in the frequency domain (FD), and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSPs) measured in the lake and the finite-element ray tracing (Bellhop) method, the shallow-water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver significantly improves performance and achieves more reliable data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.
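
    The feedforward part of an MMSE frequency-domain equalizer has a standard closed form per subcarrier; a minimal sketch (linear-FDE taps only — the paper's receiver adds decision feedback and iterative SCTCM decoding on top of this):

```python
def mmse_fde_taps(H, snr):
    """Per-subcarrier MMSE frequency-domain equalizer taps,
    W[k] = conj(H[k]) / (|H[k]|**2 + 1/snr),
    where H is the channel frequency response and snr is the linear
    signal-to-noise ratio.  The 1/snr term regularizes deep spectral
    nulls that a zero-forcing equalizer would amplify."""
    return [h.conjugate() / (abs(h) ** 2 + 1.0 / snr) for h in H]

# A flat channel at high SNR is nearly inverted:
taps = mmse_fde_taps([1 + 0j, 0.5 + 0.5j], snr=100.0)
```

    Equalization multiplies the received FFT by these taps before the IFFT back to the time domain, which is what makes FDE cheap compared with long time-domain filters in highly dispersive underwater channels.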

  11. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    NASA Technical Reports Server (NTRS)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.
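
    The simplest form of time-redundant concurrent error detection — recompute and compare, trading throughput rather than hardware — can be sketched as follows (an illustrative scheme, not the specific periodic-application technique of the thesis):

```python
def with_concurrent_error_detection(op, *args):
    """Time-redundant CED sketch: run the computation twice and compare.
    A transient or intermittent fault that corrupts only one execution
    is detected; the cost is duplicated computation time, not duplicated
    hardware."""
    first = op(*args)
    second = op(*args)
    if first != second:
        raise RuntimeError("transient error detected")
    return first

print(with_concurrent_error_detection(lambda x, y: x + y, 2, 3))
```

    Applying such checks only periodically, as the thesis proposes, recovers most of the lost performance while still bounding how long an error can go undetected.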

  12. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
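
    The propagation described — following a signal-value error from block to block through the model — amounts to a reachability computation over the dataflow graph. A minimal sketch (illustrative only; the patented method tracks richer error information than a boolean flag):

```python
def propagate_errors(edges, initial_errors):
    """Propagate signal-value error flags through a dataflow graph.

    edges: (src, dst) pairs meaning dst consumes src's output signal;
    initial_errors: blocks whose output signal is known erroneous.
    Returns the set of blocks that may be affected.
    """
    affected = set(initial_errors)
    changed = True
    while changed:  # fixed-point iteration, so feedback loops terminate too
        changed = False
        for src, dst in edges:
            if src in affected and dst not in affected:
                affected.add(dst)
                changed = True
    return affected

print(propagate_errors(
    [("sensor", "filter"), ("filter", "display"), ("filter", "logger")],
    {"sensor"}))
```

    The fixed-point formulation is what lets the same analysis apply across models of computation — synchronous data flow, Kahn process networks, and avionics models alike — since each only changes what counts as an edge.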

  13. Method and apparatus for faulty memory utilization

    DOEpatents

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2016-04-19

    A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.
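
    The claim's decision logic can be condensed into a small dispatch function. This is a paraphrase for illustration — the function and return-value names are invented, not from the patent:

```python
def handle_predicted_failure(page_error_tolerant, can_migrate):
    """Sketch of the claimed decision logic for a page in a region where
    a memory failure is predicted: migrate non-error-tolerant data if
    possible; if it cannot be migrated, warn the application of the
    predicted failure; error-tolerant data is left in place with a
    notification only."""
    if not page_error_tolerant:
        return "migrate" if can_migrate else "notify_failure_predicted"
    return "notify_error_tolerant"
```

    The point of the split is that error-tolerant data (e.g., lossy caches or approximate buffers) can keep using memory that is predicted to fail, instead of shrinking the usable pool.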

  14. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
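
    Spearman's classical correction divides the observed correlation by the square root of the product of the two reliabilities. A minimal sketch, with the one-variable case shown as one simple form of partial correction (the paper's modification generalizes this further):

```python
import math

def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
    """Spearman's correction for attenuation:
    r_true = r_xy / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the reliabilities of the two measures.
    Passing 1.0 for one reliability corrects for error in only one
    variable."""
    return r_xy / math.sqrt(rel_x * rel_y)

print(disattenuate(0.42, rel_x=0.7, rel_y=0.8))
```

    Because the correction inflates the correlation, using it in power analysis changes the required sample size — the application the paper pursues.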

  15. Application of database methods to the prediction of B3LYP-optimized polyhedral water cluster geometries and electronic energies

    NASA Astrophysics Data System (ADS)

    Anick, David J.

    2003-12-01

    A method is described for a rapid prediction of B3LYP-optimized geometries for polyhedral water clusters (PWCs). Starting with a database of 121 B3LYP-optimized PWCs containing 2277 H-bonds, linear regressions yield formulas correlating O-O distances, O-O-O angles, and H-O-H orientation parameters, with local and global cluster descriptors. The formulas predict O-O distances with a rms error of 0.85 pm to 1.29 pm and predict O-O-O angles with a rms error of 0.6° to 2.2°. An algorithm is given which uses the O-O and O-O-O formulas to determine coordinates for the oxygen nuclei of a PWC. The H-O-H formulas then determine positions for two H's at each O. For 15 test clusters, the gap between the electronic energy of the predicted geometry and the true B3LYP optimum ranges from 0.11 to 0.54 kcal/mol or 4 to 18 cal/mol per H-bond. Linear regression also identifies 14 parameters that strongly correlate with PWC electronic energy. These descriptors include the number of H-bonds in which both oxygens carry a non-H-bonding H, the number of quadrilateral faces, the number of symmetric angles in 5- and in 6-sided faces, and the square of the cluster's estimated dipole moment.

  16. Application of MUSLE for the prediction of phosphorus losses.

    PubMed

    Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi

    2010-01-01

    Soil erosion in forestlands affects not only land productivity but also downstream water bodies. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its variants. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) model to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model reproduced the observed values well. The results of the study confirmed the efficient application of the revised MUSLE for estimating storm-wise P losses in the study area, with a level of agreement beyond 93% and an acceptable estimation error of about 35%.
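
    For reference, Williams' MUSLE in its common metric form replaces the USLE rainfall-erosivity factor with a runoff-energy term. A sketch of the per-event sediment yield (the study's calibrated revision and its P-loss link are not reproduced here):

```python
def musle_sediment_yield(runoff_vol_m3, peak_flow_m3s, K, LS, C, P):
    """Williams' MUSLE, commonly written as
    Y = 11.8 * (Q * qp)**0.56 * K * LS * C * P,
    giving event sediment yield Y in tonnes from runoff volume Q (m^3),
    peak flow qp (m^3/s), and the usual USLE soil-erodibility (K),
    slope (LS), cover (C), and practice (P) factors."""
    return 11.8 * (runoff_vol_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P
```

    Event phosphorus loss is then typically estimated from sediment yield together with the P content or enrichment ratio of the eroded soil, which is where a revised, locally calibrated version of the model enters.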

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for six DNA polymerases commonly used in PCR applications, including three polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), the lowest error rates were found with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these three enzymes and are more than 10-fold lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the three high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
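
    A commonly used fidelity metric normalizes observed mutations by the sequence space interrogated and by the number of template doublings. A generic sketch (this is the conventional formula, not necessarily the exact analysis used in the study):

```python
import math

def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
    """Errors per base per template doubling:
    ER = mutations / (bases_sequenced * d),  d = log2(fold amplification).
    Normalizing by doublings makes rates comparable across experiments
    with different cycle numbers and yields."""
    doublings = math.log2(fold_amplification)
    return mutations / (bases_sequenced * doublings)
```

    This normalization is one reason direct cross-study comparisons are hard: studies that report errors per base without accounting for amplification are not on the same scale.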

  18. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  19. A new mechatronic assistance system for the neurosurgical operating theatre: implementation, assessment of accuracy and application concepts.

    PubMed

    Rachinger, Jens; Bumm, Klaus; Wurm, Jochen; Bohr, Christopher; Nissen, Urs; Dannenmann, Tim; Buchfelder, Michael; Iro, Heinrich; Nimsky, Christopher

    2007-01-01

    To introduce a new robotic system to the field of neurosurgery and report on a preliminary assessment of accuracy as well as on envisioned application concepts. Based on experience with another system (Evolution 1, URS Inc., Schwerin, Germany), technical advancements are discussed. The basic module is an industrial 6 degrees of freedom robotic arm with a modified control element. The system combines frameless stereotaxy, robotics, and endoscopy. The robotic reproducibility error and the overall error were evaluated. For accuracy testing CT markers were placed on a cadaveric head and pinpointed with the robot's tool tip, both fully automated and telemanipulatory. Applicability in a clinical setting, user friendliness, safety and flexibility were assessed. The new system is suitable for use in the neurosurgical operating theatre. Hard- and software are user-friendly and flexible. The mean reproducibility error was 0.052-0.062 mm, the mean overall error was 0.816 mm. The system is less cumbersome and much easier to use than the Evolution 1. With its user-friendly interface and reliable safety features, its high application accuracy and flexibility, the new system is a versatile robotic platform for various neurosurgical applications. Adaptations for different applications are currently being realized. Copyright (c) 2007 S. Karger AG, Basel.

  20. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
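
    OBRA searches all band pairs for the log-ratio predictor X = ln(R1/R2) whose linear regression against depth gives the highest R². A self-contained sketch of that search (illustrative; the published analysis includes additional screening and diagnostics):

```python
import math

def obra_calibrate(spectra, depths, bands):
    """Optimal band ratio analysis sketch: for each ordered band pair,
    regress depth on X = ln(R1/R2) and keep the pair with highest R^2.
    spectra: list of per-point reflectance lists; depths: matched depths;
    bands: band indices to search.  Returns (r2, i, j, b0, b1)."""
    def fit(x, y):
        n = len(x); mx = sum(x) / n; my = sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        if sxx == 0.0:                      # constant ratio: useless pair
            return 0.0, 0.0, float("-inf")
        b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        b0 = my - b1 * mx
        ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return b0, b1, 1.0 - ss_res / ss_tot
    best = None
    for i in bands:
        for j in bands:
            if i == j:
                continue
            x = [math.log(s[i] / s[j]) for s in spectra]
            b0, b1, r2 = fit(x, depths)
            if best is None or r2 > best[0]:
                best = (r2, i, j, b0, b1)
    return best
```

    The ratio cancels much of the depth-independent variation (substrate brightness, illumination), which is why a single log-ratio can carry most of the depth signal even in turbid water.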

  2. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
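
    The core idea — sample jointly over model parameters and an explicit structural-error term — can be caricatured with a minimal Metropolis sampler. This is a toy stand-in: the paper uses DREAM-ZS with nonparametric kernel error models, which is far more sophisticated than the random-walk sketch below.

```python
import math
import random

def metropolis_with_error_term(loglike, n_iter=2000, step=0.2, seed=1):
    """Minimal Metropolis sampler over theta = (model_param, bias),
    where the bias component is a crude placeholder for a structural-
    error model.  loglike maps theta to a log-likelihood."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    current = loglike(theta)
    samples = []
    for _ in range(n_iter):
        proposal = [t + rng.gauss(0.0, step) for t in theta]
        candidate = loglike(proposal)
        # Accept with probability min(1, exp(candidate - current)).
        if rng.random() < math.exp(min(0.0, candidate - current)):
            theta, current = proposal, candidate
        samples.append(list(theta))
    return samples
```

    The key property the paper exploits is visible even here: when the likelihood rewards a nonzero bias term, the sampler shifts probability mass onto the error model instead of distorting the physical parameter, which is exactly how parameter compensation is avoided.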

  3. Synchronizing Two AGCMs via Ocean-Atmosphere Coupling (Invited)

    NASA Astrophysics Data System (ADS)

    Kirtman, B. P.

    2009-12-01

    A new approach for fusing or synchronizing two very different Atmospheric General Circulation Models (AGCMs) is described. The approach is also well suited for understanding why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. 
Based on these sensitivity experiments we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans, suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA), and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.

  4. A New Approach for Coupled GCM Sensitivity Studies

    NASA Astrophysics Data System (ADS)

    Kirtman, B. P.; Duane, G. S.

    2011-12-01

    A new multi-model approach for coupled GCM sensitivity studies is presented. The purpose of the sensitivity experiments is to understand why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. 
Based on these sensitivity experiments we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans, suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA), and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.

  5. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent error of the mobile phone application and the oculoplastic surgeons' estimates were calculated compared with computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeon visual interpretations were highly inaccurate, highly variable, and usually underestimated the field vision loss.

  6. Experimental determination of solvent-water partition coefficients and Abraham parameters for munition constituents.

    PubMed

    Liang, Yuzhen; Kuo, Dave T F; Allen, Herbert E; Di Toro, Dominic M

    2016-10-01

    There is concern about the environmental fate and effects of munition constituents (MCs). Polyparameter linear free energy relationships (pp-LFERs) that employ Abraham solute parameters can aid in evaluating the risk of MCs to the environment. However, poor predictions using pp-LFERs and ABSOLV estimated Abraham solute parameters are found for some key physico-chemical properties. In this work, the Abraham solute parameters are determined using experimental partition coefficients in various solvent-water systems. The compounds investigated include hexahydro-1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), octahydro-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX), hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine (MNX), hexahydro-1,3,5-trinitroso-1,3,5-triazine (TNX), hexahydro-1,3-dinitroso-5- nitro-1,3,5-triazine (DNX), 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitrobenzene (TNB), and 4-nitroanisole. The solvents in the solvent-water systems are hexane, dichloromethane, trichloromethane, octanol, and toluene. The only available reported solvent-water partition coefficients are for octanol-water for some of the investigated compounds and they are in good agreement with the experimental measurements from this study. Solvent-water partition coefficients fitted using experimentally derived solute parameters from this study have significantly smaller root mean square errors (RMSE = 0.38) than predictions using ABSOLV estimated solute parameters (RMSE = 3.56) for the investigated compounds. Additionally, the predictions for various physico-chemical properties using the experimentally derived solute parameters agree with available literature reported values with prediction errors within 0.79 log units except for water solubility of RDX and HMX with errors of 1.48 and 2.16 log units respectively. However, predictions using ABSOLV estimated solute parameters have larger prediction errors of up to 7.68 log units. 
This large discrepancy is probably due to the missing R2NNO2 and R2NNO2 functional groups in the ABSOLV fragment database. Copyright © 2016. Published by Elsevier Ltd.
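
    The pp-LFER at the center of this work is the standard Abraham equation. A minimal sketch of evaluating it once system constants and solute descriptors are in hand (the coefficient values below in the test are arbitrary, not fitted constants from the study):

```python
def pp_lfer_log_k(coeffs, solute):
    """Abraham-type polyparameter LFER:
    log K = c + e*E + s*S + a*A + b*B + v*V,
    where coeffs = (c, e, s, a, b, v) are the system constants for one
    solvent-water system and solute = (E, S, A, B, V) are the five
    Abraham solute descriptors."""
    c, e, s, a, b, v = coeffs
    E, S, A, B, V = solute
    return c + e * E + s * S + a * A + b * B + v * V
```

    Fitting measured partition coefficients in several solvent-water systems to this equation is how the study back-solves experimentally grounded solute descriptors for the munition constituents, replacing the poorly performing ABSOLV fragment estimates.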

  7. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. 
The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
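As an illustration of the circle-fitting approach discussed in this record, the sketch below fits a circle to a drop profile by algebraic least squares and reads off the contact angle where the circle meets the baseline. This is a minimal illustration, not the authors' implementation; the noise-free spherical-cap test profile is synthetic.

```python
import numpy as np

def fit_circle(x, y):
    # Algebraic (Kasa) least-squares circle fit:
    # x^2 + y^2 = 2*xc*x + 2*yc*y + c, with r^2 = c + xc^2 + yc^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return xc, yc, np.sqrt(c + xc**2 + yc**2)

def contact_angle_deg(x, y, baseline_y=0.0):
    # For a circular-cap profile truncated by the baseline,
    # cos(theta) = (yc - baseline_y) / r at the contact point.
    _, yc, r = fit_circle(x, y)
    return np.degrees(np.arccos(np.clip((yc - baseline_y) / r, -1.0, 1.0)))

# Synthetic noise-free 60-degree spherical-cap profile (unit radius):
# circle centred at (0, cos(theta)), sampled from apex to contact point
theta = np.radians(60.0)
phi = np.linspace(0.0, np.pi - theta, 50)
x = np.sin(phi)
y = np.cos(theta) + np.cos(phi)
print(round(contact_angle_deg(x, y), 3))  # -> 60.0
```

With noisy profiles the same fit degrades for large drops, which is exactly the regime where the record recommends switching algorithms.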

  8. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, a theoretical analysis of statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out, using a variable three-step algorithm as an example. In a second step, the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.

  9. Hydrologic Record Extension of Water-Level Data in the Everglades Depth Estimation Network (EDEN) Using Artificial Neural Network Models, 2000-2006

    USGS Publications Warehouse

    Conrads, Paul; Roehl, Edwin A.

    2007-01-01

The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science program supports EDEN and its goal of providing quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations into the 7-year EDEN database for the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R2) for 21 of the 25 estimates were greater than 0.95, and all 25 estimates were greater than 0.82. The model estimates showed good agreement with the measured data.
For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
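The percent model error used above (root mean square error divided by the range of the measured data) is straightforward to compute; a small sketch with invented water-level numbers:

```python
import numpy as np

def percent_model_error(observed, simulated):
    # RMSE of the simulation, normalised by the range of the observed
    # record and expressed as a percentage (the EDEN QA metric).
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

obs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # hypothetical water levels, m
sim = obs + 0.02                           # simulation with a 2 cm bias
print(round(percent_model_error(obs, sim), 2))  # -> 1.0
```

Normalising by the observed range makes errors comparable across stations with very different water-level variability.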

  10. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation, and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, in which these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, will be highlighted.
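The hit/missed/false decomposition described above can be sketched as follows. This is a simplified illustration with invented rain amounts; the actual products work on gridded fields with detection thresholds.

```python
import numpy as np

def decompose_error(sat, gauge, threshold=0.0):
    # Split the total bias (satellite minus gauge) into three additive parts:
    #   hit bias - both detect precipitation but the amounts differ
    #   missed   - gauge observes precipitation, satellite reports none
    #   false    - satellite reports precipitation, gauge observes none
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    s_rain, g_rain = sat > threshold, gauge > threshold
    hit = float(np.sum((sat - gauge)[s_rain & g_rain]))
    missed = -float(np.sum(gauge[~s_rain & g_rain]))
    false = float(np.sum(sat[s_rain & ~g_rain]))
    return hit, missed, false  # the three parts sum to sat.sum() - gauge.sum()

sat = np.array([2.0, 0.0, 1.0, 0.0])    # invented satellite amounts
gauge = np.array([1.5, 2.0, 0.0, 0.0])  # invented gauge amounts
print(decompose_error(sat, gauge))  # -> (0.5, -2.0, 1.0)
```

Note how the small total bias (-0.5) hides much larger missed and false components, which is the cancellation the record warns about.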

  11. Estimating Water and Heat Fluxes with a Four-dimensional Weak-constraint Variational Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Bateni, S. M.; Xu, T.

    2015-12-01

Accurate estimation of water and heat fluxes is required for irrigation scheduling, weather prediction, and water resources planning and management. A weak-constraint variational data assimilation (WC-VDA) scheme is developed to estimate water and heat fluxes by assimilating sequences of land surface temperature (LST) observations. The commonly used strong-constraint VDA systems adversely affect the accuracy of water and heat flux estimates because they assume the model is perfect. The WC-VDA approach accounts for structural and model errors and generates more accurate results by adding a model error term to the surface energy balance equation. The two key unknown parameters of the WC-VDA system (i.e., CHN, the bulk heat transfer coefficient, and EF, the evaporative fraction) and the model error term are optimized by minimizing the cost function. The WC-VDA model was tested at two sites with contrasting hydrological and vegetative conditions: the Daman site (a wet site located in an oasis area and covered by seeded corn) and the Huazhaizi site (a dry site located in a desert area and covered by sparse grass), both in the middle reaches of the Heihe River basin, northwest China. Compared to the strong-constraint VDA system, the WC-VDA method generates more accurate estimates of water and energy fluxes over both the dry desert site and the wet oasis site.

  12. Refractive Errors in Northern China Between the Residents with Drinking Water Containing Excessive Fluorine and Normal Drinking Water.

    PubMed

    Bin, Ge; Liu, Haifeng; Zhao, Chunyuan; Zhou, Guangkai; Ding, Xuchen; Zhang, Na; Xu, Yongfang; Qi, Yanhua

    2016-10-01

    The purpose of this study was to evaluate the refractive errors and the demographic associations between drinking water with excessive fluoride and normal drinking water among residents in Northern China. Of the 1843 residents, 1415 (aged ≥40 years) were divided into drinking-water-excessive fluoride (DWEF) group (>1.20 mg/L) and control group (≤1.20 mg/L) on the basis of the fluoride concentrations in drinking water. Of the 221 subjects in the DWEF group, with 1.47 ± 0.25 mg/L (fluoride concentrations in drinking water), the prevalence rates of myopia, hyperopia, and astigmatism were 38.5 % (95 % confidence interval [CI] = 32.1-45.3), 19.9 % (95 % CI = 15-26), and 41.6 % (95 % CI = 35.1-48.4), respectively. Of the 1194 subjects in the control group with 0.20 ± 0.18 mg/L, the prevalence of myopia, hyperopia, and astigmatism were 31.5 % (95 % CI = 28.9-34.2), 27.6 % (95 % CI = 25.1-30.3), and 45.6 % (95 % CI = 42.8-48.5), respectively. A statistically significant difference was not observed in the association of spherical equivalent and fluoride concentrations in drinking water (P = 0.84 > 0.05). This report provides the data of the refractive state of the residents consuming drinking water with excess amounts of fluoride in northern China. The refractive errors did not result from ingestion of mild excess amounts of fluoride in the drinking water.

  13. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    USGS Publications Warehouse

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. 
This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting that the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimates while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables land surface temperature (LST) and reference ET (ETo) and to the parameters differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in the two input variables (ETo and LST) and two key parameters (Kmax and dT).

  14. Application of receptor models on water quality data in source apportionment in Kuantan River Basin

    PubMed Central

    2012-01-01

Recent techniques in the management of surface river water have increased the demand for methods that can more representatively characterize multivariate data sets. Properly configured artificial neural network (ANN) and multiple linear regression (MLR) models provide advanced tools for surface water modeling and forecasting. Receptor models were developed to determine the major sources of pollutants in the Kuantan River Basin, Malaysia. Thirteen water quality parameters were used in principal component analysis (PCA), and new variables representing fertilizer waste, surface runoff, anthropogenic input, chemical and mineral changes, and erosion were successfully developed for modeling purposes. Two models were compared in terms of efficiency and goodness-of-fit for water quality index (WQI) prediction. The results show that the APCS-ANN model gives better performance, with a high R2 value (0.9680) and a small root mean square error (RMSE) value (2.6409), compared to the APCS-MLR model. Meanwhile, the sensitivity analysis shows that fertilizer waste is the dominant pollutant contributor (59.82%) to the basin studied, followed by anthropogenic input (22.48%), surface runoff (13.42%), erosion (2.33%) and, lastly, chemical and mineral changes (1.95%). Thus, this study concluded that APCS-ANN receptor modeling can be used to relate water quality variables to their pollution sources in support of appropriate water quality management. PMID:23369363
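The PCA step underlying the APCS models can be illustrated with a minimal sketch (not the authors' code; the four-parameter data matrix is synthetic and stands in for correlated water-quality measurements from two pollution sources):

```python
import numpy as np

def principal_components(X, k):
    # Standardise columns, then take the SVD of the z-scored matrix;
    # rows of Vt are component loadings, squared singular values give
    # the variance explained by each component.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / np.sum(s**2)  # fraction of total variance per PC
    return Z @ Vt[:k].T, Vt[:k], explained[:k]

rng = np.random.default_rng(1)
# 50 samples x 4 synthetic "water quality parameters": two correlated
# pairs, mimicking two underlying sources
a = rng.normal(size=(50, 1))
b = rng.normal(size=(50, 1))
X = np.hstack([a, a + 0.1 * rng.normal(size=(50, 1)),
               b, b + 0.1 * rng.normal(size=(50, 1))])
scores, loadings, var_frac = principal_components(X, 2)
print(var_frac.sum() > 0.9)  # -> True: two components capture the matrix
```

In the APCS approach, the rotated component scores would then feed the ANN or MLR model as source-related predictors.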

  15. Density currents in the Chicago River: Characterization, effects on water quality, and potential sources

    USGS Publications Warehouse

    Jackson, P. Ryan; Garcia, Carlos M.; Oberg, Kevin A.; Johnson, Kevin K.; Garcia, Marcelo H.

    2008-01-01

Bidirectional flows in a river system can occur under stratified flow conditions, and in addition to creating significant errors in discharge estimates, the upstream-propagating currents are capable of transporting contaminants and affecting water quality. Detailed field observations of bidirectional flows were made in the Chicago River in Chicago, Illinois, in the winter of 2005-06. Using multiple acoustic Doppler current profilers simultaneously with a water-quality profiler, the formation of upstream-propagating density currents within the Chicago River, both as an underflow and as an overflow, was observed on three occasions. Density differences driving the flow arise primarily from salinity differences between intersecting branches of the Chicago River, whereas water temperature is secondary in the creation of these currents. Deicing salts appear to be the primary source of salinity in the North Branch of the Chicago River, entering the waterway through direct runoff and effluent from a wastewater-treatment plant in a large metropolitan area primarily served by combined sewers. Water-quality assessments of the Chicago River may underestimate (or overestimate) the impairment of the river because standard water-quality monitoring practices do not account for density-driven underflows (or overflows). Chloride concentrations near the riverbed can significantly exceed concentrations at the river surface during underflows, indicating that full-depth parameter profiles are necessary for accurate water-quality assessments in urban environments where application of deicing salt is common.

  16. Deriving depths of deep chlorophyll maximum and water inherent optical properties: A regional model

    NASA Astrophysics Data System (ADS)

    Xiu, Peng; Liu, Yuguang; Li, Gang; Xu, Qing; Zong, Haibo; Rong, Zengrui; Yin, Xiaobin; Chai, Fei

    2009-10-01

The Bohai Sea is a semi-enclosed inland sea with case-2 waters near the coast. A comprehensive set of optical data was collected during three cruises in June, August, and September 2005 in the Bohai Sea. The vertical profile measurements, such as chlorophyll concentration, water turbidity, downwelling irradiance, and diffuse attenuation coefficient, showed that the Bohai Sea was vertically stratified, with a relatively clear upper layer superimposed on a turbid lower layer. The upper layer was found to correspond to the euphotic zone, and the deep chlorophyll maximum (DCM) occurs at the base of this layer. By tuning a semi-analytical model (Lee et al., 1998, 1999) for the Bohai Sea, we developed a method to derive water inherent optical properties and the depth of the DCM from above-surface measurements. Assuming a 'fake' bottom in the stratified water, this new method retrieves the 'fake' bottom depth, which is highly correlated with the DCM depth. The average relative error between derived and measured values is 33.9% for phytoplankton absorption at 440 nm, 25.6% for colored detrital matter (detritus plus gelbstoff) absorption at 440 nm, and 24.2% for the DCM depth. This modified method can retrieve water inherent optical properties and monitor the depth of the DCM in the Bohai Sea, and it is also applicable to other stratified waters.

  17. Katherine Young, P.E. | NREL

    Science.gov Websites

Research interests: water rights and resources engineering; database planning and development; groundwater modeling; quantitative methods in water resource engineering; applying lean principles to streamline exploration and drilling and reduce error and risk.

  18. Improving Water Quality Assessments through a HierarchicalBayesian Analysis of Variability

    EPA Science Inventory

    Water quality measurement error and variability, while well-documented in laboratory-scale studies, is rarely acknowledged or explicitly resolved in most water body assessments, including those conducted in compliance with the United States Environmental Protection Agency (USEPA)...

  19. 78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...

  20. Evaluation of the apparent losses caused by water meter under-registration in intermittent water supply.

    PubMed

    Criminisi, A; Fontanazza, C M; Freni, G; Loggia, G La

    2009-01-01

Apparent losses are usually caused by water theft, billing errors, or revenue meter under-registration. While the first two causes are directly related to water utility management and may be reduced by improving company procedures, water meter inaccuracies are considered to be the most significant and the hardest to quantify. Water meter errors are amplified in networks subject to water scarcity, where users adopt private storage tanks to cope with the intermittent water supply. The aim of this paper is to analyse the roles of two variables influencing apparent losses: water meter age and the effect of private storage tanks on meter performance. The study was carried out in Palermo (Italy). The impact of water meter ageing was evaluated in the laboratory by testing 180 revenue meters ranging from 0 to 45 years in age. The effects of the private water tanks were determined via field monitoring of real users and a mathematical model. This study demonstrates that apparent losses related to the meter starting flow increase rapidly with meter age. Private water tanks, usually fed by a float valve, exacerbate meter under-registration, producing additional apparent losses of between 15% and 40% for the users analysed in this study.

  1. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders, demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determining the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  2. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  3. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Leen, J.; Berman, Elena S. F.; Gupta, Manish

Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ²H and δ¹⁸O measurement errors (Δδ²H and Δδ¹⁸O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, m_BB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, m_NB. These metrics are used to correct for Δδ²H and Δδ¹⁸O. The method was tested on 14 instruments, and Δδ¹⁸O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ²H scaled linearly with narrowband absorbers and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with m_NB. Using the isotope error versus m_NB and m_BB curves, Δδ²H and Δδ¹⁸O resulting from methanol contamination were corrected to maximum mean absolute errors of 0.93 per mille and 0.25 per mille, respectively, while Δδ²H and Δδ¹⁸O from ethanol contamination were corrected to maximum mean absolute errors of 1.22 per mille and 0.22 per mille. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer.
These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.

  4. Parameterization of bulk condensation in numerical cloud models

    NASA Technical Reports Server (NTRS)

    Kogan, Yefim L.; Martin, William J.

    1994-01-01

The accuracy of the moist saturation adjustment scheme has been evaluated using a three-dimensional explicit microphysical cloud model. It was found that the error in saturation adjustment depends strongly on the cloud condensation nuclei (CCN) concentration in the ambient atmosphere. The scheme provides rather accurate results when a sufficiently large number of CCN (on the order of several hundred per cubic centimeter) is available. However, under conditions typical of marine stratocumulus cloud layers with low CCN concentrations, the error in the amounts of condensed water vapor and released latent heat may be as large as 40%-50%. A revision of the saturation adjustment scheme is devised that employs the CCN concentration, dynamical supersaturation, and cloud water content as additional variables in the calculation of the condensation rate. The revised condensation model reduced the error in maximum updraft and cloud water content in the climatically significant case of marine stratocumulus cloud layers by an order of magnitude.
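A classical isobaric saturation adjustment, of the kind evaluated in this record, can be sketched as follows. This is a textbook-style illustration using the Tetens formula and a fixed L/cp, not the authors' revised CCN-dependent scheme; the parcel state is hypothetical.

```python
import math

def saturation_mixing_ratio(T, p):
    # Saturation mixing ratio (kg/kg) via the Tetens formula;
    # T in kelvin, p in hPa, saturation vapour pressure es in hPa.
    es = 6.112 * math.exp(17.67 * (T - 273.15) / (T - 29.65))
    return 0.622 * es / (p - es)

def saturation_adjust(T, qv, qc, p, iterations=5):
    # Isobaric moist saturation adjustment: condense (or evaporate)
    # just enough water to leave the parcel exactly saturated,
    # updating temperature with the latent heat released.
    Lcp = 2.5e6 / 1004.0  # L/cp in K per (kg/kg)
    for _ in range(iterations):
        qs = saturation_mixing_ratio(T, p)
        dqs_dT = qs * 17.67 * 243.5 / (T - 29.65) ** 2  # Clausius-Clapeyron
        dq = (qv - qs) / (1.0 + Lcp * dqs_dT)           # linearised excess
        dq = max(dq, -qc)   # cannot evaporate more cloud water than exists
        qv, qc, T = qv - dq, qc + dq, T + Lcp * dq
    return T, qv, qc

# Supersaturated parcel: 285 K, qv = 12 g/kg, no cloud water, 900 hPa
T, qv, qc = saturation_adjust(285.0, 0.012, 0.0, 900.0)
```

The scheme condenses all vapour in excess of saturation in one time step regardless of CCN, which is precisely the assumption the record shows to fail at low CCN concentrations.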

  5. Digital Paper Technologies for Topographical Applications

    DTIC Science & Technology

    2011-09-19

The measures examined were training time for each method, time for entry of features, procedural errors, handwriting recognition errors, and user preference. For these metrics, temporal association was... checkbox, text restricted to a specific list of values, etc.) that provides constraints to the handwriting recognizer. When the user fills out the form...

  6. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
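The attenuation of autoregressive estimates by measurement error, the central point of this record, is easy to demonstrate by simulation (a sketch, not the authors' Bayesian fitting procedure; parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.7, 200_000

# Latent AR(1) process observed with additive white measurement noise
# (the AR+WN structure discussed in the record)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(size=n)  # noise variance 1 on top of var(x) ~ 1.96

def lag1(z):
    # Naive lag-1 autocorrelation estimate
    z = z - z.mean()
    return float(z[:-1] @ z[1:] / (z @ z))

est_latent = lag1(x)  # close to the true phi = 0.7
est_noisy = lag1(y)   # attenuated toward zero, roughly phi * 1.96/2.96
```

Here about a third of the observed variance is measurement error, and the naive estimate shrinks accordingly, matching the underestimation the record reports.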

  7. Estimating groundwater evapotranspiration by a subtropical pine plantation using diurnal water table fluctuations: Implications from night-time water use

    NASA Astrophysics Data System (ADS)

    Fan, Junliang; Ostergaard, Kasper T.; Guyot, Adrien; Fujiwara, Stephen; Lockington, David A.

    2016-11-01

Exotic pine plantations have replaced large areas of native forest for timber production in subtropical coastal Australia. To evaluate potential impacts of changes in vegetation on local groundwater discharge, we estimated groundwater evapotranspiration (ETg) by the pine plantation using diurnal water table fluctuations for the dry season of 2012, from August 1st to December 31st. The modified White method was used to estimate ETg, accounting for night-time water use by pine trees (Tn). Depth-dependent specific yields were also determined, both experimentally and numerically, for the estimation of ETg. Night-time water use by pine trees was comprehensively investigated using a combination of groundwater level, sap flow, tree growth, specific yield, soil matric potential, and climatic variable measurements. Results reveal a constant average transpiration flux of 0.02 mm h-1 at the plot scale from 23:00 to 05:00 during the study period, which verified the presence of night-time water use. The total ETg for the period investigated was 259.0 mm, with an accumulated Tn of 64.5 mm, resulting in an error of 25% in accumulated evapotranspiration from the groundwater if night-time water use were neglected. The results indicate that the development of commercial pine plantations may result in groundwater losses in these areas. It is also recommended that any future application of methods based on diurnal water table fluctuations investigate the validity of the zero night-time water use assumption prior to use.
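The White (1932) water-table fluctuation method underlying this record can be sketched as follows. The numbers are hypothetical, and the additive night-time-use correction is an assumption for illustration, not necessarily how the authors modified the method.

```python
def white_method_etg(sy, recovery_rate, net_decline, night_use=0.0):
    # White (1932): ETg = Sy * (24 * r + delta_s), where r (m/h) is the
    # pre-dawn water-table recovery rate (when ET is assumed negligible)
    # and delta_s (m) is the net water-table decline over the day.
    # night_use (m/day) adds back night-time transpiration rather than
    # assuming zero night use (a simplified, illustrative correction).
    return sy * (24.0 * recovery_rate + net_decline) + night_use

# Hypothetical day: Sy = 0.12, pre-dawn recovery 1.5 mm/h,
# net decline 20 mm, night-time water use 0.48 mm/day
etg = white_method_etg(0.12, 0.0015, 0.020, 0.00048)
print(round(etg * 1000, 2))  # -> 7.2 (mm/day)
```

Neglecting the night-use term in this toy example lowers the estimate by about 7%, illustrating (on a much smaller scale) the 25% seasonal error the record reports.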

  8. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers, ASCE, Reston, VA, USA (2016).

  9. Reservoir water level forecasting using group method of data handling

    NASA Astrophysics Data System (ADS)

    Zaji, Amir Hossein; Bonakdari, Hossein; Gharabaghi, Bahram

    2018-06-01

    Accurately forecast reservoir water levels are among the most vital data for efficient reservoir design and management. In this study, the group method of data handling is combined with the minimum description length method to develop a practical and functional model for predicting reservoir water levels. The models' performance is evaluated using two groups of input combinations, based on recent days and recent weeks; four input combinations are considered in total. Data collected from the Chahnimeh#1 Reservoir in eastern Iran are used for model training and validation. To assess the models' applicability in practical situations, the models are made to predict a non-observed dataset for the nearby Chahnimeh#4 Reservoir. According to the results, the input combinations (L, L-1) and (L, L-1, L-12) for recent days, with root-mean-square errors (RMSE) of 0.3478 and 0.3767, respectively, outperform the input combinations (L, L-7) and (L, L-7, L-14) for recent weeks, with RMSE of 0.3866 and 0.4378, respectively. Accordingly, (L, L-1) is selected as the best input combination for making 7-day-ahead predictions of reservoir water levels.
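
    As a rough illustration of how lag-based input combinations are built and scored, the sketch below substitutes plain least squares for the paper's GMDH network and runs on synthetic data; only the bookkeeping of lags, forecast horizon, and RMSE is the point, not the model itself.

```python
# Illustrative only: ordinary least squares stands in for the GMDH
# model used in the paper, just to show how input combinations such as
# (L, L-1) vs (L, L-7) are assembled and compared by RMSE.
import numpy as np

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.1, 400)) + 10.0  # synthetic daily levels

def rmse_for_lags(series, lags, horizon=7):
    """Fit y[t+horizon] from series[t-lag] features; return RMSE."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - lag:len(series) - horizon - lag]
                         for lag in lags])
    A = np.column_stack([X, np.ones(len(X))])      # add intercept column
    y = series[max_lag + horizon:]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((y - A @ coef) ** 2)))

daily = rmse_for_lags(level, lags=[0, 1])          # (L, L-1)
weekly = rmse_for_lags(level, lags=[0, 7])         # (L, L-7)
print(daily, weekly)
```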

  10. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak-wind (less than 7 m/s) conditions; such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introducing these errors into the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms that redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's role in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to conditions during an El Nino episode. Similar wind-direction errors cause significant changes in sea-surface temperature and sea-level patterns in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of that improvement was lost.

  11. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    USDA-ARS?s Scientific Manuscript database

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...

  12. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.

  13. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  14. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable, both for an individual subject and as the average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
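
    A minimal bootstrap version of the problem described above might look like the following; the data, sample size, and evaluation point are invented, and the paper's Stata/LIMDEP code is not reproduced here.

```python
# Bootstrap sketch of the paper's point: the standard error of a
# function of estimated parameters (here, an OLS predicted value) can
# be obtained by resampling. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(0, 1, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)

def predicted_at(xs, ys, x0):
    """OLS fit, then the predicted value of y at x = x0."""
    b1 = np.cov(xs, ys, bias=True)[0, 1] / np.var(xs)
    b0 = ys.mean() - b1 * xs.mean()
    return b0 + b1 * x0

# Refit on 500 resampled datasets; the spread of the refitted
# predictions estimates the standard error of the prediction.
reps = [predicted_at(x[idx], y[idx], x0=1.0)
        for idx in (rng.integers(0, n, n) for _ in range(500))]
boot_se = float(np.std(reps, ddof=1))
print(round(boot_se, 3))
```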

  15. Estimation of wave phase speed and nearshore bathymetry from video imagery

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.

    2000-01-01

    A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≈ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
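
    The depth-inversion step relies on the linear dispersion relation w^2 = g*k*tanh(k*h): with wave frequency and cross-shore wavenumber observed from the video, depth h is the remaining unknown. A hedged sketch of that inversion, with illustrative wave parameters rather than SandyDuck values, is:

```python
# Solve the linear dispersion relation g*k*tanh(k*h) = omega**2 for
# water depth h by bisection. Wave parameters below are illustrative.
import math

def depth_from_dispersion(omega, k, g=9.81, h_max=50.0):
    """Invert g*k*tanh(k*h) = omega**2 for h on (0, h_max]."""
    f = lambda h: g * k * math.tanh(k * h) - omega ** 2
    lo, hi = 1e-6, h_max
    for _ in range(80):                 # bisection to high precision
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# A 10 s swell (omega = 2*pi/10) with a wavenumber chosen to be
# roughly consistent with a 3 m depth:
omega = 2 * math.pi / 10.0
k_true = 0.1180  # rad/m
h = depth_from_dispersion(omega, k_true)
print(round(h, 2))
```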

  16. Moving horizon estimation for assimilating H-SAF remote sensing data into the HBV hydrological model

    NASA Astrophysics Data System (ADS)

    Montero, Rodolfo Alvarado; Schwanenberg, Dirk; Krahe, Peter; Lisniak, Dmytro; Sensoy, Aynur; Sorman, A. Arda; Akkol, Bulut

    2016-06-01

    Remote sensing products have developed rapidly over the past few years, including spatially distributed data at high resolution for hydrological applications. The implementation of these products in operational flow forecasting systems is still an active field of research, in which data assimilation plays a vital role in improving the initial conditions of streamflow forecasts. We present a novel implementation of a variational method based on Moving Horizon Estimation (MHE), applied to the conceptual rainfall-runoff model HBV, to simultaneously assimilate remotely sensed snow covered area (SCA), snow water equivalent (SWE), soil moisture (SM) and in situ measurements of streamflow using large assimilation windows of up to one year. This application of the MHE approach simultaneously updates precipitation, temperature, soil moisture, and the upper- and lower-zone water storages of the conceptual model within the assimilation window, without an explicit formulation of error covariance matrices, and it enables a highly flexible formulation of distance metrics for the agreement of simulated and observed variables. The framework is tested at two data-dense sites in Germany and in one data-sparse environment in Turkey. Results show a potential improvement in the lead-time performance of streamflow forecasts when using perfect time series of state variables generated by the simulation of the conceptual rainfall-runoff model itself. The framework is also tested using new operational data products from the Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF) of EUMETSAT. This study is the first application of H-SAF products to hydrological forecasting systems, and it verifies their added value. Results from assimilating H-SAF observations show a slight reduction of streamflow forecast skill in all three cases compared with the assimilation of streamflow data only. On the other hand, the forecast skill of soil moisture shows a significant improvement.

  17. Parameter estimation for a mixed hydrological model applied to the Bolivian Altiplano region

    NASA Astrophysics Data System (ADS)

    Gárfias, Jaime; Verrette, Jean-Louis; Antigüedad, Iñaki; André, Cécile

    1996-03-01

    This paper discusses the development and application of a technique that permits the analysis and improvement of hydrological models for managing the water resources of complex systems. Because such models are intended for practical application, the model was applied to the conditions of the Bolivian highlands. The model consisted of a deterministic part (the HEC-1 model) linked to a stochastic component. The experience acquired indicated the possibility of adapting a more general procedure to compensate for the lack of rigour in the homoscedasticity and independence hypotheses for the residuals. Use of this concept improved the estimation accuracy of the parameters and provided independent residuals with constant variance. A Box-Cox transformation was used to stabilize the error variance, and an autoregressive model was used to remove autocorrelation in the residuals.
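
    The residual treatment described above (variance stabilization followed by autocorrelation removal) can be sketched as below. The series, the AR(1) coefficient, and the fixed Box-Cox lambda are all invented for illustration; a full implementation would estimate lambda by maximum likelihood.

```python
# Box-Cox transform to stabilise variance, then an AR(1) fit to remove
# autocorrelation from the residuals. Synthetic data throughout.
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

rng = np.random.default_rng(2)
e = np.zeros(300)
for t in range(1, 300):                    # AR(1) errors with phi = 0.7
    e[t] = 0.7 * e[t - 1] + rng.normal(0, 0.2)
flows = np.exp(1.0 + e)                    # positive, skewed residual series

z = boxcox(flows, lam=0.0)                 # stabilise the variance
zc = z - z.mean()
phi = float(np.sum(zc[1:] * zc[:-1]) / np.sum(zc[:-1] ** 2))
innov = zc[1:] - phi * zc[:-1]             # whitened innovations

def lag1_corr(v):
    v = v - v.mean()
    return float(np.sum(v[1:] * v[:-1]) / np.sum(v * v))

# Autocorrelation before vs after removing the AR(1) structure:
print(round(lag1_corr(z), 2), round(lag1_corr(innov), 2))
```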

  18. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  19. Promoting inclusive water governance and forecasting the structure of water consumption based on compositional data: A case study of Beijing.

    PubMed

    Wei, Yigang; Wang, Zhichao; Wang, Huiwen; Yao, Tang; Li, Yan

    2018-09-01

    Water is centrally important for agricultural security, the environment, people's livelihoods, and socio-economic development, particularly in the face of extreme climate changes. Due to water shortages in many cities, conflicts between various stakeholders and sectors over water use and allocation are becoming more common and intense. Effective inclusive governance of water use is critical for relieving these conflicts, and reliable forecasting of the structure of water usage among different sectors is a basic need for effective water governance planning. Although a large number of studies have attempted to forecast water use, little is known about the future structure and trends of water use. This paper aims to develop a forecasting model for the structure of water usage based on compositional data. Compositional data analysis is an effective approach for investigating the internal structure of a system. A host of data transformation methods and forecasting models were adopted and compared in order to derive the best-performing model. According to the mean absolute percent error for compositional data (CoMAPE), a hyperspherical-transformation-based vector autoregression model for compositional data (VAR-DRHT) is the best-performing model. The proportions of agricultural, industrial, domestic and environmental water will be 6.11%, 5.01%, 37.48% and 51.4% by 2020. Several recommendations for inclusive water development are provided to support the optimization of the water use structure, the alleviation of water shortages, and the improvement of stakeholders' wellbeing. Overall, although we focus on groundwater, this study presents a powerful framework broadly applicable to resource management. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Lagrangian water mass tracing from pseudo-Argo, model-derived salinity, tracer and velocity data: An application to Antarctic Intermediate Water in the South Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Blanke, Bruno; Speich, Sabrina; Rusciano, Emanuela

    2015-01-01

    We use the tracer and velocity fields of a climatological ocean model to investigate the ability of Argo-like data to accurately estimate water mass movements and transformations, in the style of analyses commonly applied to the output of ocean general circulation models. To this end, we introduce an algorithm for the reconstruction of a fully non-divergent three-dimensional velocity field from knowledge of only the model vertical density profiles and the 1000-m horizontal velocity components. The validation of the technique consists of comparing the resulting pathways for Antarctic Intermediate Water in the South Atlantic Ocean with equivalent reference results based on the full model information available for velocity and tracers. We show that the inclusion of wind-induced Ekman pumping and of a well-thought-out expression for vertical velocity at the level of the intermediate waters is essential for reliable reproduction of quantitative Lagrangian analyses. Neglecting the seasonal variability of the velocity and tracer fields is not a significant source of error, at least well below the permanent thermocline. These results give us confidence in the success of adapting the algorithm to true gridded Argo data for investigating the dynamics of flows in the ocean interior.

  1. Effect of Viscosity on the Crystallization of Undercooled Liquids

    NASA Technical Reports Server (NTRS)

    2003-01-01

    There have been numerous studies of glasses indicating that low-gravity processing enhances glass formation. NASA PIs are investigating the effect of low-g processing on nucleation and crystal growth rates. Dr. Ethridge is investigating a potential mechanism for glass crystallization involving shear thinning of liquids in 1-g; for shear-thinning liquids, low-g (low-convection) processing will enhance glass formation. The study of the viscosity of glass-forming substances at low shear rates is important for understanding these new crystallization mechanisms. The temperature dependence of the viscosity of undercooled liquids is also very important for NASA's containerless processing studies. In general, the viscosity of undercooled liquids is not known, yet knowledge of viscosity is required for crystallization calculations. Many researchers have used the Turnbull equation in error, and the resulting nucleation and crystallization calculations can be wrong by many orders of magnitude. This demonstrates the need for better methods of interpolating and extrapolating the viscosity of undercooled liquids. The same is true for undercooled water: since amorphous water ice is the predominant form of water in the universe, astrophysicists have modeled the crystallization of amorphous water ice with viscosity relations that may be in error by five orders of magnitude.

  2. Effect of Water Immersion on Dual-task Performance: Implications for Aquatic Therapy.

    PubMed

    Schaefer, Sydney Y; Louder, Talin J; Foster, Shayla; Bressel, Eadric

    2016-09-01

    Much is known about cardiovascular and biomechanical responses to exercise during water immersion, yet the higher-order neural responses to water immersion remain unclear. The purpose of this study was to compare cognitive and motor performance between land and water environments using a dual-task paradigm, which served as an indirect measure of cortical processing. A quasi-experimental crossover research design was used. Twenty-two healthy participants (age = 24.3 ± 5.24 years) and a single-case patient (age = 73) with mild cognitive impairment performed a cognitive (auditory vigilance) task and a motor (standing balance) task separately (single-task condition) and simultaneously (dual-task condition), on land and in chest-deep water. Listening errors on the auditory vigilance task and centre of pressure (CoP) area for the balance task measured cognitive and motor performance, respectively. Listening errors for the single-task and dual-task conditions were 42% and 45% lower for the water than the land condition, respectively (effect size [ES] = 0.38 and 0.55). CoP areas for the single-task and dual-task conditions, however, were 115% and 164% lower on land than in water, respectively, and were lower (≈8-33%) when balancing concurrently with the auditory vigilance task compared with balancing alone, regardless of environment (ES = 0.23-1.7). This trend was consistent for the single-case patient. Participants tended to make fewer 'cognitive' errors while immersed chest-deep in water than on land. These same participants also tended to display less postural sway under dual-task conditions, but more sway in water than on land. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Quantitation Error in 1H MRS Caused by B1 Inhomogeneity and Chemical Shift Displacement.

    PubMed

    Watanabe, Hidehiro; Takaya, Nobuhiro

    2017-11-08

    The quantitation accuracy in proton magnetic resonance spectroscopy (1H MRS) improves at higher B0 field; however, a larger chemical shift displacement (CSD) and stronger B1 inhomogeneity exist. In this work, we evaluate the quantitation accuracy for spectra of metabolite mixtures in phantom experiments at 4.7 T. We demonstrate a position-dependent quantitation error and propose a correction method based on measured water signals. All experiments were conducted on a whole-body 4.7 T magnetic resonance (MR) system with a quadrature volume coil for transmission and reception. Three bottles filled with metabolite solutions of N-acetyl aspartate (NAA) and creatine (Cr) were arranged in a vertical row inside a cylindrical phantom filled with water. Peak areas of three singlets of NAA and Cr were measured on 1H spectra at three volumes of interest (VOIs) inside the three bottles. We also measured a series of water spectra with a shifted carrier frequency and measured a reception sensitivity map. The peak-area ratios of NAA and of Cr at 3.92 ppm to Cr at 3.01 ppm differed amongst the three VOIs, which leads to a position-dependent error. At every VOI, the slope of the relationship between peak area and frequency shift resembled that between reception sensitivity and displacement. CSD and inhomogeneity of the reception sensitivity cause amplitude modulation along the direction of chemical shift on the spectra, resulting in a quantitation error. This error may be more significant at higher B0 field, where CSD and B1 inhomogeneity are more severe, and may also occur in reception with a surface coil having inhomogeneous B1. Since this type of error is around a few percent, data should be analyzed with particular care when discussing small differences in 1H MRS studies.

  4. Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)

    NASA Astrophysics Data System (ADS)

    Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.

    2018-04-01

    Natural hazard information and records of flood events are indispensable for prevention and mitigation. One cause of flooding is rising water in the areas around a lake, so forecasting the lake's surface water level is required to anticipate floods. The purpose of this paper is to implement a computational intelligence method, namely Adaptive Neural Network Backpropagation (ANNBP), to forecast the water level of Lake Cascade Mahakam. In the experiments, the ANNBP produced accurate lake water level predictions as measured by mean square error (MSE) and mean absolute percentage error (MAPE). In other words, the computational intelligence method achieved good accuracy. Hybridization and optimization of computational intelligence methods are the focus of future work.
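
    The two error scores named above are straightforward to compute; a sketch with invented observed and predicted levels (the network itself is not reproduced) is:

```python
# MSE and MAPE, the two accuracy measures cited in the abstract.
# Observed/predicted values below are invented for illustration.
import numpy as np

def mse(obs, pred):
    """Mean square error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean((obs - pred) ** 2))

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

obs = [10.2, 10.4, 10.1, 9.9]    # hypothetical observed levels (m)
pred = [10.0, 10.5, 10.2, 9.8]   # hypothetical predicted levels (m)
print(mse(obs, pred), mape(obs, pred))
```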

  5. Climate model biases in seasonality of continental water storage revealed by satellite gravimetry

    USGS Publications Warehouse

    Swenson, Sean; Milly, P.C.D.

    2006-01-01

    Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.

  6. Error analyses of JEM/SMILES standard products on L2 operational system

    NASA Astrophysics Data System (ADS)

    Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.

    2009-12-01

    SMILES (Superconducting Submillimeter-wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned for launch in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species on the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled Superconductor-Insulator-Superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature, and the uncertainty in water vapor absorption influences the accuracy of retrieved molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th- and 1st-order coefficients of a 'baseline' function, rather than a water vapor profile, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (-/+ 5, 10% difference between the true profile and the a priori profile in the stratosphere and -/+ 10, 20% in the troposphere). The results show that the incremental errors of the molecules are smaller than 10% of the measurement errors when the height correlations of the baseline coefficients and temperature are assumed to be 10 km. In conclusion, retrieval of the baseline coefficients effectively suppresses profile error due to bias in the water vapor profile.
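
    The 'baseline' idea, i.e. that a slowly varying water-vapour-induced bias can be absorbed by 0th- and 1st-order coefficients in frequency, can be sketched on a synthetic spectrum. The line shape, band width, and bias values below are invented, and a simple wing fit stands in for the paper's full state-vector retrieval.

```python
# Fit and subtract a linear (0th + 1st order) baseline from a synthetic
# spectrum with a narrow "molecular" line plus a linear bias, standing
# in for the water-vapour-induced error described in the abstract.
import numpy as np

freq = np.linspace(-0.6, 0.6, 201)            # frequency offset in band
line = 5.0 * np.exp(-(freq / 0.05) ** 2)      # narrow line (K), invented
baseline = 1.2 + 0.8 * freq                   # linear bias (K), invented
spectrum = line + baseline

# Fit slope and offset in the line-free wings, then subtract everywhere.
wings = np.abs(freq) > 0.3
c1, c0 = np.polyfit(freq[wings], spectrum[wings], 1)
corrected = spectrum - (c0 + c1 * freq)
print(round(c0, 3), round(c1, 3))             # recovered bias coefficients
```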

  7. On sweat analysis for quantitative estimation of dehydration during physical exercise.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M

    2015-08-01

    Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
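
    The kind of statistics reported above (a Pearson correlation between a sweat marker and TBW loss, and the mean absolute error of a linear estimator) can be sketched as follows; the sweat and TBW values are synthetic stand-ins, not the study's measurements.

```python
# Correlation and leave-one-out MAE for a linear TBW-loss estimator.
# All data are synthetic; only the analysis pattern is illustrated.
import numpy as np

rng = np.random.default_rng(3)
tbw_loss = rng.uniform(0.5, 2.5, 40)                  # litres (synthetic)
chloride = 20 + 8 * tbw_loss + rng.normal(0, 6, 40)   # mmol/l (synthetic)

r = float(np.corrcoef(chloride, tbw_loss)[0, 1])      # Pearson correlation

# Leave-one-out linear estimate of TBW loss from chloride, scored by MAE.
errs = []
for i in range(40):
    mask = np.arange(40) != i
    b1, b0 = np.polyfit(chloride[mask], tbw_loss[mask], 1)
    errs.append(abs(b0 + b1 * chloride[i] - tbw_loss[i]))
mae = float(np.mean(errs))
print(round(r, 2), round(mae, 2))
```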

  8. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.

  9. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

    Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.

  10. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...

  11. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include the amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  12. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
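The GLS framing of MFA can be illustrated with a toy overdetermined flux system. The stoichiometry, measured rates, and measurement uncertainties below are invented for illustration, and a chi-square lack-of-fit test on the weighted residuals stands in for the per-flux t-tests discussed in the paper:

```python
import numpy as np
from scipy import stats

# Toy overdetermined MFA system: measured rates r relate to fluxes v via R v = r.
R = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
r_meas = np.array([1.02, 0.49, 1.55, 0.48])
sigma = np.full(4, 0.05)  # assumed measurement standard deviations

# GLS with a diagonal covariance reduces to weighted least squares
W = np.diag(1.0 / sigma**2)
v_hat = np.linalg.solve(R.T @ W @ R, R.T @ W @ r_meas)

# Lack-of-fit test: if the model fits, the weighted sum of squared
# residuals follows a chi-square distribution with (m - n) dof
resid = r_meas - R @ v_hat
ssr = float(resid @ W @ resid)
dof = R.shape[0] - R.shape[1]
p_value = 1.0 - stats.chi2.cdf(ssr, dof)
```

A small p-value would flag disagreement between model and data beyond what the stated measurement error can explain, which is the kind of validation the abstract argues goes beyond "gross measurement error" detection.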

  13. Validation, Edits, and Application Processing Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    The impact of quality assurance procedures on the correct award of Basic Educational Opportunity Grants (BEOGs) for 1979-1980 was assessed, and a model for detecting error-prone applications early in processing was developed. The Bureau of Student Financial Aid introduced new comments into the edit system in 1979 and expanded the pre-established…

  14. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
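The idea of modelling local truncation errors as time-correlated random variables can be sketched with an AR(1) process; this is an illustrative stand-in for the paper's estimator, with all parameter values assumed:

```python
import numpy as np

def ar1_truncation_error(n_steps, sigma, rho, seed=0):
    """Model per-step truncation errors as an AR(1) sequence with
    stationary std `sigma` and lag-1 correlation `rho`, then
    accumulate them over the integration."""
    rng = np.random.default_rng(seed)
    e = np.empty(n_steps)
    e[0] = sigma * rng.standard_normal()
    for t in range(1, n_steps):
        # AR(1) recursion keeps the marginal variance at sigma**2
        e[t] = rho * e[t - 1] + sigma * np.sqrt(1 - rho**2) * rng.standard_normal()
    return np.cumsum(e)  # accumulated error along the trajectory

acc = ar1_truncation_error(5000, sigma=0.1, rho=0.8)
```

Because consecutive errors are correlated (the "memory" in the abstract), the accumulated error grows faster than it would for independent per-step errors of the same magnitude.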

  15. Wear consideration in gear design for space applications

    NASA Technical Reports Server (NTRS)

    Akin, Lee S.; Townsend, Dennis P.

    1989-01-01

A procedure is described that was developed for evaluating the wear in a set of gears in mesh under high load and low rotational speed. The method can be used for any low-speed gear application with nearly negligible oil film thickness, and is especially useful in space stepping mechanism applications where determination of pointing error due to wear is important, such as in long-life sensor antenna drives. A method is developed for computing the total wear depth at the ends of the line of action using a very simple formula involving the slide-to-roll ratio V_s/V_r. A method is also developed that uses the wear results to calculate the transmission error, also known as pointing error, of a gear mesh.
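The quantities named in this record can be sketched with a generic Archard-type wear law; this is not the paper's formula, just the standard textbook relation, with the slide-to-roll ratio computed from assumed surface velocities:

```python
def archard_wear_depth(k, contact_pressure, sliding_distance, hardness):
    """Generic Archard wear law: depth = k * p * s / H, where k is a
    dimensionless wear coefficient and H the material hardness."""
    return k * contact_pressure * sliding_distance / hardness

def slide_to_roll_ratio(v1, v2):
    """Slide-to-roll ratio V_s/V_r at a gear contact point: sliding
    velocity over the mean rolling (entrainment) velocity."""
    return abs(v1 - v2) / (0.5 * (v1 + v2))
```

At the pitch point the surface velocities are equal, so V_s/V_r is zero and sliding wear vanishes; it is largest at the ends of the line of action, which is why the abstract evaluates total wear depth there.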

  16. Using multi-source satellite data for lake level modelling in ungauged basins: A case study for Lake Turkana, East Africa

    USGS Publications Warehouse

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2011-01-01

Managing limited surface water resources is a great challenge in areas where ground-based data are limited or unavailable. Direct or indirect measurement of surface water resources through remote sensing offers several advantages for monitoring ungauged basins. A physically based hydrologic technique for monitoring lake water levels in ungauged basins using multi-source satellite data, such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, a digital elevation model, and other data, is presented. This approach is applied to model Lake Turkana water levels from 1998 to 2009. Modelling results showed that the model can reasonably capture the patterns and seasonal variations of the lake water level fluctuations. A composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data is used for model calibration (1998-2000) and model validation (2001-2009). Validation results showed that model-based lake levels are in good agreement with observed satellite altimetry data: the Pearson's correlation coefficient was 0.81 during the validation period. The model efficiency, estimated using the Nash-Sutcliffe coefficient of efficiency (NSCE), is 0.93, 0.55, and 0.66 for the calibration, validation, and combined periods, respectively. Further, the model-based estimates showed a root mean square error of 0.62 m and a mean absolute error of 0.46 m, with a positive mean bias error of 0.36 m, for the validation period (2001-2009). These error estimates are less than 15% of the natural variability of the lake, giving high confidence in the modelled lake level estimates.
The approach presented in this paper can be used to (a) simulate patterns of lake water level variations in data-scarce regions, (b) operationally monitor lake water levels in ungauged basins, (c) derive historical lake level information using satellite rainfall and evapotranspiration data, and (d) augment the information provided by satellite altimetry systems on changes in lake water levels. © Author(s) 2011.
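The goodness-of-fit statistics quoted in this record (NSCE, root mean square error, mean absolute error, mean bias error) have standard definitions and can be computed as follows; the sample series is made up for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency (NSCE): 1 is a perfect
    fit; 0 means the model is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def error_stats(obs, sim):
    """RMSE, MAE, and mean bias error (positive MBE = overestimation)."""
    e = np.asarray(sim, float) - np.asarray(obs, float)
    return {
        "rmse": float(np.sqrt(np.mean(e ** 2))),
        "mae": float(np.mean(np.abs(e))),
        "mbe": float(np.mean(e)),
    }

obs = [1.0, 2.0, 3.0, 4.0]        # e.g. observed altimetry lake levels (m)
sim = [1.1, 1.9, 3.2, 4.0]        # e.g. modelled lake levels (m)
score = nse(obs, sim)
stats = error_stats(obs, sim)
```

Comparing RMSE and MAE to the natural variability of the series, as the abstract does with its "less than 15%" check, is a simple way to judge whether the error magnitudes are practically acceptable.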

  17. The Refurbishment and Upgrade of the Atmospheric Radiation Measurement Raman Lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.D.; Goldsmith, J.E.M.

The Atmospheric Radiation Measurement Program (ARM) Climate Research Facility (ACRF) Raman lidar (CARL) is an autonomous, turn-key system that profiles water vapor, aerosols, and clouds throughout the diurnal cycle for days without attention (Goldsmith et al. 1998). CARL was first deployed to the Southern Great Plains CRF during the summer of 1996 and participated in the 1996 and 1997 water vapor intensive operational periods (IOPs). Since February 1998, the system has collected over 38,000 hrs of data (equivalent to almost 4.4 years), with an average monthly uptime of 62% during this time period. This unprecedented performance makes CARL the premier operational Raman lidar in the world. Unfortunately, CARL began degrading in early 2002. This loss of sensitivity, which affected all observed variables, was very gradual and thus was not identified until the autumn of 2003. Analysis of the data suggested the problem was not associated with the laser or transmit portion of the system, but rather with the detection subsystem, as both the background values and the peak signals showed marked decreases over this time period. The loss of sensitivity, a factor of 2-4 depending on the channel, resulted in higher random error in the retrieved products, such as the aerosol backscatter coefficient and water vapor mixing ratio. Figure 1 shows the random error at 2 km for aerosol backscatter coefficient (top) and water vapor mixing ratio (middle), in terms of percent of the signal, for both average daytime (red) and nighttime (blue) data from 1998 to 2005. The seasonal variation of water vapor is easily seen in the random error in the water vapor mixing ratio data. The loss of sensitivity also affected the maximum range of the usable data, as illustrated by the dramatic decrease in the maximum height seen in the water vapor mixing ratio data (bottom).
This degradation, which results in much larger random errors, greatly hinders the analysis of data sets such as the Aerosol IOP (March 2003) and the AIRS Water Vapor Experiment (December 2003). The degradation and its impact on the Aerosol IOP analysis are reported in Ferrare et al. 2005.

  18. A system to measure the data quality of spectral remote-sensing reflectance of aquatic environments

    NASA Astrophysics Data System (ADS)

    Wei, Jianwei; Lee, Zhongping; Shang, Shaoling

    2016-11-01

    Spectral remote-sensing reflectance (Rrs, sr-1) is the key for ocean color retrieval of water bio-optical properties. Since Rrs from in situ and satellite systems are subject to errors or artifacts, assessment of the quality of Rrs data is critical. From a large collection of high quality in situ hyperspectral Rrs data sets, we developed a novel quality assurance (QA) system that can be used to objectively evaluate the quality of an individual Rrs spectrum. This QA scheme consists of a unique Rrs spectral reference and a score metric. The reference system includes Rrs spectra of 23 optical water types ranging from purple blue to yellow waters, with an upper and a lower bound defined for each water type. The scoring system is to compare any target Rrs spectrum with the reference and a score between 0 and 1 will be assigned to the target spectrum, with 1 for perfect Rrs spectrum and 0 for unusable Rrs spectrum. The effectiveness of this QA system is evaluated with both synthetic and in situ Rrs spectra and it is found to be robust. Further testing is performed with the NOMAD data set as well as with satellite Rrs over coastal and oceanic waters, where questionable or likely erroneous Rrs spectra are shown to be well identifiable with this QA system. Our results suggest that applications of this QA system to in situ data sets can improve the development and validation of bio-optical algorithms and its application to ocean color satellite data can improve the short-term and long-term products by objectively excluding questionable Rrs data.
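The bounds-based scoring idea in this record can be sketched schematically; the normalization and the simple in-bounds fraction below are assumptions for illustration, not the published system, which uses 23 water-type references and spectral-shape matching:

```python
import numpy as np

def qa_score(rrs, lower, upper):
    """Score an Rrs spectrum against per-band reference bounds as the
    fraction of wavelengths whose (shape-normalized) value falls
    inside [lower, upper]. Returns a value in [0, 1]."""
    rrs, lower, upper = map(np.asarray, (rrs, lower, upper))
    # Normalize so the comparison is about spectral shape, not magnitude
    norm = rrs / np.sqrt(np.sum(rrs ** 2))
    inside = (norm >= lower) & (norm <= upper)
    return float(inside.mean())

# Hypothetical 3-band spectrum and reference bounds
rrs = [0.002, 0.004, 0.003]
wide = qa_score(rrs, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])   # everything in bounds
tight = qa_score(rrs, [0.0, 0.0, 0.0], [0.1, 0.1, 0.1])  # everything out of bounds
```

A score near 1 marks a spectrum consistent with a known water type, while a low score flags a questionable or erroneous spectrum, which is how the abstract describes screening the NOMAD and satellite data.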

  19. Assessment and application of AirMSPI high-resolution multiangle imaging photo-polarimetric observations for atmospheric correction

    NASA Astrophysics Data System (ADS)

    Kalashnikova, O. V.; Xu, F.; Garay, M. J.; Seidel, F. C.; Diner, D. J.

    2016-02-01

Water-leaving radiance comprises less than 10% of the signal measured from space, making correction for absorption and scattering by the intervening atmosphere imperative. Modern improvements have been developed in ocean color retrieval algorithms to handle absorbing aerosols such as urban particulates in coastal areas and transported desert dust over the open ocean. In addition, imperfect knowledge of the absorbing aerosol optical properties or their height distribution results in well-documented sources of error in the retrieved water-leaving radiance. Multi-angle spectro-polarimetric measurements have been advocated as an additional tool to better understand and retrieve the aerosol properties needed for atmospheric correction of ocean color retrievals. The Airborne Multiangle SpectroPolarimetric Imager-1 (AirMSPI-1) has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. AirMSPI typically acquires observations of a target area at 9 view angles between ±67° at 10 m resolution. AirMSPI spectral channels are centered at 355, 380, 445, 470, 555, 660, and 865 nm, with the 470, 660, and 865 nm channels reporting linear polarization. We have developed a retrieval code that employs a coupled Markov Chain (MC) and adding/doubling radiative transfer method for joint retrieval of aerosol properties and water-leaving radiance from AirMSPI polarimetric observations. We tested prototype retrievals by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentrations to values reported by the USC SeaPRISM AERONET-OC site off the coast of California. The retrieval was then applied to a variety of coastal regions in California to evaluate variability in the water-leaving radiance under different atmospheric conditions. We will present these results and discuss algorithm sensitivity and potential applications for future space-borne coastal monitoring.

  20. Eccentricity error identification and compensation for high-accuracy 3D optical measurement

    PubMed Central

    He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z

    2016-01-01

The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry, and structured light projection measurement systems. The identification and compensation of the circular target's systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require knowledge of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications; in particular, it is also applicable in the case where only one image with a single target is available. Experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265
