Science.gov

Sample records for dems method evaluation

  1. Evaluation of DEM generation based on Interferometric SAR using TanDEM-X data in Tokyo

    NASA Astrophysics Data System (ADS)

    Avtar, Ram; Yunus, Ali P.; Kraines, Steven; Yamamuro, Masumi

    This study is focused on the evaluation of a Digital Elevation Model (DEM) for Tokyo, Japan from data collected by the recently launched TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) satellite of the German Aerospace Center (DLR). The aim of the TanDEM-X mission is to use Interferometric SAR techniques to generate a consistent high-resolution global DEM dataset. In order to generate an accurate global DEM using TanDEM-X data, it is important to evaluate the accuracy at different sites around the world. Here, we report our efforts to generate a high-resolution DEM of the Tokyo metropolitan region using TanDEM-X data. We also compare the TanDEM-X DEM with other existing DEMs for the Tokyo region. Statistical techniques were used to calculate the elevation differences between the TanDEM-X DEM and the reference data. Two high-resolution LiDAR DEMs are used as independent reference data. The vertical accuracy of the TanDEM-X DEM evaluated using the Root Mean Square Error (RMSE) is considerably higher than that of the existing global digital elevation models. However, the local area DEM generated by the Geospatial Information Authority of Japan (GSI DEM) showed the highest accuracy among all non-LiDAR DEMs. The vertical accuracy in terms of RMSE estimated using the 2 m LiDAR as reference is 3.20 m for TanDEM-X, 2.44 m for the GSI DEM, 7.00 m for the SRTM DEM and 10.24 m for the ASTER GDEM. We also compared the accuracy of TanDEM-X with the other DEMs for different types of land cover classes. The results show that the absolute elevation error of TanDEM-X is higher for urban and vegetated areas, similar to what is observed for other global DEMs. This is probably because the radar signals used by TanDEM-X tend to measure the first reflective surface encountered, which is often the top of buildings or the canopy. Hence, the TanDEM-X based DEM is more akin to a Digital Surface Model (DSM).
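
    The vertical-accuracy comparison above boils down to differencing a candidate DEM against a reference surface on a common grid and summarizing the residuals. The sketch below is a minimal, hypothetical illustration of that RMSE computation in Python; the array names, synthetic values and nodata convention are assumptions, not part of the study.

    ```python
    import numpy as np

    def vertical_rmse(dem, reference, nodata=-9999.0):
        """RMSE of elevation differences between a test DEM and a reference DEM
        that has already been resampled to the same grid."""
        dem = dem.astype(float)
        reference = reference.astype(float)
        valid = (dem != nodata) & (reference != nodata)
        diff = dem[valid] - reference[valid]
        return float(np.sqrt(np.mean(diff ** 2)))

    # Example with synthetic 3x3 grids (values in metres).
    tandem_x = np.array([[10.2, 11.0, 12.5], [9.8, 10.5, 11.9], [9.1, 9.9, 11.0]])
    lidar    = np.array([[10.0, 11.2, 12.0], [9.5, 10.4, 12.1], [9.3, 10.0, 10.8]])
    print("RMSE = %.2f m" % vertical_rmse(tandem_x, lidar))
    ```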

  2. Performance Evaluation of Four DEM-Based Fluvial Terrace Mapping Methods Across Variable Geomorphic Settings: Application to the Sheepscot River Watershed, Maine

    NASA Astrophysics Data System (ADS)

    Hopkins, A. J.; Snyder, N. P.

    2014-12-01

    Fluvial terraces are utilized in geomorphic studies as recorders of land-use, climate, and tectonic history. Advances in digital topographic data, such as high-resolution digital elevation models (DEMs) derived from airborne LiDAR surveys, have promoted the development of several methods used to extract terraces from DEMs based on their characteristic morphology. The post-glacial landscape of the Sheepscot River watershed, Maine, where strath and fill terraces are present and record Pleistocene deglaciation, Holocene eustatic forcing, and Anthropocene land-use change, was selected to implement a comparison between terrace mapping methodologies. At four study sites within the watershed, terraces were manually mapped to facilitate the comparison between fully and semi-automated DEM-based mapping procedures, including: (1) edge detection functions in Matlab, (2) feature classification algorithms developed by Wood (1996), (3) spatial relationships between interpreted terraces and surrounding topography (Walter et al., 2007), and (4) the TerEx terrace mapping toolbox developed by Stout and Belmont (2014). Each method was evaluated based on its accuracy and ease of implementation. The four study sites have varying longitudinal slope (0.1% - 5%), channel width (<5 m - 30 m), relief in surrounding landscape (15 m - 75 m), type and density of surrounding land use, and mapped surficial geologic units. In general, all methods overestimate terrace areas (average predicted area 136% of the manually defined area). Surrounding topographic relief appears to exert the greatest control on mapping accuracy, with the most accurate results (92% of terrace area mapped by the Walter et al., 2007 method) achieved where the river valley was most confined by adjacent hillslopes. Accuracy decreased for study sites surrounded by a low-relief landscape, with the most accurate results achieved by the TerEx toolbox (Stout and Belmont, 2014; predicted areas were 45% and 89% of manual delineations

  3. Evaluating Error of LiDAR Derived DEM Interpolation for Vegetation Area

    NASA Astrophysics Data System (ADS)

    Ismail, Z.; Khanan, M. F. Abdul; Omar, F. Z.; Rahman, M. Z. Abdul; Mohd Salleh, M. R.

    2016-09-01

    Light Detection and Ranging (LiDAR) data are a source for deriving digital terrain models, and the resulting Digital Elevation Model (DEM) is usable within a Geographical Information System (GIS). The aim of this study is to evaluate the accuracy of LiDAR derived DEMs generated with different interpolation methods and slope classes. Initially, the study area is divided into three slope classes: (a) slope class one (0° - 5°), (b) slope class two (6° - 10°) and (c) slope class three (11° - 15°). Secondly, each slope class is tested using three distinctive interpolation methods: (a) Kriging, (b) Inverse Distance Weighting (IDW) and (c) Spline. Next, accuracy assessment is done based on field survey tachymetry data. The findings reveal that the overall Root Mean Square Error (RMSE) for Kriging provided the lowest value of 0.727 m for both 0.5 m and 1 m spatial resolutions of the oil palm area, followed by Spline with values of 0.734 m for 0.5 m spatial resolution and 0.747 m for 1 m spatial resolution. Concurrently, IDW provided the highest RMSE value of 0.784 m for both spatial resolutions of 0.5 m and 1 m. For the rubber area, Spline provided the lowest RMSE value of 0.746 m for 0.5 m spatial resolution and 0.760 m for 1 m spatial resolution. The highest RMSE value for the rubber area is that of IDW, with a value of 1.061 m for both spatial resolutions. Finally, Kriging gave an RMSE value of 0.790 m for both spatial resolutions.
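
    As a concrete illustration of the interpolation-versus-check-point assessment described above, the hedged sketch below estimates elevations at independent survey points with Inverse Distance Weighting and scores the result with an RMSE. All arrays and the power parameter are hypothetical; the study's actual Kriging, Spline and IDW implementations are not reproduced here.

    ```python
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0):
        """Inverse Distance Weighting: estimate elevations at query points
        from scattered (x, y, z) observations."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero at data points
        w = 1.0 / d ** power
        return (w * z_known).sum(axis=1) / w.sum(axis=1)

    # Hypothetical LiDAR ground points and tachymetry check points (metres).
    pts  = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    z    = np.array([5.0, 5.4, 5.1, 5.8])
    chk  = np.array([[5.0, 5.0], [2.0, 8.0]])
    zchk = np.array([5.3, 5.1])

    pred = idw(pts, z, chk)
    rmse = np.sqrt(np.mean((pred - zchk) ** 2))
    print("check-point RMSE = %.3f m" % rmse)
    ```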

  4. An improved method to represent DEM uncertainty in glacial lake outburst flood propagation using stochastic simulations

    NASA Astrophysics Data System (ADS)

    Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan

    2015-10-01

    Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source to impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results obtained with the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those obtained with the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those obtained with a finer resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
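
    The core idea of propagating DEM uncertainty stochastically can be sketched with a simple Monte Carlo perturbation of the elevation grid. The example below is not the authors' MC-LCP algorithm (which routes a least-cost path per realization); it is a minimal stand-in that perturbs a DEM with noise scaled to its reported vertical error and records, per cell, how often the cell falls below an assumed flood water surface.

    ```python
    import numpy as np

    def inundation_probability(dem, water_surface, dem_rmse, n_runs=500, seed=0):
        """Per-cell fraction of stochastic DEM realizations lying below an assumed
        water-surface elevation: a simple 'uncertainty continuum'."""
        rng = np.random.default_rng(seed)
        hits = np.zeros(dem.shape, dtype=float)
        for _ in range(n_runs):
            realization = dem + rng.normal(0.0, dem_rmse, size=dem.shape)
            hits += (realization < water_surface)
        return hits / n_runs

    # Hypothetical 4x4 DEM (m), flood stage 101.0 m, vertical RMSE 2.5 m.
    dem = np.array([[104.0, 103.0, 102.0, 101.5],
                    [103.0, 101.0, 100.5, 101.0],
                    [102.5, 100.0,  99.5, 100.5],
                    [102.0, 100.5, 100.0, 101.5]])
    print(inundation_probability(dem, 101.0, 2.5))
    ```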

  5. Feasibility Analysis of DEM Differential Method on Tree Height Assessment with Terra-SAR/TanDEM-X Data

    NASA Astrophysics Data System (ADS)

    Zhang, Wangfei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Zhao, Lei

    2016-08-01

    The DEM differential method is an effective and efficient way to assess forest tree height with polarimetric and interferometric technology; however, its assessment accuracy depends on the accuracy of the interferometric results and the DEM. TerraSAR/TanDEM-X, which established the first spaceborne bistatic interferometer, can provide highly accurate cross-track interferometric images globally, without inherent accuracy limitations such as temporal decorrelation and atmospheric disturbance. These characteristics of TerraSAR/TanDEM-X give it great potential for global or regional tree height assessment, which has been constrained by temporal decorrelation in traditional repeat-pass interferometry. Currently, in China, it is costly to collect highly accurate DEMs with LiDAR. At the same time, it is also difficult to obtain truly representative ground survey samples to test and verify the assessment results. In this paper, we analyzed the feasibility of using TerraSAR/TanDEM-X data to assess forest tree height with currently free DEM data such as ASTER-GDEM and archived ground in-situ data such as forest management inventory (FMI) data. First, the accuracy of ASTER-GDEM and of the forest management inventory data was assessed against the DEM and canopy height model (CHM) extracted from LiDAR data. The results show that the average elevation RMSE between ASTER-GDEM and the LiDAR DEM is about 13 meters, but the two are highly correlated, with a correlation coefficient of 0.96. With a linear regression model, we can compensate ASTER-GDEM and improve its accuracy to nearly that of the LiDAR DEM at the same scale. The correlation coefficient between FMI and CHM is 0.40; its accuracy can also be improved by a linear regression model within 95% confidence intervals. After compensation of ASTER-GDEM and FMI, we calculated the tree height in the Mengla test site with the DEM differential method. The results showed that the corrected ASTER-GDEM can effectively improve the assessment accuracy
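
    Two steps described above lend themselves to a short numerical sketch: compensating a coarse global DEM against a LiDAR terrain model with a linear regression, and then differencing an InSAR surface model against the compensated terrain to obtain canopy height. The snippet below works on synthetic, co-registered arrays and is an illustration of the idea, not the paper's processing chain.

    ```python
    import numpy as np

    def compensate_dem(coarse_dem, lidar_dem):
        """Fit coarse ~ a*lidar + b on the overlap, then invert the fit so the
        coarse DEM is mapped toward the LiDAR terrain surface."""
        valid = np.isfinite(coarse_dem) & np.isfinite(lidar_dem)
        a, b = np.polyfit(lidar_dem[valid], coarse_dem[valid], 1)
        return (coarse_dem - b) / a

    def tree_height(insar_dsm, ground_dem):
        """DEM differential method: canopy height = InSAR surface minus terrain."""
        return np.clip(insar_dsm - ground_dem, 0.0, None)

    # Hypothetical co-registered grids (metres).
    lidar = np.array([[500.0, 510.0], [520.0, 530.0]])
    aster = lidar * 1.02 + 8.0 + np.random.default_rng(1).normal(0, 2, lidar.shape)
    dsm   = lidar + np.array([[12.0, 15.0], [0.0, 20.0]])   # canopy above terrain

    corrected = compensate_dem(aster, lidar)
    print(tree_height(dsm, corrected))
    ```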

  6. Stochastic Discrete Equation Method (sDEM) for two-phase flows

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.; Rodio, M.G.

    2015-10-15

    A new scheme for the numerical approximation of a five-equation model taking into account Uncertainty Quantification (UQ) is presented. In particular, the Discrete Equation Method (DEM) for the discretization of the five-equation model is modified to include a formulation based on the adaptive Semi-Intrusive (aSI) scheme, thus yielding a new intrusive scheme (sDEM) for simulating stochastic two-phase flows. Some reference test cases are performed in order to demonstrate the convergence properties and the efficiency of the overall scheme. The propagation of initial-condition uncertainties is evaluated in terms of the mean and variance of several thermodynamic properties of the two phases.

  7. An efficient method for DEM-based overland flow routing

    NASA Astrophysics Data System (ADS)

    Huang, Pin-Chun; Lee, Kwan Tun

    2013-05-01

    The digital elevation model (DEM) is frequently used to represent watershed topographic features based on a raster or a vector data format. It has been widely linked with flow routing equations for watershed runoff simulation. In this study, a recursive formulation was encoded into the conventional kinematic- and diffusion-wave routing algorithms to permit a larger time increment, even when the Courant-Friedrichs-Lewy condition is violated. To meet the requirement of the recursive formulation, a novel routing sequence was developed to determine the cell-to-cell computational procedure for the DEM database. The routing sequence can be set either according to the grid elevation in descending order for the kinematic-wave routing or according to the water stage of the grid in descending order for the diffusion-wave routing. The recursive formulation for 1D runoff routing was first applied to a conceptual overland plane to demonstrate the precision of the formulation, using an analytical solution for verification. The proposed novel routing sequence with the recursive formulation was then applied to two mountain watersheds for 2D runoff simulations. The results showed that the efficiency of the proposed method was significantly superior to that of the conventional algorithm, especially when applied to a steep watershed.
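
    The routing-sequence concept (process cells from high to low so that each cell's inflow is known before it is visited) is easy to sketch. The code below is a hypothetical illustration of ordering DEM cells by descending elevation for a kinematic-wave pass; it does not reproduce the paper's recursive formulation.

    ```python
    import numpy as np

    def routing_sequence(dem):
        """Return (row, col) indices of DEM cells sorted by descending elevation,
        i.e. the cell-to-cell processing order for a kinematic-wave sweep."""
        flat_order = np.argsort(dem, axis=None)[::-1]        # highest cell first
        return np.column_stack(np.unravel_index(flat_order, dem.shape))

    dem = np.array([[9.0, 7.5, 6.0],
                    [8.0, 6.5, 5.0],
                    [7.0, 5.5, 4.0]])
    # In a routing pass, the outflow computed at each visited cell is added to the
    # inflow of its downslope neighbour before that neighbour is processed.
    print(routing_sequence(dem)[:3])   # -> [[0 0], [1 0], [0 1]]
    ```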

  8. Numerical Simulation of High Velocity Impact Phenomenon by the Distinct Element Method (dem)

    NASA Astrophysics Data System (ADS)

    Tsukahara, Y.; Matsuo, A.; Tanaka, K.

    2007-12-01

    A Continuous-DEM (Distinct Element Method) for impact analysis is proposed in this paper. Continuous-DEM is based on the DEM (Distinct Element Method) and the idea of continuum theory. Numerical simulations of impacts between a SUS 304 projectile and a concrete target have been performed using the proposed method. The results agreed quantitatively with the impedance matching method. The experimental elastic-plastic behavior with compression and rarefaction waves under plate impact was also qualitatively reproduced, matching the result obtained with AUTODYN®.

  9. A New DEM Generalization Method Based on Watershed and Tree Structure

    PubMed Central

    Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang

    2016-01-01

    DEM generalization is the basis of multi-dimensional observation and of expressing and analyzing the terrain; it is also the core of building a multi-scale geographic database. Thus, many researchers have studied both the theory and the method of DEM generalization. This paper proposes a new terrain generalization method that extracts feature points based on a tree-model construction that considers the nested relationship of watershed characteristics. The paper used the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data and extracted the feature points in every single watershed to reconstruct the DEM. The paper achieved generalization from a 1:10000 DEM to a 1:50000 DEM by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some other classic methods, such as aggregation, resampling, and VIP, based on the original 1:50000 DEM. The outcome shows that the method performed well. The method can choose the best threshold according to the target generalization scale to decide the density of the feature points in the watershed. Meanwhile, this method can preserve the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlaid contour comparison, elevation statistical parameters, and slope and aspect analysis, we found that the W8D algorithm performed well and effectively in terrain representation. PMID:27517296

  10. Evaluation of DEM generation accuracy from UAS imagery

    NASA Astrophysics Data System (ADS)

    Santise, M.; Fornari, M.; Forlani, G.; Roncella, R.

    2014-06-01

    The growing use of UAS platforms for aerial photogrammetry comes with a new family of highly automated Computer Vision processing software expressly built to manage the peculiar characteristics of these blocks of images. It is of interest to photogrammetrists and professionals, therefore, to find out whether the image orientation and DSM generation methods implemented in such software are reliable and whether the DSMs and orthophotos are accurate. On a more general basis, it is interesting to figure out whether it is still worth applying the standard rules of aerial photogrammetry to drones, achieving the same inner block strength and the same accuracies as well. With such goals in mind, a test area has been set up at the University Campus in Parma. A large number of ground points have been measured on natural as well as signalized points, to provide a comprehensive test field to check the accuracy performance of different UAS systems. In the test area, points both at ground level and features on building roofs were measured, in order to obtain well-distributed support in elevation as well. Control points were set on different types of surfaces (buildings, asphalt, targets, grass fields and bumps); break lines were also employed. The paper presents the results of a comparison between two different surveys for DEM (Digital Elevation Model) generation, performed at 70 m and 140 m flying height, using a Falcon 8 UAS.

  11. Open-Source Digital Elevation Model (DEMs) Evaluation with GPS and LiDAR Data

    NASA Astrophysics Data System (ADS)

    Khalid, N. F.; Din, A. H. M.; Omar, K. M.; Khanan, M. F. A.; Omar, A. H.; Hamid, A. I. A.; Pa'suya, M. F.

    2016-09-01

    Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), Shuttle Radar Topography Mission (SRTM), and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are freely available Digital Elevation Model (DEM) datasets for environmental modeling and studies. The spatial resolution and vertical accuracy of the DEM data source have a great influence on the results, specifically for inundation mapping. Most coastal inundation risk studies have used publicly available DEMs to estimate coastal inundation and the associated damage, especially to human populations, based on increments of sea level. In this study, ground truth data from Global Positioning System (GPS) observations are compared against each DEM to evaluate its accuracy. The vertical accuracy of SRTM is better than that of ASTER and GMTED2010, with an RMSE of 6.054 m. In addition to the accuracy, the correlation of the DEM is identified, with a high coefficient of determination of 0.912 for SRTM. For the coastal zone area, a DEM based on an airborne light detection and ranging (LiDAR) dataset was used as ground truth data for terrain height. In this case, the LiDAR DEM is compared against the new SRTM DEM after applying a scale factor. The findings show that the accuracy of the new SRTM-based DEM can be improved by applying the scale factor, with the RMSE reaching 0.503 m. Hence, this new model is the most suitable and meets the accuracy requirement for coastal inundation risk assessment using open source data. The suitability of these datasets for further analysis in coastal management studies is vital for assessing areas potentially vulnerable to coastal inundation.

  12. A coupled DEM-CFD method for impulse wave modelling

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Utili, Stefano; Crosta, GiovanBattista

    2015-04-01

    Rockslides can be characterized by a rapid evolution, up to a possible transition into a rock avalanche, which can be associated with an almost instantaneous collapse and spreading. Different examples are available in the literature, but the Vajont rockslide is quite unique for its morphological and geological characteristics, as well as for the type of evolution and the availability of long term monitoring data. This study advocates the use of a DEM-CFD framework for the modelling of the generation of hydrodynamic waves due to the impact of a rapidly moving rockslide or rock-debris avalanche. 3D DEM analyses in plane strain by a coupled DEM-CFD code were performed to simulate the rockslide from its onset to the impact with still water and the subsequent wave generation (Zhao et al., 2014). The physical response predicted is in broad agreement with the available observations. The numerical results are compared to those published in the literature and especially to Crosta et al. (2014). According to our results, the maximum computed run-up amounts to ca. 120 m and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 m and 190 m respectively). In these simulations, the slope mass is considered permeable, such that the toe region of the slope can move submerged in the reservoir and the impulse water wave can also flow back into the slope mass. However, the upscaling of the grain size in the DEM model leads to an unrealistically high hydraulic conductivity of the model, such that only a small amount of water is splashed onto the northern bank of the Vajont valley. The use of a high fluid viscosity and a coarse grain model has shown the possibility of modelling both the slope and wave motions more realistically. However, more detailed slope and fluid properties, and the need for computational efficiency, should be considered in future research work. This aspect has also been

  13. DEM-based Watershed Delineation - Comparison of Different Methods and applications

    NASA Astrophysics Data System (ADS)

    Chu, X.; Zhang, J.; Tahmasebi Nasab, M.

    2015-12-01

    Digital elevation models (DEMs) are commonly used for large-scale watershed hydrologic and water quality modeling. With the aid of the latest LiDAR technology, submeter scale DEM data are often available for many areas in the United States. Precise characterization of the detailed variations in surface microtopography using such high-resolution DEMs is crucial to the related watershed modeling. Various methods have been developed to delineate a watershed, including determination of flow directions and accumulations, identification of subbasin boundaries, and calculation of the relevant topographic parameters. The objective of this study is to examine different DEM-based watershed delineation methods by comparing their unique features and the discrepancies in their results. This study covers not only the traditional watershed delineation methods but also a new puddle-based unit (PBU) delineation method. The specific topics and issues to be presented involve flow directions (D8 single flow direction vs. multi-direction methods), segmentation of stream channels, drainage systems (single "depressionless" drainage network vs. hierarchical depression-dominated drainage system), and hydrologic connectivity (static structural connectivity vs. dynamic functional connectivity). A variety of real topographic surfaces are selected and delineated by using the selected methods. Comparisons of their delineation results emphasize the importance of the selection of methods and highlight their applicability and potential impacts on watershed modeling.

  14. Flow Dynamics of green sand in the DISAMATIC moulding process using Discrete element method (DEM)

    NASA Astrophysics Data System (ADS)

    Hovad, E.; Larsen, P.; Walther, J. H.; Thorborg, J.; Hattel, J. H.

    2015-06-01

    Production of sand moulds in the DISAMATIC casting process is simulated with the discrete element method (DEM). The main purpose is to simulate the dynamics of the flow of green sand during the production of the sand mould with DEM. The sand shot, which is the first stage of the DISAMATIC casting process, is simulated. Depending on the actual casting geometry the mould can be geometrically quite complex, involving e.g. shadowing effects, and this is directly reflected in the sand flow during the moulding process. In the present work a mould chamber with “ribs” at the walls is chosen as a baseline geometry to emulate some of these important conditions found in the real moulding process. The sand flow is simulated with the DEM and compared with corresponding video footage from the interior of the chamber during the moulding process. The effects of the rolling resistance and the static friction coefficient are analysed and discussed in relation to the experimental findings.

  15. Discrete Element Method (DEM) Application to The Cone Penetration Test Using COUPi Model

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A. V.; Johnson, J.; Wilkinson, A.; DeGennaro, A. J.; Duvoy, P.

    2011-12-01

    The cone penetration test (CPT) is a soil strength measurement method to determine the tip resistance and sleeve friction versus depth while pushing a cone into regolith with controlled slow quasi-static speed. This test can also be used as an excellent tool to validate the discrete element method (DEM) model by comparing tip resistance and sleeve friction from experiments to model results. DEM by nature requires significant computational resources even for a limited number of particles. Thus, it is important to find particle and ensemble parameters that produce valuable results within reasonable computation times. The Controllable Objects Unbounded Particles Interaction (COUPi) model is a general physical DEM code being developed to model machine/regolith interactions as part of a NASA Lunar Science Institute sponsored project on excavation and mobility modeling. In this work, we consider how different particle shape and size distributions defined in the DEM influence the cone tip and friction sleeve resistance in a CPT DEM simulation. The results are compared to experiments with cone penetration in JSC-1A lunar regolith simulant. The particle shapes include spherical particles, particles composed from the union of three spheres, and some simple polyhedra. This focus is driven by the soil mechanics rule of thumb that particle size and shape distributions are the two most significant factors affecting soil strength. In addition to the particle properties, the packing configuration of an ensemble strongly affects soil strength. Bulk density of the regolith is an important characteristic that significantly influences the tip resistance and sleeve friction (Figure 1). We discuss different approaches used to control granular density in the DEM, including how to obtain higher bulk densities, using numerical "shaking" techniques and varying the friction coefficient during computations.

  16. A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen

    2010-01-01

    As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling of the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with total number of particles, computer memory, and total simulation computer processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedra shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data of JSC-1a obtained from a fine particle analyzer at the NASA Kennedy Space Center are also fitted to a similar empirical PSD function.
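
    Fitting an analytical particle size distribution to measured particle-analyzer data usually reduces to a small nonlinear least-squares problem. The report's own empirical PSD formula is not reproduced here; as a hedged stand-in, the sketch below fits a Rosin-Rammler (Weibull-type) cumulative distribution to hypothetical size data with SciPy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rosin_rammler(d, d63, n):
        """Cumulative mass fraction passing size d (Rosin-Rammler form)."""
        return 1.0 - np.exp(-(d / d63) ** n)

    # Hypothetical sieve data: particle size (mm), cumulative fraction passing.
    sizes   = np.array([0.045, 0.075, 0.15, 0.30, 0.60, 1.18])
    passing = np.array([0.08, 0.18, 0.42, 0.70, 0.90, 0.98])

    (d63, n), _ = curve_fit(rosin_rammler, sizes, passing, p0=[0.3, 1.0])
    print("d63 = %.3f mm, n = %.2f" % (d63, n))
    ```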

  17. Dem Extraction from WORLDVIEW-3 Stereo-Images and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Hu, F.; Gao, X. M.; Li, G. Y.; Li, M.

    2016-06-01

    This paper validates the potential of Worldview-3 satellite images for large scale topographic mapping, by choosing Worldview-3 along-track stereo-images of the Yi Mountain area in Shandong province, China, for DEM extraction and accuracy evaluation. Firstly, eighteen accurate and evenly-distributed GPS points are collected in the field and used as GCPs/check points, the image points of which are accurately measured, and tie points are also extracted from image matching; then, the RFM-based block adjustment to compensate the systematic error in image orientation is carried out and the geo-positioning accuracy is calculated and analysed; next, for the two stereo-pairs of the block, DSMs are separately constructed and mosaicked as an entirety, and the corresponding DEM is subsequently generated; finally, compared with check points selected from a high-precision airborne LiDAR point cloud covering the same test area, the accuracy of the generated DEM with 2-meter grid spacing is evaluated by the maximum (max.), minimum (min.), mean and standard deviation (std.) values of the elevation biases. It is demonstrated that, for the Worldview-3 stereo-images used in our research, the planimetric accuracy without GCPs is about 2.16 m (mean error) and 0.55 m (std. error), which is superior to the nominal value, while the vertical accuracy is about -1.61 m (mean error) and 0.49 m (std. error); with a small number of GCPs located in the center and four corners of the test area, the systematic error can be well compensated. The std. value of the elevation biases between the generated DEM and the 7256 LiDAR check points is about 0.62 m. Considering the potential uncertainties in the image point measurement, stereo matching and elevation editing, the accuracy of DEMs generated from Worldview-3 stereo-images should be even more desirable. Judging from the results, Worldview-3 has the potential for 1:5000 or even larger scale mapping applications.

  18. Use of thermal infrared pictures for retrieving intertidal DEM by the waterline method: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Delacourt, C.; Allemand, P.

    2010-12-01

    Digital Elevation Models (DEMs) of intertidal zones are of growing interest for ecological and land development purposes. They are also a fundamental tool for monitoring current sedimentary movements in those low energy environments. Such DEMs have to be constructed with a centimetric resolution, as the topographic changes are not predictable and sediment displacements are weak. Direct construction of DEMs by GPS in these muddy environments is difficult: photogrammetric techniques are not efficient on uniformly coloured surfaces, and terrestrial laser scanners are difficult to stabilize on the mud, due to humidity. In this study, we propose to improve and apply the waterline method to retrieve DEMs in intertidal zones. This technique is based on accurately monitoring the boundary between sand and water during a whole tide rise with thermal infrared images. The DEM is made by stacking all these lines, calibrated by an immersed pressure sensor. Using thermal infrared pictures, instead of optical ones, improves the detection of the waterline, since mud and water have very different responses to sun heating and a large emissivity contrast. However, retrieving temperature from thermal infrared data is not trivial, since the luminance of an object is the sum of a radiative part and a reflective part, whose relative proportions are given by the emissivity. In the following equation, B is the equivalent blackbody luminance and L_inc is the incident luminance: L_tot = L_rad + L_refl = ɛB + (1 - ɛ)L_inc. The infrared waterline technique has been used for the monitoring of a beach located on the Aber Benoit, 8.5 km away from the open sea. The site is mainly constituted of mud, and waves are very small (less than one centimeter high), which are ideal conditions for using the waterline method. A few measurements have been made to produce differential height maps of sediments. We reached a mean resolution of 2 cm and a vertical accuracy better than one centimeter. The results
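
    The waterline stacking step can be expressed very compactly: each pixel of the intertidal DEM takes the water level of the first frame in which it appears submerged during the tide rise. The sketch below assumes a time-ordered list of boolean 'wet' masks (e.g. from thresholding the thermal images) and the matching pressure-sensor water levels; both inputs are hypothetical.

    ```python
    import numpy as np

    def waterline_dem(wet_masks, water_levels):
        """Build an intertidal DEM by stacking waterlines: a pixel is assigned the
        water level of the first frame in which it becomes submerged."""
        dem = np.full(wet_masks[0].shape, np.nan, dtype=float)
        previously_wet = np.zeros(wet_masks[0].shape, dtype=bool)
        for mask, level in zip(wet_masks, water_levels):   # frames ordered in time
            newly_wet = mask & ~previously_wet
            dem[newly_wet] = level
            previously_wet |= mask
        return dem

    # Tiny synthetic tide rise over a 1x4 transect, water levels in metres.
    masks  = [np.array([[True, False, False, False]]),
              np.array([[True, True,  False, False]]),
              np.array([[True, True,  True,  False]])]
    levels = [0.10, 0.25, 0.40]
    print(waterline_dem(masks, levels))
    ```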

  19. Combined DEM Extraction Method from StereoSAR and InSAR

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.

    2015-06-01

    A pair of SAR images acquired from different positions can be used to generate a digital elevation model (DEM). Two techniques exploiting this characteristic have been introduced: stereo SAR and interferometric SAR. They permit recovery of the third dimension (topography) and, at the same time, identification of the absolute position (geolocation) of pixels included in the imaged area, thus allowing the generation of DEMs. In this paper, a combined StereoSAR and InSAR adjustment model is constructed, unifying DEM extraction from InSAR and StereoSAR into the same coordinate system and improving the three-dimensional positioning accuracy of the target. We assume that there are four images 1, 2, 3 and 4. One pair of SAR images, 1 and 2, meets the required conditions for InSAR technology, while the other pair, 3 and 4, forms a stereo image pair. The phase model is based on the InSAR rigorous imaging geometric model. The master image 1 and the slave image 2 are used in InSAR processing, but the slave image is only used when establishing the model: its pixels are related to the corresponding pixels of the master image through image coregistration coefficients, from which the corresponding phase is calculated. The slave image itself is therefore not required in the construction of the phase model. In the Range-Doppler (RD) model, the range equation and the Doppler equation are functions of the target geolocation, while in the phase equation the phase is also a function of the target geolocation. We exploit the combined adjustment model to solve for the deviation of the target geolocation, so the problem reduces to solving for three unknowns from seven equations. The model was tested for DEM extraction with spaceborne InSAR and StereoSAR data and compared with the InSAR and StereoSAR methods individually. The results showed that the model delivered a better performance on the experimental imagery and can be used for DEM extraction applications.
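
    Solving for three coordinate unknowns from seven observation equations is an overdetermined problem that, once linearized, is handled by least squares. The snippet below is a generic, hypothetical illustration of that adjustment step with NumPy; the design matrix and observation vector merely stand in for the linearized range, Doppler, phase and stereo equations and are not the paper's actual values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical linearized system: 7 observation equations in the 3 unknown
    # corrections (dX, dY, dZ) to the target geolocation.
    A = rng.normal(size=(7, 3))                       # design matrix
    true_dx = np.array([1.5, -0.8, 2.0])
    l = A @ true_dx + rng.normal(scale=0.01, size=7)  # observations with noise

    dx, residuals, rank, _ = np.linalg.lstsq(A, l, rcond=None)
    print("estimated corrections:", np.round(dx, 3))
    ```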

  20. Research on the method of extracting DEM based on GBInSAR

    NASA Astrophysics Data System (ADS)

    Yue, Jianping; Yue, Shun; Qiu, Zhiwei; Wang, Xueqin; Guo, Leping

    2016-05-01

    Precise topographical information plays a very important role in geology, hydrology, natural resources surveys and deformation monitoring. DEM extraction based on synthetic aperture radar interferometry (InSAR) obtains the three-dimensional elevation of the target area from the phase information of the radar image data. The technology offers large coverage, high precision and all-weather capability. By shifting the ground-based radar system up and down along its track, a spatial baseline can be formed, and the DEM of the target area can then be obtained by acquiring image data from different angles. Three-dimensional laser scanning can quickly, efficiently and accurately obtain a DEM of the target area, which can be used to verify the accuracy of the DEM extracted by GBInSAR. However, research on extracting DEMs of a target area with GBInSAR is still limited. Given the lack of theory and the relatively low accuracy of current GBInSAR DEM extraction, this article analyses its principles in depth. The DEM of the target area was extracted from GBInSAR data and compared with the DEM obtained from three-dimensional laser scan data, followed by statistical analysis and a normal distribution test. The results showed that the DEM obtained by GBInSAR was broadly consistent with the DEM obtained by three-dimensional laser scanning, with high accuracy, and that the differences between the two DEMs approximately follow a normal distribution. This indicates that extracting the DEM of a target area based on GBInSAR is feasible, and it provides a foundation for the promotion and application of GBInSAR.
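
    The statistical check described above (difference the two DEMs, summarize, and test for normality) can be sketched in a few lines. The example below uses SciPy's D'Agostino-Pearson test on synthetic difference values; the array names and the 5% significance level are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy import stats

    def compare_dems(dem_a, dem_b, alpha=0.05):
        """Summarize elevation differences between two co-registered DEMs and
        test whether the differences are approximately normally distributed."""
        diff = (dem_a - dem_b).ravel()
        diff = diff[np.isfinite(diff)]
        stat, p = stats.normaltest(diff)       # D'Agostino-Pearson K^2 test
        return {"mean": diff.mean(), "std": diff.std(),
                "normal_at_alpha": bool(p > alpha)}

    # Synthetic stand-ins for the GBInSAR and laser-scan DEMs (metres).
    rng = np.random.default_rng(0)
    laser = rng.uniform(100.0, 150.0, size=(50, 50))
    gbinsar = laser + rng.normal(0.0, 0.3, size=laser.shape)
    print(compare_dems(gbinsar, laser))
    ```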

  1. A Method for Improving SRTM DEMs in High-Relief Terrain

    NASA Astrophysics Data System (ADS)

    Falorni, G.; Istanbulluoglu, E.; Bras, R. L.

    2003-12-01

    The Shuttle Radar Topography Mission (SRTM) had the objective of mapping the Earth's surface between 56°S and 60°N to produce the first near-global high resolution digital elevation model (DEM). The dataset, with a horizontal resolution of 1 arc second (~30 m), has now been released for the conterminous U.S. Recent investigations aimed at assessing the vertical accuracy of the dataset have revealed that elevation accuracy is well within dataset specifications in areas of low- to modest-relief but that errors generally increase in high-relief terrains. Statistical analyses performed with the objective of characterizing the error structure in two study sites within the U.S. have highlighted the existence of correlations between elevation residuals and slope gradient, slope bearing and elevation. In particular, the analyses show that the largest errors occur on steep slopes and that slope bearing has a marked influence on the sign of the elevation residuals. Based on these findings we are currently investigating a method for correcting relevant vertical errors in SRTM-derived DEMs according to their topographic location. We propose to use a combination of indices derived from the statistical analyses to predict the occurrence, magnitude and sign of the vertical errors.

  2. TanDEM-X Mission: Overview and Evaluation of intermediate Results

    NASA Astrophysics Data System (ADS)

    Soergel, U.; Jacobsen, K.; Schack, L.

    2013-10-01

    The German Aerospace Center (DLR, Deutsches Zentrum für Luft- und Raumfahrt) currently conducts the bistatic interferometric synthetic aperture radar (SAR) mission TanDEM-X, which shall result in a DEM of global coverage with unprecedented resolution and accuracy according to the DTED level 3 standard. The mission is based on the two SAR satellites TerraSAR-X and TanDEM-X, launched in June 2007 and June 2010, respectively. After the commissioning phase of the TanDEM-X satellite and the orbital adjustment, bistatic image acquisition in close formation began at the end of 2010. The data collection for the mission is scheduled to last about three years, i.e., most of the required data have already been gathered. Based on these data, DLR will conduct several processing steps in order to finally produce a global and seamless DEM of the Earth's landmass which shall meet the envisaged specifications. Since the entire mission is an endeavor in the framework of a public-private partnership, the private partner, Astrium, will eventually commercialize the DEM product. In this paper, we provide an overview of the data collection and the deliverables that will come along with the TanDEM-X mission. Furthermore, we analyze a DEM derived from early-stage intermediate products of the mission.

  3. High-resolution Pleiades DEMs and improved mapping methods for the E-Corinth marine terraces

    NASA Astrophysics Data System (ADS)

    de Gelder, Giovanni; Fernández-Blanco, David; Delorme, Arthur; Jara-Muñoz, Julius; Melnick, Daniel; Lacassin, Robin; Armijo, Rolando

    2016-04-01

    The newest generation of satellite imagery provides exciting new possibilities for highly detailed mapping, with ground resolution of sub-metric pixels and absolute accuracy within a few meters. This opens new avenues for the analysis of geologic and geomorphic landscape features, especially since photogrammetric methods allow the extraction of detailed topographic information from these satellite images. We used tri-stereo imagery from the Pleiades platform of the CNES in combination with Euclidium software for image orientation, and Micmac software for dense matching, to develop state-of-the-art, 2 m resolution digital elevation models (DEMs) for eight areas in Greece. Here, we present our mapping results for an area in the eastern Gulf of Corinth, which contains one of the most extensive and well-preserved flights of marine terraces worldwide. The spatial extent of the terraces has been determined by an iterative combination of an automated surface classification model for terrain slope and roughness, and qualitative assessment of satellite imagery, DEM hillshade maps, slope maps, as well as detailed topographic analyses of profiles and contours. We determined marine terrace shoreline angles by means of swath profiles that run perpendicular to the paleo-seacliffs, using the graphical interface TerraceM. Our analysis provided us with a minimum and maximum estimate of the paleoshoreline location on ~750 swath profiles, by using the present-day cliff slope as an approximation for its paleo-cliff counterpart. After correlating the marine terraces laterally we obtained 16 different terrace levels, recording Quaternary sea-level highstands of both major interglacial and several interstadial periods. Our high-resolution Pleiades DEMs and improved method for paleoshoreline determination allowed us to produce a marine terrace map of unprecedented detail, containing more terrace sub-levels than hitherto. Our mapping demonstrates that we are no longer limited by the

  4. Mechanical behavior modeling of sand-rubber chips mixtures using discrete element method (DEM)

    NASA Astrophysics Data System (ADS)

    Eidgahee, Danial Rezazadeh; Hosseininia, Ehsan Seyedi

    2013-06-01

    Rubber shreds mixed with sandy soils are widely used for geotechnical purposes due to their specific controlled compressibility characteristics and light weight. Various studies have been carried out on sand and rubber chip contents in order to restrain the compressibility of the mass in different structures such as backfills, road embankments, etc. Sand-rubber mixtures can be made with different rubber contents, which changes the mechanical properties of the blend. The aim of this paper is to study the effect of adding different rubber portions on the global engineering properties of the mixtures. This study is performed using the Discrete Element Method (DEM). The simulations showed that adding rubber up to a particular fraction can improve the maximum bearing stress characteristics compared to sand-only masses. Taking the difference between sand and rubber stiffness into account, the interpretation of the results can be extended to other mixtures of soft and rigid particles, such as powders or polymers.

  5. Evaluation of TanDEM-X elevation data for geomorphological mapping and interpretation in high mountain environments - A case study from SE Tibet, China

    NASA Astrophysics Data System (ADS)

    Pipaud, Isabel; Loibl, David; Lehmkuhl, Frank

    2015-10-01

    Digital elevation models (DEMs) are a prerequisite for many different applications in the field of geomorphology. In this context, the two near-global medium resolution DEMs originating from the SRTM and ASTER missions are widely used. For detailed geomorphological studies, particularly in high mountain environments, these datasets are, however, known to have substantial disadvantages beyond their posting, i.e., data gaps and miscellaneous artifacts. The upcoming TanDEM-X DEM is a promising candidate to improve this situation by application of state-of-the-art radar technology, exhibiting a posting of 12 m and less proneness to errors. In this study, we present a DEM processed from a single TanDEM-X CoSSC scene, covering a study area in the extreme relief of the eastern Nyainqêntanglha Range, southeastern Tibet. The potential of the resulting experimental TanDEM-X DEM for geomorphological applications was evaluated by geomorphometric analyses and an assessment of landform cognoscibility and artifacts in comparison to the ASTER GDEM and the recently released SRTM 1″ DEM. Detailed geomorphological mapping was conducted for four selected core study areas in a manual approach, based exclusively on the TanDEM-X DEM and its basic derivatives. The results show that the self-processed TanDEM-X DEM yields a detailed and widely consistent landscape representation. It thus fosters geomorphological analysis by visual and quantitative means, allowing delineation of landforms down to footprints of 30 m. Even in this preliminary state, the TanDEM-X elevation data are widely superior to the ASTER and SRTM datasets, primarily owing to their significantly higher resolution and lower susceptibility to artifacts that hamper landform interpretation. Conversely, challenges toward interferometric DEM generation were identified, including (i) triangulation facets and missing topographic information resulting from radar layover on steep slopes facing toward the radar sensor, (ii) low

  6. Shoreline Mapping with Integrated HSI-DEM using Active Contour Method

    NASA Astrophysics Data System (ADS)

    Sukcharoenpong, Anuchit

    Shoreline mapping has been a critical task for federal/state agencies and coastal communities. It supports important applications such as nautical charting, coastal zone management, and legal boundary determination. Current attempts to incorporate data from hyperspectral imagery to increase the efficiency and efficacy of shoreline mapping have been limited due to the complexity in processing its data as well as its inferior spatial resolution when compared to multispectral imagery or to sensors such as LiDAR. As advancements in remote-sensing technologies increase sensor capabilities, the ability to exploit the spectral information carried in hyperspectral images becomes more imperative. This work employs a new approach to extracting shorelines from AVIRIS hyperspectral images by combining them with a LiDAR-based DEM using a multiphase active contour segmentation technique. Several techniques, such as study of object spectra and knowledge-based segmentation for initial contour generation, have been employed in order to achieve a sub-pixel level of accuracy and maintain low computational expenses. Introducing a DEM into hyperspectral image segmentation proves to be a useful tool to eliminate misclassifications and improve shoreline positional accuracy. Experimental results show that mapping shorelines from hyperspectral imagery and a DEM can be a promising approach, as many further applications can be developed to exploit the rich information found in hyperspectral imagery.

  7. DEM generated from InSAR in mountainous terrain and its accuracy analysis

    NASA Astrophysics Data System (ADS)

    Hu, Hongbing; Zhan, Yulan

    2011-02-01

    A Digital Elevation Model (DEM) derived from survey data is accurate, but it is very expensive and time-consuming to produce. In recent years, remote sensing techniques including Synthetic Aperture Radar Interferometry (InSAR) have been developed as a powerful method to derive high precision DEMs, especially in mountainous or deep forest areas. The purpose of this paper is to illustrate the principle of InSAR and show the result of a case study in Gejiu city, Yunnan province, China. The accuracy of the DEM derived from InSAR (abbreviated as InSAR-DEM) is also evaluated by comparing it with a DEM generated from a topographic map at the scale of 1:50000 (abbreviated as TOP-DEM). The results show that: (1) For the overall precision of the whole selected area, obtained by subtracting InSAR-DEM from TOP-DEM, the maximum, minimum, RMSE and mean of the difference between the two DEMs are 203 m, -188 m, 26.9 m and 5.7 m respectively. (2) The topographic trend represented by the two DEMs is coincident, even though TOP-DEM is finer than InSAR-DEM, especially in the valleys. (3) Contour maps with intervals of 100 m and 50 m converted from InSAR-DEM and TOP-DEM respectively show an accordant relief trend. Contours from TOP-DEM are smoother than those from InSAR-DEM, while contours from InSAR-DEM have more islands than those from TOP-DEM. (4) Coherence has a great influence on the precision of InSAR-DEM: the precision in low-coherence areas approaches 100 m, while that in high-coherence areas can reach the meter level. (5) The relief trends of six profiles represented by InSAR-DEM and TOP-DEM are accordant, with only tiny differences in detail. InSAR-DEM displays relief in relatively flat areas, including water surfaces, which to a certain extent reflects the influence of the flat-earth effect on InSAR.

  8. High-resolution DEMs in the study of rainfall- and earthquake-induced landslides: Use of a variable window size method in digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Iwahashi, Junko; Kamiya, Izumi; Yamagishi, Hiromitsu

    2012-06-01

    We undertake digital terrain analyses of rainfall- and earthquake-induced landslides in Japan, using high-resolution orthoimagery and Light Detection and Ranging (LiDAR) DEMs. Our aims are twofold: to demonstrate an effective method for dealing with high-resolution DEMs, which are often too detailed for landslide assessments, and to evaluate the topographic differences between rainfall- and earthquake-induced landslides. The study areas include the Izumozaki (1961 and 2004 heavy rainfalls), Niihama (2004 heavy rainfalls), Houfu (2009 heavy rainfalls), and Hanokidachi/Kurikoma-dam regions (the 2008 M 7.2 Iwate-Miyagi Nairiku earthquake); these five regions contain 7,106 landslides. We use two topographic attributes (the slope gradient and the Laplacian) calculated from DEMs in varying window sizes. The hit rates for statistical prediction of landslide cells through discriminant analyses are calculated using the two topographic attributes as explanatory variables, and the landslide inventory data as the dependent variable. In cases of surface failure, the hit rates are found to diminish when the window size of the topographic attributes is too large or too small, indicating that an optimal scale factor is key in assessing shallow landslides. The representative window sizes are approximately 30 m for shallow landslides; the optimal window size may be directly related to the average size of landslides in each region. We also find a stark contrast between rainfall- and earthquake-induced landslides. Rainfall-induced landslides are always most common at a slope gradient of 30°, but the frequency of earthquake-induced landslides increases exponentially with slope gradient. We find that the Laplacian, i.e., the attributes of surface convexity and concavity, and the slope gradient are both important factors for rainfall-induced landslides, whereas earthquake-induced landslides are influenced mainly by slope steepness.
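
    Computing the two attributes used above (slope gradient and Laplacian) at a chosen window size can be approximated by smoothing the DEM over that window before differentiating. The sketch below shows one hedged way to do this with NumPy and SciPy; the smoothing-then-differencing shortcut and the parameter names are illustrative assumptions, not the authors' exact variable-window formulation.

    ```python
    import numpy as np
    from scipy import ndimage

    def slope_and_laplacian(dem, cellsize, window_cells):
        """Slope gradient (degrees) and Laplacian of a DEM evaluated at a given
        window size, approximated by smoothing before differentiation."""
        smoothed = ndimage.uniform_filter(dem.astype(float), size=window_cells)
        dzdy, dzdx = np.gradient(smoothed, cellsize)
        slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
        laplacian = ndimage.laplace(smoothed) / cellsize ** 2
        return slope_deg, laplacian

    # Synthetic 1 m LiDAR DEM tile; evaluate attributes over a ~30 m window.
    rng = np.random.default_rng(3)
    dem = np.cumsum(rng.normal(0, 0.2, (100, 100)), axis=0) + 500.0
    slope, lap = slope_and_laplacian(dem, cellsize=1.0, window_cells=30)
    print(slope.mean(), lap.mean())
    ```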

  9. Evaluation of the performance of the cross-flow air classifier in manufactured sand processing via CFD-DEM simulations

    NASA Astrophysics Data System (ADS)

    Petit, H. A.; Irassar, E. F.; Barbosa, M. R.

    2017-03-01

    Manufactured sands are particulate materials obtained as a by-product of rock crushing. Particle sizes in the sand can be as large as 6 mm and as small as a few microns. The concrete industry has been increasingly using these sands as fine aggregates to replace natural sands. The main shortcoming is the excess of particles smaller than 0.075 mm (dust). This problem has traditionally been solved by a washing process. Air classification is being studied to replace the washing process and avoid the use of water. The complex classification process can only be understood with the aid of CFD-DEM simulations. This paper evaluates the applicability of a cross-flow air classifier to reduce the amount of dust in manufactured sands. Computational fluid dynamics (CFD) and discrete element modelling (DEM) were used for the assessment. Results show that the correct classification set-up improves the size distribution of the raw materials. The cross-flow air classification is found to be influenced by the particle size distribution and the turbulence inside the chamber. The classifier can be re-designed to work at low inlet velocities to produce manufactured sand for the concrete industry.

  10. The influence of accuracy, grid size, and interpolation method on the hydrological analysis of LiDAR derived dems: Seneca Nation of Indians, Irving NY

    NASA Astrophysics Data System (ADS)

    Clarkson, Brian W.

    Light Detection and Ranging (LiDAR) derived Digital Elevation Models (DEMs) provide accurate, high resolution digital surfaces for precise topographic analysis. The following study investigates the accuracy of LiDAR derived DEMs by calculating the Root Mean Square Error (RMSE) of multiple interpolation methods with grid cells ranging from 0.5 to 10 meters. A raster cell with smaller dimensions will drastically increase the amount of detail represented in the DEM by increasing the number of elevation values across the study area. Increased horizontal resolutions have raised the accuracy of the interpolated surfaces and the contours generated from the digitized landscapes. As the raster grid cells decrease in size, the level of detail of hydrological processes will significantly improve compared to coarser resolutions, including the publicly available National Elevation Datasets (NEDs). Utilizing the LiDAR derived DEM with the lowest RMSE as the 'ground truth', watershed boundaries were delineated for a sub-basin of the Clear Creek Watershed within the territory of the Seneca Nation of Indians located in Southern Erie County, NY. An investigation of the watershed area and boundary location revealed considerable differences when comparing the results of applying different interpolation methods to DEM datasets of different horizontal resolutions. Stream networks coupled with watersheds were used to calculate peak flow values for the 10-meter NEDs and the LiDAR derived DEMs.

  11. Evaluation of ASTER and SRTM DEM data for lahar modeling: A case study on lahars from Popocatépetl Volcano, Mexico

    NASA Astrophysics Data System (ADS)

    Huggel, C.; Schneider, D.; Miranda, P. Julio; Delgado Granados, H.; Kääb, A.

    2008-02-01

    Lahars are among the most serious and far-reaching volcanic hazards. In regions with potential interactions of lahars with populated areas and human structures, the assessment of the related hazards is crucial for undertaking appropriate mitigating actions and reducing the associated risks. Modeling of lahars has become an important tool in such assessments, in particular where the geologic record of past events is insufficient. Mass-flow modeling strongly relies on digital terrain data. Availability of digital elevation models (DEMs), however, is often limited and thus an obstacle to lahar modeling. Remote-sensing technology has now opened new perspectives in generating DEMs. In this study, we evaluate the feasibility of DEMs derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Shuttle Radar Topography Mission (SRTM) for lahar modeling on Popocatépetl Volcano, Mexico. Two GIS-based models are used for lahar modeling, LAHARZ and a flow-routing-based debris-flow model (modified single-flow direction model, MSF), both predicting areas potentially affected by lahars. Results of the lahar modeling show that both the ASTER and SRTM DEMs are basically suitable for use with LAHARZ and MSF. Flow-path prediction is found to be more reliable with SRTM data, though with a coarser spatial resolution. Errors of the ASTER DEM affecting the prediction of flow paths due to the sensor geometry are associated with deeply incised gorges with north-facing slopes. LAHARZ is more sensitive to errors of the ASTER DEM than the MSF model. Lahar modeling with the ASTER DEM results in a more finely spaced predicted inundation area but does not add any significant information in comparison with the SRTM DEM. Lahars at Popocatépetl are modeled with volumes of 1 × 10^5 to 8 × 10^6 m^3 based on ice-melt scenarios of the glaciers on top of the volcano and data on recent and historical lahar events. As regards recently observed lahars, the travel

  12. ASTER DEM performance

    USGS Publications Warehouse

    Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.

    2005-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability, using its near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one for nadir-viewing and another for backward-viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out by Japanese and U.S. science teams separately, using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m. This appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed by using GCPs to generate the stereo data.

  13. A hierarchical pyramid method for managing large-scale high-resolution drainage networks extracted from DEM

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin

    2015-12-01

    The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which generates coarsened vector drainage networks from the originals iteratively. The method is based on the Horton-Strahler (H-S) order scheme. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas that are presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin, extracted from a 1-m-resolution airborne LiDAR DEM, and to the full Yangtze River basin in China, extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, and its data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
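
    A minimal sketch (Python, not the authors' code) of one coarsening step of such a pyramid: reaches with the lowest Horton-Strahler order are pruned and their sub-basin attribute is folded into the receiving reach. The Reach fields and the area-merging rule are illustrative assumptions only.

    # Hypothetical sketch of one coarsening step: prune the lowest-order
    # reaches and fold their drainage area into the reach they flow into.
    from dataclasses import dataclass
    from typing import Optional, Dict

    @dataclass
    class Reach:
        rid: int                   # reach id (illustrative field)
        order: int                 # Horton-Strahler order
        downstream: Optional[int]  # id of the receiving reach (None at outlet)
        area_km2: float            # sub-basin area draining to this reach

    def coarsen_once(net: Dict[int, Reach]) -> Dict[int, Reach]:
        lowest = min(r.order for r in net.values())
        # keep only reaches above the lowest order (copied, not aliased)
        kept = {rid: Reach(r.rid, r.order, r.downstream, r.area_km2)
                for rid, r in net.items() if r.order > lowest}
        for r in net.values():
            if r.order == lowest and r.downstream in kept:
                # merge the pruned reach's sub-basin into its receiver
                kept[r.downstream].area_km2 += r.area_km2
        return kept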

  14. Evaluating DEM conditioning techniques, elevation source data, and grid resolution for field-scale hydrological parameter extraction

    NASA Astrophysics Data System (ADS)

    Woodrow, Kathryn; Lindsay, John B.; Berg, Aaron A.

    2016-09-01

    Although digital elevation models (DEMs) prove useful for a number of hydrological applications, they are often the end result of numerous processing steps, each of which contains uncertainty. These uncertainties have the potential to greatly influence DEM quality and to further propagate to DEM-derived attributes including derived surface and near-surface drainage patterns. This research examines the impacts of DEM grid resolution, elevation source data, and conditioning techniques on the spatial and statistical distribution of field-scale hydrological attributes for a 12,000 ha watershed of an agricultural area within southwestern Ontario, Canada. Three conditioning techniques, including depression filling (DF), depression breaching (DB), and stream burning (SB), were examined. The catchments draining to each boundary of 7933 agricultural fields were delineated using the surface drainage patterns modeled from LiDAR data interpolated to 1 m, 5 m, and 10 m resolution DEMs, and from a 10 m resolution photogrammetric DEM. The results showed that variation in DEM grid resolution resulted in significant differences in the spatial and statistical distributions of contributing areas and the distributions of downslope flowpath length. Degrading the grid resolution of the LiDAR data from 1 m to 10 m resulted in a disagreement in mapped contributing areas of between 29.4% and 37.3% of the study area, depending on the DEM conditioning technique. The disagreements among the field-scale contributing areas mapped from the 10 m LiDAR DEM and photogrammetric DEM were large, with nearly half of the study area draining to alternate field boundaries. Differences in derived contributing areas and flowpaths among the various conditioning techniques increased substantially at finer grid resolutions, with the largest disagreement among mapped contributing areas occurring between the 1 m resolution DB DEM and the SB DEM (37% disagreement) and the DB-DF comparison (36.5% disagreement in mapped
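
    As a rough illustration of the disagreement metric reported above (not the authors' workflow), the share of the study area that drains to a different field boundary under two conditioning techniques can be computed from two label rasters; the nodata convention and the toy grids are assumptions.

    # Minimal sketch: percent of valid cells assigned to different
    # field-boundary ids by two different conditioning techniques.
    import numpy as np

    def percent_disagreement(labels_a: np.ndarray, labels_b: np.ndarray,
                             nodata: int = -1) -> float:
        valid = (labels_a != nodata) & (labels_b != nodata)
        differing = (labels_a != labels_b) & valid
        return 100.0 * differing.sum() / valid.sum()

    # toy 3x3 label grids for illustration
    a = np.array([[1, 1, 2], [1, 2, 2], [3, 3, 2]])
    b = np.array([[1, 2, 2], [1, 2, 2], [3, 2, 2]])
    print(f"{percent_disagreement(a, b):.1f}% of cells change boundary")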

  15. The evaluation of unmanned aerial system-based photogrammetry and terrestrial laser scanning to generate DEMs of agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Ouédraogo, Mohamar Moussa; Degré, Aurore; Debouche, Charles; Lisein, Jonathan

    2014-06-01

    Agricultural watersheds tend to be places of intensive farming activities that permanently modify their microtopography. The surface characteristics of the soil vary depending on the crops that are cultivated in these areas. Agricultural soil microtopography plays an important role in the quantification of runoff and sediment transport because the presence of crops, crop residues, furrows and ridges may impact the direction of water flow. To better assess such phenomena, 3-D reconstructions of high-resolution agricultural watershed topography are essential. Fine-resolution topographic data collection technologies can be used to discern highly detailed elevation variability in these areas. Knowledge of the strengths and weaknesses of existing technologies used for data collection on agricultural watersheds may be helpful in choosing an appropriate technology. This study assesses the suitability of terrestrial laser scanning (TLS) and unmanned aerial system (UAS) photogrammetry for collecting the fine-resolution topographic data required to generate accurate, high-resolution digital elevation models (DEMs) in a small watershed area (12 ha). Because of farming activity, 14 TLS scans (≈25 points m⁻²) were collected without using high-definition surveying (HDS) targets, which are generally used to mesh adjacent scans. To evaluate the accuracy of the DEMs created from the TLS scan data, 1098 ground control points (GCPs) were surveyed using a real-time kinematic global positioning system (RTK-GPS). Linear regressions were then applied to each DEM to remove vertical errors from the TLS point elevations, errors caused by the non-perpendicularity of the scanner's vertical axis to the local horizontal plane, and errors correlated with the distance to the scanner's position. The scans were then meshed to generate a DEM_TLS with a 1 × 1 m spatial resolution. The Agisoft PhotoScan and MicMac software packages were used to process the aerial photographs and generate a DEM_PSC

  16. Numerical slope stability simulations of chasma walls in Valles Marineris/Mars using a distinct element method (dem).

    NASA Astrophysics Data System (ADS)

    Imre, B.

    2003-04-01

    The 8- to 10-km depths of Valles Marineris (VM) offer excellent views into the upper Martian crust. Layering, fracturing, lithology, stratigraphy and the content of volatiles have influenced the evolution of the Valles Marineris wall slopes. But these parameters also reflect the development of VM and its wall slopes. The scope of this work is to gain understanding of these parameters by back-simulating the development of the wall slopes. For that purpose, the two-dimensional Particle Flow Code PFC2D has been chosen (ITASCA, version 2.00-103). PFC2D is a distinct element code for numerical modelling of the movements and interactions of assemblies of arbitrarily sized circular particles. Particles may be bonded together to represent a solid material. Movements of particles are unlimited, which is of importance because results of open systems with numerous unknown variables are non-unique and therefore highly path dependent. This DEM allows the simulation of whole development paths of VM walls, which makes confirmation of the model more complete (e.g. Oreskes et al., Science 263, 1994). To reduce the number of unknown variables, a proper (that means as simple as possible) field site had to be selected. The northern wall of eastern Candor Chasma has been chosen. This wall is up to 8 km high and represents a significant outcrop of the upper Martian crust. It is relatively uncomplex, well-aligned and of simple morphology. Currently the work on the model is at the stage of performing the parameter study. Results will be presented via poster at the EGS Meeting.

  17. Construction of lunar DEMs based on reflectance modelling

    NASA Astrophysics Data System (ADS)

    Grumpe, Arne; Belkhir, Fethi; Wöhler, Christian

    2014-06-01

    Existing lunar DEMs obtained based on laser altimetry or photogrammetric image analysis are characterised by high large-scale accuracies while their lateral resolution is strongly limited by noise or interpolation artifacts. In contrast, image-based photometric surface reconstruction approaches reveal small-scale surface detail but become inaccurate on large spatial scales. The framework proposed in this study therefore combines photometric image information of high lateral resolution and DEM data of comparably low lateral resolution in order to obtain DEMs of high lateral resolution which are also accurate on large spatial scales. Our first approach combines an extended photoclinometry scheme and a shape from shading based method. A novel variational surface reconstruction method further increases the lateral resolution of the DEM such that it reaches that of the underlying images. We employ the Hapke IMSA and AMSA reflectance models with two different formulations of the single-particle scattering function, such that the single-scattering albedo of the surface particles and optionally the asymmetry parameter of the single-particle scattering function can be estimated pixel-wise. As our DEM construction methods require co-registered images, an illumination-independent image registration scheme is developed. An evaluation of our framework based on synthetic image data yields an average elevation accuracy of the constructed DEMs of better than 20 m as long as the correct reflectance model is assumed. When comparing our DEMs to LOLA single track data, absolute elevation accuracies around 30 m are obtained for test regions that cover an elevation range of several thousands of metres. The proposed illumination-independent image registration method yields subpixel accuracy even in the presence of 3D perspective distortions. The pixel-wise reflectance parameters estimated simultaneously with the DEM reflect compositional contrasts between different surface units

  18. Topographic changes due to the 2008 Mw 7.9 Wenchuan earthquake as revealed by the differential DEM method

    NASA Astrophysics Data System (ADS)

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2014-07-01

    Landscape evolution in active orogenic regions is inevitably affected by the repeated strong earthquakes triggered by the corresponding active faults. However, the lack of adequate methods for the documentation and monitoring of mountain-building processes has resulted in a shortage of quantitative estimates of orogenic and eroded volumes. A strong earthquake and its associated co-seismic landslides represent a sudden pulse in landscape evolution in tectonically active areas. The 2008 Mw 7.9 Wenchuan earthquake dramatically modified the topography of the Longmen Shan region. Based on topographic data before the earthquake and stereo pairs of post-earthquake remote sensing imagery, we derived pre- and post-earthquake DEMs (digital elevation models) of the three regions along the Longmen Shan Thrust Belt. By comparing the geomorphic features before and after the earthquake, we find that the Wenchuan earthquake smoothed the steep relief and caused a co-seismic uplift of the Longmen Shan region. The extent of medium-relief regions increased, whereas that of high-relief regions decreased, indicating that the local relief is controlled by repeated strong earthquakes. The changed slope aspect indicates that the formation and modification of the east- and west-facing slopes are controlled by tectonic events in the Longmen Shan region, which might be associated with the regional stress field. However, the unchanged aspects of other slopes might be controlled by long-term erosion rather than tectonic events. The topographic changes, landslide volume and co-seismic uplift indicate that the greatest seismically induced denudation occurred in association with a thrust faulting mechanism and low-angle fault geometry. Our findings reveal that the local relief has been shaped by the localized, seismically induced high rate of denudation within the plateau margins, and that the formation of local relief is also related to tectonic events, especially the events that have occurred on low
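
    A hedged sketch of the differential DEM idea described above: subtract a co-registered pre-event DEM from a post-event DEM and sum the positive and negative changes into volumes. Grid alignment, units (metres) and the variable names are assumptions, not taken from the paper.

    # Differential DEM sketch on two already co-registered grids.
    import numpy as np

    def dem_difference_stats(pre: np.ndarray, post: np.ndarray, cellsize: float):
        dz = post - pre                                    # elevation change per cell (m)
        cell_area = cellsize ** 2                          # m^2 per cell
        uplift_volume = dz[dz > 0].sum() * cell_area       # m^3 gained
        denudation_volume = -dz[dz < 0].sum() * cell_area  # m^3 lost
        return dz, uplift_volume, denudation_volume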

  19. GPU accelerated Discrete Element Method (DEM) molecular dynamics for conservative, faceted particle simulations

    NASA Astrophysics Data System (ADS)

    Spellings, Matthew; Marson, Ryan L.; Anderson, Joshua A.; Glotzer, Sharon C.

    2017-04-01

    Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks-Chandler-Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
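
    For reference, the conservative WCA pair potential mentioned above has the standard textbook form (quoted from the general literature, not from this paper):

    V_{\mathrm{WCA}}(r) =
      \begin{cases}
        4\varepsilon\left[\left(\sigma/r\right)^{12} - \left(\sigma/r\right)^{6}\right] + \varepsilon, & r < 2^{1/6}\sigma, \\
        0, & r \ge 2^{1/6}\sigma,
      \end{cases}

    i.e. a Lennard-Jones potential truncated at its minimum and shifted so that it is purely repulsive and continuous at the cutoff.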

  20. An efficient and comprehensive method for drainage network extraction from DEM with billions of pixels using a size-balanced binary search tree

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Li, Tiejian; Huang, Yuefei; Li, Jiaye; Wang, Guangqian

    2015-06-01

    With the increasing resolution of digital elevation models (DEMs), computational efficiency problems have been encountered when extracting the drainage network of a large river basin at billion-pixel scales. The efficiency of the most time-consuming depression-filling pretreatment has been improved by using the O(NlogN) complexity least-cost path search method, but the complete extraction steps following this method have not been proposed and tested. In this paper, an improved O(NlogN) algorithm was proposed by introducing a size-balanced binary search tree (BST) to improve the efficiency of the depression-filling pretreatment further. The following extraction steps, including the flow direction determination and the upslope area accumulation, were also redesigned to benefit from this improvement. Therefore, an efficient and comprehensive method was developed. The method was tested to extract drainage networks of 31 river basins with areas greater than 500,000 km2 from the 30-m-resolution ASTER GDEM and two sub-basins with areas of approximately 1000 km2 from the 1-m-resolution airborne LiDAR DEM. Complete drainage networks with both vector features and topographic parameters were obtained with time consumptions in O(NlogN) complexity. The results indicate that the developed method can be used to extract entire drainage networks from DEMs with billions of pixels with high efficiency.
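
    The depression-filling pretreatment referred to above is, in its widely used priority-flood form, an O(NlogN) sweep driven by a priority structure. The sketch below (Python, illustrative only) uses a binary heap from heapq in place of the paper's size-balanced binary search tree; the grid handling and eight-neighbour connectivity are assumptions.

    # Illustrative priority-flood depression filling in O(N log N).
    import heapq
    import numpy as np

    def priority_flood_fill(dem: np.ndarray) -> np.ndarray:
        filled = dem.astype(float).copy()
        rows, cols = filled.shape
        visited = np.zeros((rows, cols), dtype=bool)
        heap = []
        # seed the priority queue with all edge cells
        for r in range(rows):
            for c in range(cols):
                if r in (0, rows - 1) or c in (0, cols - 1):
                    heapq.heappush(heap, (filled[r, c], r, c))
                    visited[r, c] = True
        while heap:
            z, r, c = heapq.heappop(heap)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                           (-1, -1), (-1, 1), (1, -1), (1, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                    visited[nr, nc] = True
                    # raise pit cells to the lowest spill level seen so far
                    filled[nr, nc] = max(filled[nr, nc], z)
                    heapq.heappush(heap, (filled[nr, nc], nr, nc))
        return filled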

  1. The role of method of production and resolution of the DEM on slope-units delineation for landslide susceptibility assessment - Ubaye Valley, French Alps case study

    NASA Astrophysics Data System (ADS)

    Schlögel, Romy; Marchesini, Ivan; Alvioli, Massimiliano; Reichenbach, Paola; Rossi, Mauro; Malet, Jean-Philippe

    2016-04-01

    Landslide susceptibility assessment forms the basis of any hazard mapping, which is one of the essential parts of quantitative risk mapping. For the same study area, different susceptibility maps can be obtained depending on the type of susceptibility mapping method, the mapping unit, and the scale. In the Ubaye Valley (South French Alps), we investigate the effect of the resolution and method of production of the DEM on the delineation of slope units for landslide susceptibility mapping. Slope-unit delineation was carried out using multiple combinations of circular variance and minimum area values, which are the input parameters of a new software package for terrain partitioning. This method takes into account the homogeneity of aspect direction inside each unit and the inhomogeneity between different units. We computed slope-unit delineations for 5, 10 and 25 m resolution DEMs, and investigated the statistical distributions of morphometric variables within the different polygons. Then, for each slope-unit partitioning, we calibrated a landslide susceptibility model, considering landslide bodies and scarps as the dependent variable (binary response). This work aims to analyse the role of DEM resolution on slope-unit delineation for landslide susceptibility assessment. The Area Under the Curve of the Receiver Operating Characteristic is used to assess the susceptibility model calculations. In addition, we analysed the performance of the Logistic Regression Model further by looking at the percentage of significant variables in the statistical analyses. Results show that smaller slope units have a better chance of containing a smaller number of thematic and morphometric variables, allowing for an easier classification. The reliability of the models according to the DEM resolution considered, as well as the use of scarp areas and landslide bodies presence/absence as the dependent variable, is discussed.
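
    A minimal sketch, assuming scikit-learn is acceptable, of the calibration-and-scoring step described above: fit a logistic regression on per-slope-unit predictors with landslide presence/absence as the binary response and report the AUC of the ROC curve. The synthetic data and variable names are placeholders, not the study's variables.

    # Illustrative logistic-regression susceptibility model with AUC scoring.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))   # morphometric/thematic predictors per slope unit (synthetic)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # presence/absence

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"AUC = {auc:.2f}")

    In practice the AUC would be computed on validation slope units rather than on the calibration set itself.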

  2. On the investigation of the performances of a DEM-based hydrogeomorphic floodplain identification method in a large urbanized river basin: the Tiber river case study in Italy

    NASA Astrophysics Data System (ADS)

    Nardi, Fernando; Biscarini, Chiara; Di Francesco, Silvia; Manciola, Piergiorgio

    2013-04-01

    consequently identified as those river buffers, draining towards the channel, with an elevation that is less than the maximum flow depth of the corresponding outlet. Keeping in mind that this hydrogeomorphic model's performance is strictly related to the quality and properties of the input DEM, and that the intent of this kind of methodology is not to substitute standard flood modeling and mapping methods, in this work the performance of this approach is qualitatively evaluated by comparing results with standard flood maps. The Tiber River basin, one of the main river basins in Italy, covering a drainage area of approximately 17,000 km2, was selected as the case study. This comparison is interesting for understanding the performance of the model in a large and complex domain where the impact of the urbanization matrix is significant. Results of this investigation confirm the potential of such DEM-based floodplain mapping models for providing a fast, timely, homogeneous and continuous inundation scenario to urban planners and decision makers, but also the drawbacks of using such a methodology where humans are significantly and rapidly modifying the surface properties.

  3. Shading-based DEM refinement under a comprehensive imaging model

    NASA Astrophysics Data System (ADS)

    Peng, Jianwei; Zhang, Yi; Shan, Jie

    2015-12-01

    This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsic ill-posed problem, the least squares method and a subsequent optimization procedure are applied in this approach to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with a higher fidelity; and at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than the conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.

  4. The Oracle of DEM

    NASA Astrophysics Data System (ADS)

    Gayley, Kenneth

    2013-06-01

    The predictions of the famous Greek oracle of Delphi were just ambiguous enough to seem to convey information, yet the user was only seeing their own thoughts. Are there ways in which X-ray spectral analysis is like that oracle? It is shown, using heuristic, generic response functions to mimic actual spectral inversion, that the widely known ill conditioning, which makes formal inversion impossible in the presence of random noise, also makes a wide variety of different source distributions (DEMs) produce quite similar X-ray continua and resonance-line fluxes. Indeed, the sole robustly inferable attribute for a thermal, optically thin resonance-line spectrum with normal abundances in CIE is its average temperature. The shape of the DEM distribution, on the other hand, is not well constrained, and may actually depend more on the analysis method, no matter how sophisticated, than on the source plasma. The case is made that X-ray spectra can tell us average temperature, metallicity, and absorbing column, but the main thing they cannot tell us is the main thing they are most often used to infer: the differential emission measure distribution.

  5. Advanced Usability Evaluation Methods

    DTIC Science & Technology

    2007-04-01

    Report documentation fragment: Advanced Usability Evaluation Methods, by Terence S. Andre, Lt Col, USAF, and Margaret Schurig, Human Factors Design Specialist, The Boeing Co. The indexed excerpt also cites: eye tracking in usability evaluation: A practitioner's guide. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied...

  6. Morphometric evaluation of the Afşin-Elbistan lignite basin using kernel density estimation and Getis-Ord's statistics of DEM derived indices, SE Turkey

    NASA Astrophysics Data System (ADS)

    Sarp, Gulcan; Duzgun, Sebnem

    2015-11-01

    Morphometric analysis of the river network, basins and relief using geomorphic indices, combined with geostatistical analyses of a Digital Elevation Model (DEM), is a useful tool for discussing the morphometric evolution of a basin area. In this study, three different indices, including the valley floor width to height ratio (Vf), stream gradient (SL), and stream sinuosity, were applied to the Afşin-Elbistan lignite basin to test the imprints of tectonic activity. Perturbations of these indices are usually indicative of differences in the resistance of outcropping lithological units to erosion and of active faulting. To map the clusters of high and low index values, kernel density estimation (K) and the Getis-Ord Gi∗ statistic were applied to the DEM-derived indices. The K method and the Gi∗ statistic, by highlighting hot spots and cold spots of the SL index, stream sinuosity and Vf index values, helped to identify the relative tectonic activity of the basin area. The results indicated that the estimations by K and Gi∗, including three conceptualizations of spatial relationships (CSR) for hot spots (percent volume contours 50 and 95, categorized as high and low, respectively), yielded almost identical results in regions of high tectonic activity and low tectonic activity. According to the K and Getis-Ord Gi∗ statistics, the northern, northwestern and southern parts of the basin indicate high tectonic activity. On the other hand, the low-elevation plain in the central part of the basin area shows relatively low tectonic activity.
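
    For orientation, the local Getis-Ord statistic referred to above is commonly written in the following standard form (general literature, not quoted from this paper), where x_j are the index values, w_{ij} the spatial weights and n the number of observations:

    G_i^* = \frac{\sum_j w_{ij} x_j - \bar{X} \sum_j w_{ij}}
                 {S \sqrt{\left[\, n \sum_j w_{ij}^2 - \bigl(\sum_j w_{ij}\bigr)^2 \right] / (n-1)}},
    \qquad
    \bar{X} = \frac{1}{n}\sum_j x_j, \quad
    S = \sqrt{\frac{1}{n}\sum_j x_j^2 - \bar{X}^2}.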

  7. Quality Test Various Existing dem in Indonesia Toward 10 Meter National dem

    NASA Astrophysics Data System (ADS)

    Amhar, Fahmi

    2016-06-01

    Indonesia has various DEMs from many sources, with various acquisition dates spread over the past two decades. There are DEMs from spaceborne systems (Radarsat, TerraSAR-X, ALOS, ASTER-GDEM, SRTM), airborne systems (IFSAR, Lidar, aerial photos) and also terrestrial ones. The research objective is to test their quality and to determine how to extract the best DEM for a particular area. The method uses differential GPS levelling with geodetic GPS equipment at locations verified not to have changed during the past 20 years. The results show that the DEMs from TerraSAR-X and SRTM30 have the best quality (RMSE 3.1 m and 3.5 m, respectively). Based on this research, it was inferred that these results remain consistent with the basic expectation that the coarser the spatial resolution of the DEM data, the less precise the resulting vertical heights.
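
    A minimal sketch (Python, illustrative only) of the vertical-accuracy check behind the quoted RMSE values: compare DEM elevations sampled at the GPS benchmark locations against the levelled heights. The sampling itself is assumed to have been done already; only the statistic is shown.

    # RMSE between DEM heights and GPS check-point heights.
    import numpy as np

    def rmse(dem_heights: np.ndarray, gps_heights: np.ndarray) -> float:
        diff = dem_heights - gps_heights
        return float(np.sqrt(np.mean(diff ** 2)))

    # e.g. rmse(np.array([101.2, 98.7, 76.3]), np.array([100.0, 99.5, 75.0]))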

  8. DEM-based research on the landform features of China

    NASA Astrophysics Data System (ADS)

    Tang, Guoan; Liu, Aili; Li, Fayuan; Zhou, Jieyu

    2006-10-01

    Landforms can be described and identified by parameterization of a digital elevation model (DEM). This paper discusses the large-scale geomorphological characteristics of China based on numerical analysis of terrain parameters and develops a methodology for characterizing landforms from DEMs. The methodology is implemented as a two-step process. First, terrain variables are derived from a 1-km DEM within a given statistical unit, including local relief, surface incision, elevation variance coefficient, roughness, mean slope and mean elevation. Second, each parameter, regarded as a single-band image, is combined into a multi-band image. Then ISODATA unsupervised classification and Maximum Likelihood supervised classification (a Bayesian technique) are applied for landform classification. The resulting landforms are evaluated by means of stratified sampling with respect to an existing map, and the overall classification accuracy reaches a rather high value. It is shown that the derived parameters carry sufficient physiographic information and can be used for landform classification. Because the classification method integrates multiple terrain indices, overcomes the limitations of subjective interpretation, and has a low cost, it shows promise for the classification of macroscopic relief forms. Furthermore, it contributes to the theory and methodology of DEM-based digital terrain analysis.
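
    A hedged sketch of how two of the listed terrain variables (local relief and mean slope) can be derived in a moving statistical window; the 9-cell window, the SciPy filters and the simple finite-difference slope are illustrative choices, not the paper's settings.

    # Moving-window terrain variables from a DEM array.
    import numpy as np
    from scipy import ndimage

    def terrain_variables(dem: np.ndarray, cellsize: float, window: int = 9):
        # local relief: max minus min elevation within the window
        local_relief = (ndimage.maximum_filter(dem, size=window)
                        - ndimage.minimum_filter(dem, size=window))
        # slope from finite differences, then averaged over the window
        dzdy, dzdx = np.gradient(dem, cellsize)
        slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
        mean_slope = ndimage.uniform_filter(slope_deg, size=window)
        return local_relief, mean_slope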

  9. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.

  10. Evaluation of the influence of metabolic processes and body composition on cognitive functions: Nutrition and Dementia Project (NutrDem Project).

    PubMed

    Magierski, R; Kłoszewska, I; Sobow, T

    2014-11-01

    The global increase in the prevalence of dementia and its associated comorbidities and consequences has stimulated intensive research focused on better understanding of the basic mechanisms and the possibilities to prevent and/or treat cognitive decline or dementia. The etiology of cognitive decline and dementia is very complex and is based upon the interplay of genetic and environmental factors. A growing body of epidemiological evidence has suggested that metabolic syndrome and its components may be important in the development of cognitive decline. Furthermore, an abnormal body mass index in middle age has been considered a predictor for the development of dementia. The Nutrition and Dementia Project (NutrDem Project) was started at the Department of Old Age Psychiatry and Psychotic Disorders in close cooperation with the Department of Medical Psychology. The aim of this study is to determine the effect of dietary patterns, nutritional status, body composition (with evaluation of visceral fat) and basic regulatory mechanisms of metabolism in elderly patients on cognitive functions and the risk of cognitive impairment (mild cognitive impairment and/or dementia).

  11. Using Distinct-Element Method (DEM) to Investigate Tsaoling Landslide Induced by Chi-Chi Earthquake, Taiwan.

    NASA Astrophysics Data System (ADS)

    Tang, C.; Hu, J.; Lin, M.

    2006-12-01

    Large landslides occurred in the mountainous area near the epicenter of the September 21, 1999, Chi-Chi earthquake in central Taiwan. These landslides were triggered by the Mw = 7.6 earthquake, which resulted in more than 2,400 casualties and widespread damage. The 1999 Chi-Chi earthquake triggered the catastrophic Tsaoling landslide, which mobilized about 0.125 km3 of rock and soil that slid across the Chingshui River and created a 5 km long natural dam. One fifth of the landslide mass dropped into the Chingshui River; the rest crossed over the river. At least five large landslides in the Tsaoling area were induced by large earthquakes and downpours between 1862 and 1999. Geological investigation shows that the prevailing attitude of the sedimentary formation is about N50W with a dip angle of 12S. We first used the Newmark method to calculate the slope stability, and then the distinct-element method to simulate the Tsaoling landslide (PFC3D and PFC2D discrete element codes). Because of the discrete, particle-based nature of the model, specification of material properties and boundary conditions is more difficult than in available continuum methods. The user may specify micro-properties that control particle-particle interaction, but has no way to directly prescribe the macro-properties of the model such as Young's modulus (E), unconfined compressive strength (UCS), cohesion (C0), Poisson's ratio (ν), coefficient of friction (μ), porosity, and the initial stress state. As a result, the process of generating an initial model with the appropriate material behavior and initial stress state is by trial and error, requiring the use of a numerical equivalent of a biaxial rock mechanics test rig to derive the rock mechanical macro-properties. We conclude that the characteristics of the Tsaoling landslide process are: (1) the rocks were bonded together during sliding, and (2) the frictional coefficient was very small.

  12. Using Economic Methods Evaluatively

    ERIC Educational Resources Information Center

    King, Julian

    2017-01-01

    As evaluators, we are often asked to determine whether policies and programs provide value for the resources invested. Addressing that question can be a quandary, and, in some cases, evaluators question whether cost-benefit analysis is fit for this purpose. With increased interest globally in social enterprise, impact investing, and social impact…

  13. Failure and frictional sliding envelopes in three-dimensional stress space: Insights from Distinct Element Method (DEM) models and implications for the brittle-ductile transition of rock

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom

    2013-04-01

    Rocks deformed at low confining pressure are brittle, meaning that after peak stress the strength decreases to a residual value determined by frictional sliding. The difference between the peak and residual value is the stress drop. At high confining pressure, however, no stress drop occurs. The transition pressure at which no loss in strength occurs is a possible definition of the brittle-ductile transition. The Distinct Element Method (DEM) is used to illustrate how this type of brittle-ductile transition emerges from a simple model in which rock is idealised as an assemblage of cemented spherical unbreakable grains. These bonded particle models are subjected to loading under constant mean stress and stress ratio conditions using distortional periodic space, which eliminates possible boundary effects arising from the usage of rigid loading platens. Systematic variation of both mean stress and stress ratio allowed determination of the complete three dimensional yield, peak stress and residual strength envelopes. The models suggest that the brittle-ductile transition is a mean stress and stress ratio dependent space curve, which cannot be adequately described by commonly used failure criteria (e.g., Mohr-Coulomb, Drucker-Prager). The model peak strength data exhibit an intermediate principal stress dependency which is, at least qualitatively, similar to that observed for natural rocks deformed under polyaxial laboratory conditions. Comparison of failure envelopes determined for bonded particle models with and without bond shear failure suggests that the non-linear pressure dependence of strength (concave failure envelopes) is, at high mean stress, the result of microscopic shear failure, a result consistent with earlier two-dimensional numerical multiple-crack simulations [D. A. Lockner & T. R. Madden, JGR, Vol. 96, No. B12, 1991]. Our results may have implications for a wide range of geophysical research areas, including the strength of the crust, the seismogenic

  14. Designing Tunnel Support in Jointed Rock Masses Via the DEM

    NASA Astrophysics Data System (ADS)

    Boon, C. W.; Houlsby, G. T.; Utili, S.

    2015-03-01

    A systematic approach of using the distinct element method (DEM) to provide useful insights for tunnel support in moderately jointed rock masses is illustrated. This is preceded by a systematic study of common failure patterns for unsupported openings in a rock mass intersected by three independent sets of joints. The results of our simulations show that a qualitative description of the failure patterns using specific descriptors is unattainable. Then, it is shown that DEM analyses can be employed in the preliminary design phase of tunnel supports to determine the main parameters of a support consisting of rock bolts or one lining or a combination of both. A comprehensive parametric analysis investigating the effect of bolt bonded length, bolt spacing, bolt length, bolt pretension, bolt stiffness and lining thickness on the tunnel convergence is illustrated. The highlight of the proposed approach of preliminary support design is the use of a rock bolt and lining interaction diagram to evaluate the relative effectiveness of rock bolts and lining thickness in the design of the tunnel support. The concept of interaction diagram can be used to assist the engineer in making preliminary design decisions given a target maximum allowable convergence. In addition, DEM simulations were validated against available elastic solutions. To the authors' knowledge, this is the first verification of DEM calculations for supported openings against elastic solutions. The methodologies presented in this article are illustrated through 2-D plane strain analyses for the preliminary design stage. More rigorous analyses incorporating 3-D effects have not been attempted in this article because the longitudinal displacement profile is highly sensitive to the joint orientations with respect to the tunnel axis, and cannot be established accurately in 2-D. The methodologies and concepts discussed in this article, however, have the potential to be extended to 3-D analyses.

  15. Voltammetry Method Evaluation

    SciTech Connect

    Hoyt, N.; Pereira, C.; Willit, J.; Williamson, M.

    2016-07-29

    The purpose of the ANL MPACT Voltammetry project is to evaluate the suitability of previously developed cyclic voltammetry techniques to provide electroanalytical measurements of actinide concentrations in realistic used fuel processing scenarios. The molten salts in these scenarios are very challenging as they include high concentrations of multiple electrochemically active species, thereby creating a variety of complications. Some of the problems that arise therein include issues related to uncompensated resistance, cylindrical diffusion, and alloying of the electrodeposited metals. Improvements to the existing voltammetry technique to account for these issues have been implemented, resulting in good measurements of actinide concentrations across a wide range of adverse conditions.

  16. Selection: Evaluation and methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Procedures to collect and to analyze data for genetic improvement of dairy cattle are described. Methods of identification and milk recording are presented. Selection traits include production (milk, fat, and protein yields and component percentages), conformation (final score and linear type traits...

  17. Hydrologic enforcement of lidar DEMs

    USGS Publications Warehouse

    Poppenga, Sandra K.; Worstell, Bruce B.; Danielson, Jeffrey J.; Brock, John C.; Evans, Gayla A.; Heidemann, H. Karl

    2014-01-01

    Hydrologic-enforcement (hydro-enforcement) of light detection and ranging (lidar)-derived digital elevation models (DEMs) modifies the elevations of artificial impediments (such as road fills or railroad grades) to simulate how man-made drainage structures such as culverts or bridges allow continuous downslope flow. Lidar-derived DEMs contain an extremely high level of topographic detail; thus, hydro-enforced lidar-derived DEMs are essential to the U.S. Geological Survey (USGS) for complex modeling of riverine flow. The USGS Coastal and Marine Geology Program (CMGP) is integrating hydro-enforced lidar-derived DEMs (land elevation) and lidar-derived bathymetry (water depth) to enhance storm surge modeling in vulnerable coastal zones.

  18. Satellite-derived Digital Elevation Model (DEM) selection, preparation and correction for hydrodynamic modelling in large, low-gradient and data-sparse catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Abdollah A.; Callow, John N.; McVicar, Tim R.; Van Niel, Thomas G.; Larsen, Joshua R.

    2015-05-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Topographic accuracy, methods of preparation and grid size are all important for hydrodynamic models to efficiently replicate flow processes. In remote and data-scarce regions, high resolution DEMs are often not available and therefore it is necessary to evaluate lower resolution data such as the Shuttle Radar Topography Mission (SRTM) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) for use within hydrodynamic models. This paper does this in three ways: (i) assessing point accuracy and geometric co-registration error of the original DEMs; (ii) quantifying the effects of DEM preparation methods (vegetation smoothed and hydrologically-corrected) on hydrodynamic modelling relative accuracy; and (iii) quantifying the effect of the hydrodynamic model grid size (30-2000 m) and the associated relative computational costs (run time) on relative accuracy in model outputs. We initially evaluated the accuracy of the original SRTM (∼30 m) seamless C-band DEM (SRTM DEM) and second generation products from the ASTER (ASTER GDEM) against registered survey marks and altimetry data points from the Ice, Cloud, and land Elevation Satellite (ICESat). SRTM DEM (RMSE = 3.25 m) had higher accuracy than ASTER GDEM (RMSE = 7.43 m). Based on these results, the original version of SRTM DEM, the ASTER GDEM along with vegetation smoothed and hydrologically corrected versions were prepared and used to simulate three flood events along a 200 km stretch of the low-gradient Thompson River, in arid Australia (using five metrics: peak discharge, peak height, travel time, terminal water storage and flood extent). The hydrologically corrected DEMs performed best across these metrics in simulating floods compared with vegetation smoothed DEMs and original DEMs. The response of model performance to grid size was non

  19. Evaluation methods for hospital projects.

    PubMed

    Buelow, Janet R; Zuckweiler, Kathryn M; Rosacker, Kirsten M

    2010-01-01

    The authors report the findings of a survey of hospital managers on the utilization of various project selection and evaluation methodologies. The focus of the analysis was the empirical relationship between a portfolio of project evaluation methods actually utilized for a given project and several measures of perceived project success. The analysis revealed that cost-benefit analysis and top management support were the two project evaluation methods used most often by the hospital managers. The authors' empirical assessment provides evidence that top management support is associated with overall project success.

  20. The Double Hierarchy Method. A parallel 3D contact method for the interaction of spherical particles with rigid FE boundaries using the DEM

    NASA Astrophysics Data System (ADS)

    Santasusana, Miquel; Irazábal, Joaquín; Oñate, Eugenio; Carbonell, Josep Maria

    2016-07-01

    In this work, we present a new methodology for the treatment of the contact interaction between rigid boundaries and spherical discrete elements (DE). Rigid body parts are present in most large-scale simulations. The surfaces of the rigid parts are commonly meshed with a finite element-like (FE) discretization. The contact detection and calculation between those DEs and the discretized boundaries is not straightforward and has been addressed by different approaches. The algorithm presented in this paper considers the contact of the DEs with the geometric primitives of a FE mesh, i.e. facet, edge or vertex. To do so, the original hierarchical method presented by Horner et al. (J Eng Mech 127(10):1027-1032, 2001) is extended with a new insight, leading to a robust, fast and accurate 3D contact algorithm which is fully parallelizable. The implementation of the method has been developed to deal ideally with triangles and quadrilaterals. If the boundaries are discretized with other types of geometries, the method can be easily extended to higher-order planar convex polyhedra. A detailed description of the procedure followed to treat a wide range of cases is presented. The developed algorithm is described in detail and validated with several practical examples. The parallelization capabilities and the obtained performance are presented with the study of an industrial application example.

  1. Incorporating DEM uncertainty in coastal inundation mapping.

    PubMed

    Leon, Javier X; Heuvelink, Gerard B M; Phinn, Stuart R

    2014-01-01

    Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial resolution DEMs for mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation to a scenario combining a 1 in 100 year storm event over a 1 m SLR was calculated by counting the proportion of times from the 1,000 simulations that a location was inundated. This probabilistic approach can be used in a risk-aversive decision making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% probability exceedance, the inundated area was approximately 11% larger than mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey
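
    A conceptual sketch (not the authors' code) of the probabilistic step described above: each simulated error field is added to the DEM, the threshold is applied, and the per-cell exceedance frequency gives the inundation probability. The sketch omits the hydrological connectivity check of the hydrologically correct bathtub method and assumes the error fields have already been simulated.

    # Monte Carlo inundation probability from simulated elevation-error fields.
    import numpy as np

    def inundation_probability(dem: np.ndarray, error_fields: np.ndarray,
                               flood_level: float) -> np.ndarray:
        # error_fields: stack of shape (n_simulations, rows, cols),
        # e.g. from sequential Gaussian simulation of the elevation error
        counts = np.zeros(dem.shape, dtype=float)
        for err in error_fields:
            counts += ((dem + err) <= flood_level)   # cells below the flood level
        return counts / len(error_fields)            # per-cell probability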

  2. Incorporating DEM Uncertainty in Coastal Inundation Mapping

    PubMed Central

    Leon, Javier X.; Heuvelink, Gerard B. M.; Phinn, Stuart R.

    2014-01-01

    Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial resolution DEMs for mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation to a scenario combining a 1 in 100 year storm event over a 1 m SLR was calculated by counting the proportion of times from the 1,000 simulations that a location was inundated. This probabilistic approach can be used in a risk-aversive decision making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% probability exceedance, the inundated area was approximately 11% larger than mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey

  3. Methodologies for watershed modeling with GIS and DEMs for the parameterization of the WEPP model

    NASA Astrophysics Data System (ADS)

    Cochrane, Thomas Arey

    Two methods called the Hillslope and Flowpath methods were developed that use geographical information systems (GIS) and digital elevation models (DEMs) to assess water erosion in small watersheds with the Water Erosion Prediction Project (WEPP) model. The Hillslope method is an automated method for the application of WEPP through the extraction of hillslopes and channels from DEMs. Each hillslope is represented as a rectangular area with a representative slope profile that drains to the top or sides of a single channel. The Hillslope method was further divided into the Calcleng and Chanleng methods, which are similar in every way except in how the hillslope lengths are calculated. The Calcleng method calculates a representative length of hillslope based on the weighted lengths of all flowpaths in a hillslope as identified through a DEM. The Chanleng method calculates the length of hillslopes adjacent to channels by matching the width of the hillslope to the length of the adjacent channel. The Flowpath method works by applying the WEPP model to all possible flowpaths within a watershed as identified from a DEM. However, this method does not currently have a channel routing component, which limits its use to predicting spatially variable erosion on hillslopes within the watershed or from watersheds whose channels are not in a depositional or erodible mode. These methods were evaluated with six research watersheds from across the U.S., one from Treynor, Iowa, two from Watkinsville, Georgia, and three from Holly Springs, Mississippi. The effects of using different DEM resolutions on simulations, and the ability to accurately predict sediment yield and runoff for different event sizes, were studied. Statistical analyses for all methods, resolutions, and event sizes were performed by comparing predicted vs. measured runoff and sediment yield from the watershed outlets on an event-by-event basis. Comparisons to manual applications by expert users and comparisons of

  4. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration’s use in evaluating safety plans submitted by the railroad industry.

  5. Evaluation Using Sequential Trials Methods.

    ERIC Educational Resources Information Center

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  6. Radar and Lidar Radar DEM

    NASA Technical Reports Server (NTRS)

    Liskovich, Diana; Simard, Marc

    2011-01-01

    The aim is to use radar and lidar data to improve the 3D rendering of terrain, including digital elevation models (DEMs) and estimates of vegetation height and biomass, in a variety of forest types and terrains. The 3D mapping of vegetation structure and its analysis are useful for determining the role of forests in climate change (the carbon cycle), in providing habitat, and as providers of socio-economic services. This in turn can lead to the development of more effective land-use management. The first part of the project was to characterize the Shuttle Radar Topography Mission DEM error with respect to ICESat/GLAS point estimates of elevation. We investigated potential trends with latitude, canopy height, signal to noise ratio (SNR), number of LiDAR waveform peaks, and maximum peak width. Scatter plots were produced for each variable and were fitted with 1st and 2nd degree polynomials. Higher order trends were visually inspected through filtering with a mean and median filter. We also assessed trends in the DEM error variance. Finally, a map showing how the DEM error is distributed geographically across the globe was created.
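
    As a small illustration of the trend check described above (synthetic data, not the study's values), 1st- and 2nd-degree polynomials can be fitted to the DEM error against a candidate explanatory variable such as canopy height:

    # Fit linear and quadratic trends of DEM error vs. canopy height (synthetic).
    import numpy as np

    canopy_height = np.linspace(0, 40, 200)
    dem_error = 0.3 * canopy_height + np.random.default_rng(1).normal(0, 2, 200)

    p1 = np.polyfit(canopy_height, dem_error, deg=1)   # 1st-degree coefficients
    p2 = np.polyfit(canopy_height, dem_error, deg=2)   # 2nd-degree coefficients
    print("linear fit:", p1, "quadratic fit:", p2)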

  7. High-Precision DEM Generation Using Satellite-Borne InSAR Technology

    NASA Astrophysics Data System (ADS)

    Li, Tao; Tang, Xinming; Gao, Xiaoming; Chen, Weinan; Chen, Qianfu; Wu, Danqin

    2016-08-01

    Satellite-borne InSAR is useful for generating DEMs globally, especially since the TanDEM-X interferometer started its mission in 2010. In this paper, we analyze the interferometric geometry in surveying and mapping applications, and we locate the main error sources, i.e., phase error and baseline error, using parameters extracted from the TanDEM-X interferometer. The phase error is suppressed using multi-look iteration; both the rich textures and the high phase accuracy are maintained through this method. The baseline error is reduced by using the long-and-short baseline combination method. Finally, we propose to mosaic the ascending and descending DEMs according to coherence values to reduce the low-coherence areas. Experiments over flat ground, hills and mountainous land are conducted to test the feasibility of the proposed methods. Results demonstrate that TanDEM-X may be used for high-precision DEM generation.
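
    A minimal sketch of the proposed ascending/descending mosaicking rule, assuming the two DEMs and their coherence maps are co-registered numpy arrays (the function and variable names are illustrative, not the authors' code):

    # Keep, at each cell, the DEM value with the higher interferometric coherence.
    import numpy as np

    def mosaic_by_coherence(dem_asc, coh_asc, dem_desc, coh_desc):
        return np.where(coh_asc >= coh_desc, dem_asc, dem_desc)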

  8. Method for evaluating material viscoelasticity

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Yamaguchi, Takao

    2004-01-01

    A method for evaluating the viscoelasticity of materials under an oscillating load is proposed. In the method, the material under test is connected to a mass, which generates an oscillating inertial force after the mass is manually struck with a hammer. A pneumatic linear bearing is used to realize linear motion with sufficiently small friction acting on the mass, which is the moving part of the bearing. The inertial force acting on the mass is determined with high accuracy by measuring the velocity of the mass using an optical interferometer.

  9. DEM Particle Fracture Model

    SciTech Connect

    Zhang, Boning; Herbold, Eric B.; Homel, Michael A.; Regueiro, Richard A.

    2015-12-01

    An adaptive particle fracture model in a poly-ellipsoidal Discrete Element Method is developed. The poly-ellipsoidal particle breaks into several sub-poly-ellipsoids according to the Hoek-Brown fracture criterion, based on the continuum stress and the maximum tensile stress in contacts. Weibull theory is also introduced to consider statistics and size effects on particle strength. Finally, a high strain-rate split Hopkinson pressure bar experiment on silica sand is simulated using this newly developed model. Comparisons with experiments show that our particle fracture model can capture the mechanical behavior of this experiment very well, both in stress-strain response and in particle size redistribution. The effects of density and packing of the samples are also studied in numerical examples.

  10. Impacts of DEM uncertainties on critical source areas identification for non-point source pollution control based on SWAT model

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Dong, Guangxia; Wang, Qingrui; Liu, Lumeng; Yu, Wenwen; Men, Cong; Liu, Ruimin

    2016-09-01

    The impacts of different digital elevation model (DEM) resolutions, sources and resampling techniques on nutrient simulations using the Soil and Water Assessment Tool (SWAT) model have not been well studied. The objective of this study was to evaluate the sensitivity of DEM resolutions (from 30 m to 1000 m), sources (ASTER GDEM2, SRTM and Topo-DEM) and resampling techniques (nearest neighbor, bilinear interpolation, cubic convolution and majority) for the identification of non-point source (NPS) critical source areas (CSAs) based on nutrient loads using the SWAT model. The Xiangxi River, one of the main tributaries of the Three Gorges Reservoir in China, was selected as the study area. The following findings were obtained: (1) Elevation and slope extracted from the DEMs were more sensitive to DEM resolution changes. Compared with the results of the 30 m DEM, the 1000 m DEM underestimated the elevation and slope by 104 m and 41.57°, respectively; (2) The numbers of subwatersheds and hydrologic response units (HRUs) were considerably influenced by DEM resolutions, but the total nitrogen (TN) and total phosphorus (TP) loads of each subwatershed showed higher correlations with different DEM sources; (3) DEM resolutions and sources had larger effects on CSA identification, while TN and TP CSAs showed different responses to DEM uncertainties. TN CSAs were more sensitive to resolution changes, exhibiting six distribution patterns across the DEM resolutions. TP CSAs were sensitive to source and resampling technique changes, exhibiting three distribution patterns for DEM sources and two distribution patterns for DEM resampling techniques. DEM resolutions and sources are the two most sensitive SWAT model DEM parameters that must be considered when nutrient CSAs are identified.

  11. Processing, validating, and comparing DEMs for geomorphic application on the Puna de Atacama Plateau, northwest Argentina

    NASA Astrophysics Data System (ADS)

    Purinton, Benjamin; Bookhagen, Bodo

    2016-04-01

    This study analyzes multiple topographic datasets derived from various remote-sensing methods for the Pocitos Basin of the central Puna Plateau in northwest Argentina, on the border with Chile. Here, the arid climate, clear atmospheric conditions and lack of vegetation provide ideal conditions for remote sensing and Digital Elevation Model (DEM) comparison. We compare the following freely available DEMs: SRTM-X (spatial resolution of ~30 m), SRTM-C v4.1 (90 m), and ASTER GDEM2 (30 m). Additional DEMs for comparison are generated from optical and radar datasets acquired freely (ASTER Level 1B stereo pairs and Sentinel-1A radar), through research agreements (RapidEye Level 1B scenes, ALOS radar, and ENVISAT radar), and through commercial sources (TerraSAR-X / TanDEM-X radar). DEMs from ASTER (spatial resolution of 15 m) and RapidEye (~5-10 m) optical datasets are produced by standard photogrammetric techniques and have been post-processed for validation and alignment purposes. Because RapidEye scenes are captured at a low incidence angle (<20°) and stereo pairs are unavailable, merging and averaging methods of two to four overlapping scenes are explored for effective DEM generation. Sentinel-1A, TerraSAR-X / TanDEM-X, ALOS, and ENVISAT radar data are processed through interferometry, resulting in DEMs with spatial resolutions ranging from 5 to 30 meters. The SRTM-X dataset serves as a control in the creation of further DEMs, as it is widely used in the geosciences and represents the highest-quality DEM currently available. All DEMs are validated against over 400,000 differential GPS (dGPS) measurements gathered during four field campaigns in 2012 and 2014 to 2016. Of these points, more than 250,000 lie within the Pocitos Basin, with average vertical and horizontal accuracies of 0.95 m and 0.69 m, respectively. Dataset accuracy is judged by the lowest standard deviations of elevation compared with the dGPS data and with the SRTM-X control DEM. Of particular interest in

  12. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we are dealing with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prediction, using Least Squares Collocation on the residuals remaining from the first step (after applying the fitted surface). Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.
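
    A minimal sketch of the collocation step, assuming a Gaussian covariance model and illustrative parameter values (the actual covariance function and noise model of the study are not specified here): residual heights at new points are predicted as s_hat = C_sp (C_pp + D)^-1 l.

      import numpy as np

      def gauss_cov(d, c0=1.0, corr_len=500.0):
          """Isotropic Gaussian covariance model C(d) = c0 * exp(-(d / L)**2) (assumed form)."""
          return c0 * np.exp(-(d / corr_len) ** 2)

      def lsc_predict(xy_obs, residuals, xy_new, noise_var=0.05 ** 2):
          """Least Squares Collocation prediction of residual heights at new points."""
          d_pp = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
          d_sp = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
          C_pp = gauss_cov(d_pp) + noise_var * np.eye(len(xy_obs))   # signal + noise covariance
          C_sp = gauss_cov(d_sp)                                     # cross-covariance
          return C_sp @ np.linalg.solve(C_pp, residuals)

      # Toy example: residuals (EU-DEM minus levelled heights, after trend removal) at four points [m].
      xy_obs = np.array([[0.0, 0.0], [400.0, 100.0], [800.0, 300.0], [200.0, 700.0]])
      res = np.array([0.6, -0.2, 0.4, -0.5])
      xy_new = np.array([[300.0, 300.0]])
      print(lsc_predict(xy_obs, res, xy_new))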

  13. Visualising DEM-related flood-map uncertainties using a disparity-distance equation algorithm

    NASA Astrophysics Data System (ADS)

    Brandt, S. Anders; Lim, Nancy J.

    2016-05-01

    The apparent absoluteness of information presented by crisply delineated flood boundaries can lead to misconceptions among planners about the inherent uncertainties associated with generated flood maps. Even maps based on hydraulic modelling using the highest-resolution digital elevation models (DEMs), and calibrated with optimal Manning's roughness (n) coefficients, are susceptible to errors when compared to actual flood boundaries, particularly in flat areas. Therefore, the inaccuracies in inundation extents, brought about by the characteristics of the slope perpendicular to the flow direction of the river, have to be accounted for. Instead of using the typical Monte Carlo simulation and probabilistic methods for uncertainty quantification, an empirically based disparity-distance equation that considers the effects of both the DEM resolution and slope was used to create prediction-uncertainty zones around the resulting inundation extents of a one-dimensional (1-D) hydraulic model. The equation was originally derived for the Eskilstuna River, where flood maps based on DEM data of different resolutions were evaluated for the slope-disparity relationship. To assess whether the equation is applicable to another river with different characteristics, modelled inundation extents from the Testebo River were utilised and tested with the equation. By using the cross-sectional locations, water surface elevations, and DEM, uncertainty zones around the original inundation boundary line can be produced for different confidence levels. The results show that (1) the proposed method is useful both for estimating and directly visualising model inaccuracies caused by the combined effects of slope and DEM resolution, and (2) the DEM-related uncertainties alone do not account for the total inaccuracy of the derived flood map. Decision-makers can apply it to already existing flood maps, thereby recapitulating and re-analysing the inundation boundaries and the areas that are uncertain

  14. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    NASA Astrophysics Data System (ADS)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

    Digital Elevation Models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known surveying methods for acquiring topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but also sometimes impossible for remote areas. This study aims to demonstrate the importance of accurate representation of topography for flood modelling. Flood modelling is carried out for Samsun-Terme in the Black Sea region of Turkey. One of the DEMs is obtained from point observations retrieved from 1/5000 scaled orthophotos and 1/1000 scaled point elevation data from field surveys at cross-sections; the river banks are corrected using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM is obtained from bathymetric surveys: 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared using 27 cross-sections. The maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm between the two DEMs. The channel conveyance capacity in the base DEM is larger than that in the scaled DEM, and the floodplain is modelled in more detail in the base DEM. MIKE21 with a flexible grid is used for two-dimensional shallow water flow modelling. The models based on the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. From the comparison of the input hydrograph at the upstream end and the output hydrograph at the downstream end of the river, the attenuation is obtained as 91% and 84% for the base DEM and scaled DEM, respectively. The time lag in the hydrographs does not show any difference between the two DEMs and is obtained as 3 hours. Maximum flood extents differ for the two DEMs
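
    The attenuation and time-lag figures quoted above can in principle be reproduced from paired hydrographs; the sketch below uses synthetic hydrographs (not the July 2012 Terme data) to show the calculation.

      import numpy as np

      # Synthetic hourly hydrographs (illustrative only, not the observed flood event).
      t = np.arange(0.0, 48.0)                                    # time [h]
      q_in = 50 + 450 * np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)    # upstream inflow [m^3/s]
      q_out = 50 + 180 * np.exp(-0.5 * ((t - 15.0) / 5.0) ** 2)   # downstream outflow [m^3/s]

      # Peak attenuation: relative reduction of the flood peak above baseflow.
      baseflow = 50.0
      attenuation = 1.0 - (q_out.max() - baseflow) / (q_in.max() - baseflow)

      # Time lag: difference between the times of the inflow and outflow peaks.
      lag = t[np.argmax(q_out)] - t[np.argmax(q_in)]

      print(f"attenuation: {attenuation:.0%}, time lag: {lag:.0f} h")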

  15. Wiederbeginn nach dem Zweiten Weltkrieg

    NASA Astrophysics Data System (ADS)

    Strecker, Heinrich; Bassenge-Strecker, Rosemarie

    This chapter first describes the situation of statistics in Germany after the Second World War: the statistical services in the occupation zones partly had to be rebuilt, and the teaching of statistics at universities had to be restarted. In this situation, the President of the Bavarian Statistical Office, Karl Wagner, energetically supported by Gerhard Fürst, the later President of the Federal Statistical Office (Statistisches Bundesamt), took the initiative to re-establish the German Statistical Society (Deutsche Statistische Gesellschaft, DStatG). The founding meeting in Munich in 1948 became a milestone in the history of the DStatG. The aim was to encourage all statisticians to work together, to raise their qualifications to the international level, and to promote the application of newer statistical methods in practice. There followed 24 years of fruitful work under Karl Wagner (1948-1960) and Gerhard Fürst (1960-1972). The contribution outlines the Statistical Weeks, the activities of the committees, and the publications of this period.

  16. A photogrammetric DEM of Greenland based on 1978-1987 aerial photos: validation and integration with laser altimetry and satellite-derived DEMs

    NASA Astrophysics Data System (ADS)

    Korsgaard, N. J.; Kjaer, K. H.; Nuth, C.; Khan, S. A.

    2014-12-01

    Here we present a DEM of Greenland covering all ice-free terrain and the margins of the GrIS and of local glaciers and ice caps. The DEM is based on the 3534 photos used in the aero-triangulation, which were recorded by the Danish Geodata Agency (then the Geodetic Institute) in survey campaigns spanning the period 1978-1987. The GrIS is covered tens of kilometers into the interior due to the large footprints of the photos (30 x 30 km) and the control provided by the aero-triangulation. Thus, the data are ideal for analysis of ice-marginal elevation change and also provide control for satellite-derived DEMs. The results of the validation, error assessments and predicted uncertainties are presented. We test the DEM using Airborne Topographic Mapper (IceBridge ATM) data as reference; evaluate the a posteriori covariance matrix from the aero-triangulation; and co-register DEM blocks of 50 x 50 km to ICESat laser altimetry in order to evaluate their coherency. We complement the aero-photogrammetric DEM with modern laser altimetry and DEMs derived from stereoscopic satellite imagery (AST14DMO) to examine the mass variability of the Northeast Greenland Ice Stream (NEGIS). Our analysis suggests that dynamically induced mass loss started around 2003 and continued throughout 2014.

  17. Tracking the Effectiveness of Usability Evaluation Methods.

    DTIC Science & Technology

    2007-11-02

    We present a case study that tracks usability problems predicted with six usability evaluation methods (Claims Analysis, Cognitive Walkthrough, GOMS, Heuristic Evaluation, User Action Notation, and simply reading the specification) through a development process. We assess the methods' predictive

  18. TecDEM: A MATLAB based toolbox for tectonic geomorphology, Part 1: Drainage network preprocessing and stream profile analysis

    NASA Astrophysics Data System (ADS)

    Shahzad, Faisal; Gloaguen, Richard

    2011-02-01

    We present TecDEM, a software shell implemented in MATLAB that applies tectonic geomorphology tasks to digital elevation models (DEMs). The first part of this paper series describes drainage partitioning schemes and stream profile analysis. The graphical user interface of TecDEM provides several options: determining flow directions, stream vectorization, watershed delineation, Strahler order labeling, stream profile generation, knickpoint selection, and calculation of concavity, steepness and Hack indices. Knickpoints along selected streams, the stream profile analysis, and the Hack index per stream profile are computed using a semi-automatic method. TecDEM was used to extract and investigate stream profiles in the Kaghan Valley (Northern Pakistan). Our interpretations of the TecDEM results correlate well with previous tectonic evolution models for this region. TecDEM is designed to assist geoscientists in applying complex tectonic geomorphology tasks to global DEM data.
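
    The steepness and concavity indices mentioned above are conventionally obtained from a slope-area regression, S = ks * A**(-theta). The sketch below illustrates that regression with synthetic profile data; it is a generic illustration of the technique in Python, not TecDEM's MATLAB code.

      import numpy as np

      def slope_area_regression(drainage_area, slope):
          """Fit S = ks * A**(-theta) by linear regression in log-log space.

          Returns (ks, theta): channel steepness and concavity indices.
          """
          mask = (drainage_area > 0) & (slope > 0)
          log_a = np.log10(drainage_area[mask])
          log_s = np.log10(slope[mask])
          slope_fit, intercept = np.polyfit(log_a, log_s, 1)   # log S = slope_fit * log A + intercept
          return 10 ** intercept, -slope_fit

      # Toy profile data (illustrative): drainage area [m^2] and local channel slope [m/m].
      A = np.array([1e5, 5e5, 1e6, 5e6, 1e7, 5e7])
      S = 0.2 * A ** -0.45 * np.exp(np.random.default_rng(1).normal(0.0, 0.05, A.size))
      ks, theta = slope_area_regression(A, S)
      print(f"ks = {ks:.3f}, theta = {theta:.2f}")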

  19. Shuttle radar DEM hydrological correction for erosion modelling in small catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Ben; Sidle, Roy; Bartley, Rebecca

    2016-04-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Catchment and hillslope scale runoff and sediment processes (i.e., patterns of overland flow, infiltration, subsurface stormflow and erosion) are all topographically mediated. In remote and data-scarce regions, high-resolution DEMs (LiDAR) are often not available, and moderate to coarse resolution digital elevation models (e.g., SRTM) have difficulty replicating detailed hydrological patterns, especially in relatively flat landscapes. Several surface reconditioning algorithms (e.g., smoothing) and "stream burning" techniques (e.g., Agree or ANUDEM), in conjunction with representation of the known stream networks, have been used to improve DEM performance in replicating known hydrology. Detailed stream network data are not available at regional and national scales, but can be derived at local scales from remotely sensed data. This research explores the implications of using high-resolution stream network data derived from Google Earth images for DEM hydrological correction, instead of coarse-resolution stream networks derived from topographic maps. The accuracy of the implemented method in producing hydrologically efficient DEMs was assessed by comparing the hydrological parameters derived from the modified DEMs and from limited high-resolution airborne LiDAR DEMs. The degree of modification is dominated by the method used and the availability of stream network data. Although stream burning techniques improve DEMs hydrologically, they alter DEM characteristics in ways that may affect catchment boundaries, stream position and length, as well as secondary terrain derivatives (e.g., slope, aspect). Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.
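
    The basic idea of stream burning is simply to lower the DEM along the mapped stream network before computing flow directions. A minimal sketch, assuming a uniform burn depth (the AGREE algorithm additionally smooths a buffer around the streams), could look like this:

      import numpy as np

      def burn_streams(dem, stream_mask, burn_depth=5.0):
          """Lower DEM cells located on a known stream network by a fixed depth.

          dem         : 2-D array of elevations [m]
          stream_mask : boolean 2-D array, True where the mapped stream network lies
          burn_depth  : how far [m] to drop the stream cells (assumed value)
          """
          burned = dem.copy()
          burned[stream_mask] -= burn_depth
          return burned

      # Toy example: a 5x5 DEM with a stream running down the middle column.
      dem = np.arange(25, dtype=float).reshape(5, 5)
      streams = np.zeros((5, 5), dtype=bool)
      streams[:, 2] = True
      print(burn_streams(dem, streams))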

  20. Pragmatism, Evidence, and Mixed Methods Evaluation

    ERIC Educational Resources Information Center

    Hall, Jori N.

    2013-01-01

    Mixed methods evaluation has a long-standing history of enhancing the credibility of evaluation findings. However, using mixed methods in a utilitarian way implicitly emphasizes convenience over engaging with its philosophical underpinnings (Denscombe, 2008). Because of this, some mixed methods evaluators and social science researchers have been…

  1. TanDEM-X high resolution DEMs and their applications to flow modeling

    NASA Astrophysics Data System (ADS)

    Wooten, Kelly M.

    Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high resolution DEMs on Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory to that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines in areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
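
    The core of such flow-path models is routing down the steepest descent of the DEM. The sketch below traces a single D8 steepest-descent line on a toy grid; it is a generic illustration of the idea, not the FLOWGO or DOWNFLOW code (DOWNFLOW additionally perturbs the DEM over many stochastic runs).

      import numpy as np

      def steepest_descent_path(dem, start, max_steps=1000):
          """Trace a single D8 steepest-descent flow line from a starting (vent) cell."""
          path = [start]
          r, c = start
          nrows, ncols = dem.shape
          for _ in range(max_steps):
              best = None
              # Examine the 8 neighbours and remember the lowest one.
              for dr in (-1, 0, 1):
                  for dc in (-1, 0, 1):
                      rr, cc = r + dr, c + dc
                      if (dr, dc) != (0, 0) and 0 <= rr < nrows and 0 <= cc < ncols:
                          if best is None or dem[rr, cc] < dem[best]:
                              best = (rr, cc)
              if best is None or dem[best] >= dem[r, c]:
                  break                      # reached a pit or the grid edge
              r, c = best
              path.append(best)
          return path

      # Toy DEM: a tilted plane with some noise (illustrative only).
      rng = np.random.default_rng(2)
      y, x = np.mgrid[0:50, 0:50]
      dem = 1000.0 - 2.0 * y - 0.5 * x + rng.normal(0.0, 0.1, (50, 50))
      print(steepest_descent_path(dem, (0, 25))[:5])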

  2. Methods of Writing Instruction Evaluation.

    ERIC Educational Resources Information Center

    Lamb, Bill H.

    The Writing Program Director at Johnson County Community College (Kansas) developed quantitative measures for writing instruction evaluation which can support that institution's growing interest in and support for peer collaboration as a means to improving instructional quality. The first process (Interaction Analysis) has an observer measure…

  3. The Safeguards Evaluation Method for evaluating vulnerability to insider threats

    SciTech Connect

    Al-Ayat, R.A.; Judd, B.R.; Renis, T.A.

    1986-01-01

    As protection of DOE facilities against outsiders increases to acceptable levels, attention is shifting toward achieving comparable protection against insiders. Since threats and protection measures for insiders are substantially different from those for outsiders, new perspectives and approaches are needed. One such approach is the Safeguards Evaluation Method. This method helps in assessing safeguards vulnerabilities to theft or diversion of special nuclear material (SNM) by insiders. The Safeguards Evaluation Method-Insider Threat is a simple model that can be used by safeguards and security planners to evaluate safeguards and proposed upgrades at their own facilities. A discussion of the Safeguards Evaluation Method is presented in this paper.

  4. Urban DEM generation, analysis and enhancements using TanDEM-X

    NASA Astrophysics Data System (ADS)

    Rossi, Cristian; Gernhardt, Stefan

    2013-11-01

    This paper analyzes the potential of the TanDEM-X mission for the generation of urban Digital Elevation Models (DEMs). The high resolution of the sensors and the absence of temporal decorrelation are exploited. The interferometric chain and the problems encountered in correctly mapping urban areas are analyzed first. The operational Integrated TanDEM-X Processor (ITP) algorithms are taken as reference. The main ITP product is called the raw DEM. Whereas the ITP coregistration stage is demonstrated to be robust enough, large improvements in the raw DEM, such as a lower percentage of phase unwrapping errors, can be obtained by using adaptive fringe filters instead of the conventional ones in the interferogram generation stage. The shape of the raw DEM in the layover area is also shown and determined to be regular for buildings with vertical walls. Generally, in the presence of layover, the raw DEM exhibits a height ramp, resulting in a height underestimation for the affected structure. Examples provided confirm the theoretical background. The focus is on high-resolution DEMs produced using spotlight acquisitions. In particular, a raw DEM over Berlin (Germany) with a 2.5 m raster is generated and validated. For this purpose, ITP is modified in its interferogram generation stage by adopting the Intensity Driven Adaptive Neighbourhood (IDAN) algorithm. The height Root Mean Square Error (RMSE) between the raw DEM and a reference is about 8 m for the two classes defining the urban DEM: structures and non-structures. The result can be further improved for the structure class using a DEM generated with Persistent Scatterer Interferometry. A DEM fusion is thus proposed, and a drop of about 20% in the RMSE is reported.

  5. Genetics | Selection: Evaluation and Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The procedures used for collecting and analyzing data for genetic improvement of dairy cattle are described. Methods of identification and milk recording are presented. Selection traits include production (milk, fat, and protein yields and component percentages), conformation (final score and linear...

  6. Icesat Validation of Tandem-X I-Dems Over the UK

    NASA Astrophysics Data System (ADS)

    Feng, L.; Muller, J.-P.

    2016-06-01

    From the latest TanDEM-X mission (bistatic X-band interferometric SAR), a globally consistent Digital Elevation Model (DEM) will be available from 2017, but its accuracy has not yet been fully characterised. This paper presents the methods and implementation of statistical procedures for validating the vertical accuracy of TanDEM-X iDEMs at grid spacings of approximately 12.5 m, 30 m and 90 m, based on processed ICESat data over the UK, in order to assess their potential extrapolation across the globe. The accuracy of the TanDEM-X iDEM in the UK was obtained as follows: against ICESat GLA14 elevation data, the TanDEM-X iDEM has errors of -0.028 ± 3.654 m over England and Wales and 0.316 ± 5.286 m over Scotland for the 12.5 m product, -0.073 ± 6.575 m for 30 m, and 0.0225 ± 9.251 m for 90 m. Moreover, 90% of all results at the three resolutions of TanDEM-X iDEM data (with a linear error at the 90% confidence level) are below 16.2 m. These validation results also indicate that derivative topographic parameters (slope, aspect and relief) have a strong effect on the vertical accuracy of the TanDEM-X iDEMs. In high-relief and steep-slope terrain, large errors and data voids are frequent, and their location is strongly influenced by topography, whilst in low- to medium-relief and low-slope sites errors are smaller. ICESat-derived elevations are heavily influenced by surface slope within the 70 m footprint, and there are also slope-dependent errors in the TanDEM-X iDEMs.
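
    For reference, the kind of statistics reported above (mean ± standard deviation and a 90% linear error) can be computed from DEM-minus-ICESat height differences as sketched below, here with synthetic differences rather than the GLA14 data.

      import numpy as np

      def vertical_accuracy(dh):
          """Summary statistics for DEM-minus-reference height differences [m]."""
          mean = dh.mean()
          std = dh.std(ddof=1)
          le90 = np.percentile(np.abs(dh), 90)   # LE90: 90% of absolute errors fall below this value
          return mean, std, le90

      # Synthetic height differences (illustrative only, not the UK validation data).
      rng = np.random.default_rng(3)
      dh = rng.normal(-0.03, 3.7, size=10_000)
      mean, std, le90 = vertical_accuracy(dh)
      print(f"{mean:+.3f} ± {std:.3f} m, LE90 = {le90:.2f} m")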

  7. Mapping debris-flow hazard in Honolulu using a DEM

    USGS Publications Warehouse

    Ellen, Stephen D.; Mark, Robert K.; ,

    1993-01-01

    A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.

  8. Improved Fluvial Geomorphic Interpretation Derived From DEM Differencing

    NASA Astrophysics Data System (ADS)

    Wheaton, J. M.; Brasington, J.; Brewer, P. A.; Darby, S.; Pasternack, G. B.; Sear, D.; Vericat, D.; Williams, R.

    2007-12-01

    Technological advances over the past two decades in remotely-sensed and ground-based topographic surveying technologies have made the rapid acquisition of topographic data in the fluvial environment possible at spatial resolutions and extents previously unimaginable. Consequently, monitoring geomorphic changes and estimating fluvial sediment budgets through comparing repeat topographic surveys (DEM differencing) has now become a tractable, affordable approach for both research purposes and long-term monitoring associated with river restoration. However, meaningful quantitative geomorphic interpretation of repeat topographic surveys has received little attention from either researchers or practitioners. Previous research has shown that quantitative estimates of erosion and deposition from DEM differencing are highly sensitive to DEM uncertainty, with minimum level of detection techniques typically discarding between 40% and 90% of the predicted changes. A series of new methods for segregating reach-scale sediment budgets into their specific process components, while accounting for the influence of DEM uncertainty, were developed and explored to highlight distinctive geomorphic signatures between different styles of change. To illustrate the interpretive power of the techniques in different settings, results are presented from analyses across a range of gravel-bed river types: a) the braided River Feshie, Scotland, UK; b) the formerly gravel-mined, wandering Sulphur Creek, California, USA; c) a heavily regulated reach of the Mokelumne River, California, USA that has been subjected to over 5 years of spawning habitat rehabilitation; and d) a restored meandering channel and floodplain of the Highland Water, New Forest, UK. Despite fundamentally different process suites between the study sites, the budget segregation technique is in each case able to aid in more reliable and meaningful geomorphic interpretations of DEM differences.
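
    A minimal sketch of DEM differencing with a minimum level of detection, assuming spatially uniform survey errors (the study's methods handle spatially variable uncertainty, which this toy example does not attempt):

      import numpy as np

      def dod_with_lod(dem_new, dem_old, sigma_new=0.1, sigma_old=0.1, t=1.96):
          """DEM of Difference thresholded at a minimum level of detection (LoD).

          LoD = t * sqrt(sigma_new**2 + sigma_old**2) propagates the two survey
          uncertainties; changes smaller than the LoD are treated as no change.
          """
          dod = dem_new - dem_old
          lod = t * np.sqrt(sigma_new ** 2 + sigma_old ** 2)
          return np.where(np.abs(dod) >= lod, dod, 0.0), lod

      # Toy surfaces: 0.5 m of deposition in one corner, survey noise elsewhere.
      rng = np.random.default_rng(4)
      old = rng.normal(100.0, 0.05, (20, 20))
      new = old + rng.normal(0.0, 0.05, (20, 20))
      new[:5, :5] += 0.5
      dod, lod = dod_with_lod(new, old)
      cell_area = 1.0   # m^2 per cell (assumed)
      print(f"LoD = {lod:.2f} m, net volume change = {dod.sum() * cell_area:.1f} m^3")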

  9. Evaluation of Rhenium Joining Methods

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.; Morren, Sybil H.

    1995-01-01

    Coupons of rhenium-to-C103 flat plate joints, formed by explosive and diffusion bonding, were evaluated in a series of shear tests. Shear testing was conducted on as-received, thermally-cycled (100 cycles, from 21 to 1100 C), and thermally-aged (3 and 6 hrs at 1100 C) joint coupons. Shear tests were also conducted on joint coupons with rhenium and/or C103 electron beam welded tabs to simulate the joint's incorporation into a structure. Ultimate shear strength was used as a figure of merit to assess the effects of the thermal treatment and the electron beam welding of tabs on the joint coupons. All of the coupons survived thermal testing intact and without any visible degradation. Two different lots of as-received, explosively-bonded joint coupons had ultimate shear strengths of 281 and 310 MPa and 162 and 223 MPa, respectively. As-received, diffusion-bonded coupons had ultimate shear strengths of 199 and 348 MPa. For the most part, the thermally-treated and rhenium weld tab coupons had shear strengths slightly reduced or within the range of the as-received values. Coupons with C103 weld tabs experienced a significant reduction in shear strength. The degradation of strength appeared to be the result of a poor heat sink provided during the electron beam welding. The C103 base material could not dissipate heat as effectively as rhenium, leading to the formation of a brittle rhenium-niobium intermetallic.

  10. Bathymetric survey of water reservoirs in north-eastern Brazil based on TanDEM-X satellite data.

    PubMed

    Zhang, Shuping; Foerster, Saskia; Medeiros, Pedro; de Araújo, José Carlos; Motagh, Mahdi; Waske, Bjoern

    2016-11-15

    Water scarcity in the dry season is a vital problem in dryland regions such as northeastern Brazil. Water supplies in these areas often come from numerous reservoirs of various sizes. However, inventory data for these reservoirs are often limited due to the expense and time required for their acquisition via field surveys, particularly in remote areas. Remote sensing techniques provide a valuable alternative to conventional reservoir bathymetric surveys for water resource management. In this study, single-pass TanDEM-X data acquired in bistatic mode were used to generate digital elevation models (DEMs) in the Madalena catchment, northeastern Brazil. Validation with differential global positioning system (DGPS) data from field measurements indicated an absolute elevation accuracy of approximately 1 m for the TanDEM-X derived DEMs (TDX DEMs). The DEMs derived from TanDEM-X data acquired at low water levels show significant advantages over bathymetric maps derived from field survey, particularly with regard to coverage, evenly distributed measurements and replication of reservoir shape. Furthermore, by mapping the dry reservoir bottoms with TanDEM-X data, TDX DEMs are free of emergent and submerged macrophytes, independent of water depth (e.g. >10 m), water quality and even weather conditions. Thus, the method is superior to other existing bathymetric mapping approaches, particularly for inland water bodies. The proposed approach relies on (nearly) dry reservoir conditions at the time of image acquisition and is thus restricted to areas that show considerable water level variations. However, comparisons between the TDX DEM and the bathymetric map derived from field surveys show that the amount of water retained during the dry phase has only a marginal impact on the total water volume derived from the TDX DEM. Overall, DEMs generated from bistatic TanDEM-X data acquired in low water periods constitute a useful and efficient data source for deriving reservoir bathymetry and show
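
    Once a dry-bed DEM is available, a stage-storage relationship follows directly by summing water depths over the inundated cells. A hedged sketch with a toy bowl-shaped bed (not the Madalena data):

      import numpy as np

      def reservoir_volume(dem, water_level, cell_area):
          """Storage volume [m^3] for a given water surface elevation.

          Sums the water depth over all DEM cells lying below the water level.
          """
          depth = water_level - dem
          return float(np.sum(depth[depth > 0]) * cell_area)

      # Toy bowl-shaped reservoir bed (illustrative only).
      y, x = np.mgrid[-50:50, -50:50]
      dem = 100.0 + 0.002 * (x ** 2 + y ** 2)   # bed elevations [m]
      cell_area = 12.0 * 12.0                   # assumed 12 m posting [m^2]
      for level in (101.0, 102.0, 103.0):
          print(level, round(reservoir_volume(dem, level, cell_area)))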

  11. Methods of Generating and Evaluating Hypertext.

    ERIC Educational Resources Information Center

    Blustein, James; Staveley, Mark S.

    2001-01-01

    Focuses on methods of generating and evaluating hypertext. Highlights include historical landmarks; nonlinearity; literary hypertext; models of hypertext; manual, automatic, and semi-automatic generation of hypertext; mathematical models for hypertext evaluation, including computing coverage and correlation; human factors in evaluation; and…

  12. Evaluation Methods for Intelligent Tutoring Systems Revisited

    ERIC Educational Resources Information Center

    Greer, Jim; Mark, Mary

    2016-01-01

    The 1993 paper in "IJAIED" on evaluation methods for Intelligent Tutoring Systems (ITS) still holds up well today. Basic evaluation techniques described in that paper remain in use. Approaches such as kappa scores, simulated learners and learning curves are refinements on past evaluation techniques. New approaches have also arisen, in…

  13. Quantitative Methods for Software Selection and Evaluation

    DTIC Science & Technology

    2006-09-01

    Bandor, Michael S.; Acquisition Support Program, September 2006. When performing a "buy" analysis and selecting a product as part of a software acquisition strategy, most organizations will consider primarily

  14. [A method of iris image quality evaluation].

    PubMed

    Murat, Hamit; Mao, Dawei; Tong, Qinye

    2006-04-01

    Iris image quality evaluation plays a very important part in computer-based iris recognition. An iris image quality evaluation method was introduced in this study to distinguish good images from bad images caused by pupil distortion, blurred boundaries, non-concentric circles, and severe occlusion by eyelids and eyelashes. Tests based on this method gave good results.

  15. Evaluation Measures and Methods: Some Intersections.

    ERIC Educational Resources Information Center

    Elliott, John

    The literature is reviewed for four combinations of evaluation measures and methods: traditional methods with traditional measures (T-Meth/T-Mea), nontraditional methods with traditional measures (N-Meth/T-Mea), traditional methods with nontraditional measures (T-Meth/N-Mea), and nontraditional methods with nontraditional measures (N-Meth/N-Mea).…

  16. Creating High Quality DEMs of Large Scale Fluvial Environments Using Structure-from-Motion

    NASA Astrophysics Data System (ADS)

    Javernick, L. A.; Brasington, J.; Caruso, B. S.; Hicks, M.; Davies, T. R.

    2012-12-01

    During the past decade, advances in survey and sensor technology have generated new opportunities to investigate the structure and dynamics of fluvial systems. Key geomatic technologies include the Global Positioning System (GPS), digital photogrammetry, LiDAR, and terrestrial laser scanning (TLS). Their application has resulted in a profound increase in the dimensionality of topographic surveys, from cross-sections to distributed 3d point clouds and digital elevation models (DEMs). Each of these technologies has been used successfully to derive high-quality DEMs of fluvial environments; however, they often require specialized and expensive equipment, such as a TLS or large-format camera, or bespoke platforms such as survey aircraft, and consequently make data acquisition prohibitively expensive or highly labour intensive, thus restricting the extent and frequency of surveys. Recently, advances in computer vision and image analysis have led to the development of a novel photogrammetric approach that is fully automated and suitable for use with simple compact (non-metric) cameras. In this paper, we evaluate a new photogrammetric method, Structure-from-Motion (SfM), and demonstrate how this can be used to generate DEMs of comparable quality to airborne LiDAR, using consumer-grade cameras at low cost. Using the SfM software PhotoScan (version 0.8.5), high-quality DEMs were produced for a 1.6 km reach and a 3.3 km reach of the braided Ahuriri River, New Zealand. Photographs used for DEM creation were acquired from a helicopter flying at 600 m and 800 m above ground level using a consumer-grade 10.1 mega-pixel, non-metric digital camera, resulting in object-space image resolutions of 0.12 m and 0.16 m respectively. Point clouds for the two study reaches were generated using 147 and 224 photographs respectively, and were extracted automatically in an arbitrary coordinate system; RTK-GPS located ground control points (GCPs) were used to define a 3d non

  17. An assessment of TanDEM-X GlobalDEM over rural and urban areas

    NASA Astrophysics Data System (ADS)

    Koudogbo, Fifamè N.; Duro, Javier; Huber, Martin; Rudari, Roberto; Eddy, Andrew; Lucas, Richard

    2014-10-01

    A Digital Elevation Model (DEM) is a key input for the development of risk management systems. The main limitation of currently available global DEMs is their low resolution. DEMs such as SRTM 90 m or ASTER are globally available free of charge, but offer limited use, for example, to flood modelers in most geographic areas. TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurement), the first bistatic SAR mission, can fill this gap. The mission objective is the generation of a consistent global digital elevation model with an unprecedented accuracy according to the HRTI-3 (High Resolution Terrain Information) specifications. The mission opens a new era in risk assessment. In the framework of ALTAMIRA INFORMATION research activities, the DIAPASON (Differential Interferometric Automated Process Applied to Survey Of Nature) processing chain has been successfully adapted to TanDEM-X CoSSC (Coregistered Slant Range Single Look Complex) data processing. In this study the capability of CoSSC data for DEM generation is investigated. Within the on-going FP7 RASOR project (Rapid Analysis and Spatialisation Of Risk), the generated DEMs are compared with the Intermediate DEM derived from the first TanDEM-X global coverage. The results are presented and discussed.

  18. Safeguards Evaluation Method for evaluating vulnerability to insider threats

    SciTech Connect

    Al-Ayat, R.A.; Judd, B.R.; Renis, T.A.

    1986-01-01

    As protection of DOE facilities against outsiders increases to acceptable levels, attention is shifting toward achieving comparable protection against insiders. Since threats and protection measures for insiders are substantially different from those for outsiders, new perspectives and approaches are needed. One such approach is the Safeguards Evaluation Method. This method helps in assessing safeguards vulnerabilities to theft or diversion of special nuclear material (SNM) by insiders. The Safeguards Evaluation Method-Insider Threat is a simple model that can be used by safeguards and security planners to evaluate safeguards and proposed upgrades at their own facilities. The method is used to evaluate the effectiveness of safeguards in both timely detection (in time to prevent theft) and late detection (after the fact). The method considers the various types of potential insider adversaries working alone or in collusion with other insiders. The approach can be used for a wide variety of facilities with various quantities and forms of SNM. An Evaluation Workbook provides documentation of the baseline assessment; this simplifies subsequent on-site appraisals. Quantitative evaluation is facilitated by an accompanying computer program. The method significantly increases an evaluation team's on-site analytical capabilities, thereby producing a more thorough and accurate safeguards evaluation.

  19. Research on image scrambling degree evaluation method

    NASA Astrophysics Data System (ADS)

    Bai, Sen; Liao, Xiaofeng; Chen, Jinyu; Liu, Yijun; Wang, Xiao

    2005-12-01

    This paper discusses the problem of evaluating the image scrambling degree (ISD). Inspired by evaluation methods for image texture characteristics, three new metrics for objectively assessing the ISD are proposed. The first method utilizes the energy-concentration property of the Walsh transform (WT) and takes into account the properties that a good ISD measure should satisfy. The second method uses the angular second moment (ASM) of the image gray-level co-occurrence matrix (GLCM). The third method combines the entropy of the GLCM with image texture characteristics. Experimental results show that the proposed metrics are effective for assessing the ISD and correlate well with subjective assessment. Considering computational complexity, the first evaluation method, based on the WT, is remarkably superior to the methods based on the ASM and the GLCM entropy in terms of time cost.
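
    To make the second metric concrete, the angular second moment of a gray-level co-occurrence matrix can be computed with plain NumPy for a single horizontal pixel offset, as sketched below (quantisation to 8 gray levels and the offset choice are assumptions of this illustration); a well-scrambled image should yield a lower ASM than the original.

      import numpy as np

      def glcm_asm(image, levels=8):
          """Angular second moment (ASM) of the gray-level co-occurrence matrix.

          Uses a single horizontal offset (0, 1); a low ASM indicates high texture
          disorder, which is what a well-scrambled image should exhibit.
          """
          q = np.clip((image.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
          pairs_a = q[:, :-1].ravel()            # left pixel of each horizontal pair
          pairs_b = q[:, 1:].ravel()             # right pixel of each horizontal pair
          glcm = np.zeros((levels, levels))
          np.add.at(glcm, (pairs_a, pairs_b), 1)
          glcm /= glcm.sum()                     # normalise to joint probabilities
          return float(np.sum(glcm ** 2))        # ASM = sum of squared probabilities

      # A smooth gradient image versus a randomly scrambled copy of it.
      rng = np.random.default_rng(5)
      img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
      scrambled = rng.permutation(img.ravel()).reshape(img.shape)
      print(glcm_asm(img), glcm_asm(scrambled))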

  20. A simplified DEM numerical simulation of vibroflotation without backfill

    NASA Astrophysics Data System (ADS)

    Jiang, M. J.; Liu, W. W.; He, J.; Sun, Y.

    2015-09-01

    Vibroflotation is one of the deep vibratory compaction techniques for ground reinforcement. The method densifies the soil and improves its mechanical properties, thus helping to protect people's lives and property from geological disasters. The macro-scale reinforcement mechanisms of the vibroflotation method have been investigated by numerical simulations and by laboratory and in-situ experiments. However, little attention has been paid to its micro-mechanism, which is essential to fully understand the principle of the ground reinforcement. The discrete element method (DEM), based on discrete mechanics, is well suited to solving large-deformation and failure problems. This paper investigates the macro-micro mechanism of vibroflotation without backfill under two conditions, i.e., with and without consideration of the ground water, by incorporating an inter-particle rolling resistance model in the DEM simulations. The conclusions are as follows: the DEM simulations incorporating rolling resistance replicate the mechanical response of the soil assemblages well and are in line with practical observations. The void ratio of the granular soil fluctuates up and down during vibroflotation, and finally reduces to a lower value. It is more efficient to densify ground without water than ground with water.

  1. Application of improved extension evaluation method to water quality evaluation

    NASA Astrophysics Data System (ADS)

    Wong, Heung; Hu, Bao Qing

    2014-02-01

    The extension evaluation method (EEM) has been developed and applied to evaluate water quality. There are, however, negative values in the correlative degree (the water quality grades from the EEM) after the calculation. This is not natural, as the correlative degree is essentially an index based on grades (rankings) of water quality from different methods, which are positive. To overcome this negativity issue, the interval clustering approach (ICA) was introduced, which is based on the grey clustering approach (GCA) and interval-valued fuzzy sets. However, the computing process and formulas of the ICA are rather complex. This paper provides a novel method, an improved extension evaluation method, that avoids negative values in the correlative degree. To demonstrate the proposed approach, the improved EEM is applied to evaluate the water quality of three different cross-sections of the Fen River, the second major branch river of the Yellow River in China, and of the Han Jiang River, one of the major branch rivers of the Yangtse River in China. The results of the improved evaluation method are basically the same as the official water quality grades. The proposed method also possesses the same merit as the EEM and ICA methods, in that it can be applied to assess water quality when the levels of attributes are defined in terms of intervals in the water quality criteria; existing methods are mostly applicable to data in the form of single numeric values.

  2. Improving the TanDEM-X DEM for flood modelling using flood extents from Synthetic Aperture Radar images.

    NASA Astrophysics Data System (ADS)

    Mason, David; Trigg, Mark; Garcia-Pintado, Javier; Cloke, Hannah; Neal, Jeffrey; Bates, Paul

    2015-04-01

    Many floodplains in the developed world have now been imaged with high resolution airborne LiDAR or InSAR, giving accurate DEMs that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X World DEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution SAR images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. The paper discusses an additional use of SAR flood extents to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving the DEM for future flood modelling studies in this area. The method is based on the fact that for larger rivers the water elevation changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as a sample of heights with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate height estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be at least no higher than the refined heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must be no lower than the refined heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the

  3. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purposes of this study, which focuses on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  4. Laboratory evaluation of PCBs encapsulation method

    EPA Science Inventory

    Effectiveness and limitations of the encapsulation method for reducing polychlorinated biphenyls (PCBs) concentrations in indoor air and contaminated surface have been evaluated in the laboratory study. Ten coating materials such as epoxy and polyurethane coatings, latex paint, a...

  5. Influence of the external DEM on PS-InSAR processing and results on Northern Appennine slopes

    NASA Astrophysics Data System (ADS)

    Bayer, B.; Schmidt, D. A.; Simoni, A.

    2014-12-01

    We present an InSAR analysis of slow-moving landslides in the Northern Appennines, Italy, and assess the dependence of the results on the choice of DEM. In recent years, advanced processing techniques for synthetic aperture radar interferometry (InSAR) have been applied to measure slope movements. The persistent scatterer (PS-InSAR) approach is probably the most widely used, and some codes are now available in the public domain. The Stanford Method of Persistent Scatterers (StaMPS) has been successfully used to analyze landslide areas. One problematic step in the processing chain is the choice of an external DEM that is used to model and remove the topographic phase in a series of interferograms in order to obtain the phase contribution caused by surface deformation. The choice is not trivial, because the PS-InSAR results differ significantly in terms of PS identification, positioning, and the resulting deformation signal. We use four different DEMs to process a set of 18 ASAR (Envisat) scenes over a mountainous area (~350 km2) of the Northern Appennines of Italy, using StaMPS. Slow-moving landslides control the evolution of the landscape and cover approximately 30% of the territory. Our focus in this presentation is to evaluate the influence of DEM resolution and accuracy by comparing PS-InSAR results. On an areal basis, we perform a statistical analysis of displacement time-series to make the comparison. We also consider two case studies to illustrate the differences in terms of PS identification, number and estimated displacements. It is clearly shown that DEM accuracy positively influences the number of PS, while line-of-sight rates differ from case to case and can result in deformation signals that are difficult to interpret. We also take advantage of statistical tools to analyze the obtained time-series datasets for the whole study area. Results indicate differences in the style and amount of displacement that can be related to the accuracy of the employed DEM.

  6. Glacier Volume Change Estimation Using Time Series of Improved Aster Dems

    NASA Astrophysics Data System (ADS)

    Girod, Luc; Nuth, Christopher; Kääb, Andreas

    2016-06-01

    Volume change data are critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system on board the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivalled, though not fully exploited for change analysis due to limitations in data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficient (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extract DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter, so that the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated, and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components, so that the latter seems not to be

  7. Morphological changes at Mt. Etna detected by TanDEM-X

    NASA Astrophysics Data System (ADS)

    Wegmuller, Urs; Bonforte, Alessandro; De Beni, Emanuela; Guglielmino, Francesco; Strozzi, Tazio

    2014-05-01

    We compared the 2012 TanDEM-X model with the 2000 SRTM DEM in order to evaluate the morphological changes that occurred on the volcano during the 12-year interval. The pixel size of the SRTM DEM is about 90 m, and we resampled the TanDEM-X model to fit this value. The results show that most of the variations occurred in the Valle del Bove and in the summit crater areas. In order to compare DEMs with the same pixel size, we performed a further comparison with a 5 m ground resolution optical DEM, produced in 2004 and covering only the summit area. The variations in topography have been compared with ground mapping surveys, confirming a good correlation with the spatial extent of the lava flows and of the pyroclastic deposits emplaced on Mt. Etna in the last seven years. The comparison between the two DEMs (2004-2012) allows the amount of volcanics emitted to be calculated and the growth and development of the New South East Crater (NSEC) to be clearly monitored. TanDEM-X is a useful tool for monitoring volcanic areas characterized by quite frequent activity (a paroxysm every 5-10 days), such as Mt. Etna, especially when the activity is concentrated in areas that are not easily accessible.

  8. Assessment and Evaluation Methods for Access Services

    ERIC Educational Resources Information Center

    Long, Dallas

    2014-01-01

    This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…

  9. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ∼0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ∼2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ∼0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  10. Development of a 'bare-earth' SRTM DEM product

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Paiva, Rodrigo; Durand, Michael; Alsdorf, Douglas; Bates, Paul

    2015-04-01

    We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hydraulic modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hydrodynamic modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction based on information about canopy height and density. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas, with the overall mean bias reduced by between 75 and 92%, from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33%, from 7.12 m to 4.80 m. As expected, improvements are greater in areas with denser vegetation. The final 'bare-earth' SRTM dataset is available at 3 arc-seconds with lower vertical height errors and less noise than the original SRTM product.
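
    The general form of such a spatially varying correction can be sketched as subtracting an estimated vegetation bias that grows with canopy height and cover; the penetration factor below is an assumed, illustrative value and not a parameter of the published product.

      import numpy as np

      def remove_vegetation_bias(srtm, canopy_height, canopy_cover, penetration=0.5):
          """Subtract an estimated vegetation bias from the SRTM surface.

          bias ~ canopy_cover * (1 - penetration) * canopy_height, i.e. the C-band
          phase centre is assumed to sit partway down into the canopy.
          """
          bias = canopy_cover * (1.0 - penetration) * canopy_height
          return srtm - bias

      # Toy 3x3 tiles: SRTM heights [m], canopy height [m] and canopy cover [0-1].
      srtm = np.array([[120.0, 121.0, 119.0],
                       [118.0, 117.0, 116.0],
                       [115.0, 114.0, 113.0]])
      canopy_h = np.array([[25.0, 20.0, 0.0],
                           [30.0, 10.0, 0.0],
                           [35.0,  5.0, 0.0]])
      cover = np.array([[0.9, 0.6, 0.0],
                        [0.8, 0.3, 0.0],
                        [1.0, 0.2, 0.0]])
      print(remove_vegetation_bias(srtm, canopy_h, cover))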

  11. Efficient parallel CFD-DEM simulations using OpenMP

    NASA Astrophysics Data System (ADS)

    Amritkar, Amit; Deb, Surya; Tafti, Danesh

    2014-01-01

    The paper describes parallelization strategies for the Discrete Element Method (DEM) used for simulating dense particulate systems coupled to Computational Fluid Dynamics (CFD). While the field equations of CFD are best parallelized by spatial domain decomposition techniques, the N-body particulate phase is best parallelized over the number of particles. When the two are coupled together, both modes are needed for efficient parallelization. It is shown that under these requirements, OpenMP thread-based parallelization has advantages over MPI processes. Two representative examples, fairly typical of dense fluid-particulate systems, are investigated, including the validation of the DEM-CFD and thermal-DEM implementations against experiments. Fluidized bed calculations are performed on beds with uniform particle loading, parallelized with MPI and OpenMP. It is shown that as the number of processing cores and the number of particles increase, the communication overhead of building ghost particle lists at processor boundaries dominates the time to solution, and OpenMP, which does not require this step, is about twice as fast as MPI. In rotary kiln heat transfer calculations, which are characterized by spatially non-uniform particle distributions, the low overhead of switching the parallelization mode in OpenMP eliminates the load imbalances, but introduces increased overheads in fetching non-local data. In spite of this, it is shown that OpenMP is between 50% and 90% faster than MPI.

  12. Research on evaluation method of CMOS camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoqiang; Han, Weiqiang; Cui, Lanfang

    2014-09-01

    In some professional imaging applications, key parameters of a CMOS camera must be tested in order to evaluate the performance of the device. To address this requirement, this paper proposes a comprehensive test method for evaluating CMOS cameras. Because CMOS cameras exhibit significant fixed-pattern noise, the method adopts a pixel-based 'photon transfer curve' approach to measure the gain and the read noise of the camera. The advantage of this approach is that it effectively eliminates the error introduced by response nonlinearity. The cause of the photoelectric response nonlinearity of CMOS cameras is then analyzed theoretically, and a formula for calculating the response nonlinearity is derived. Finally, we apply the proposed method to test a 2560×2048 pixel CMOS camera and analyze the validity and feasibility of the method.
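
    The photon transfer analysis can be illustrated with a rough sketch on synthetic frames; the variance model used here (temporal variance = mean/gain + read noise squared, in DN) and all numbers are assumptions for illustration, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frames(mean_e, gain_e_per_dn=2.0, read_noise_dn=3.0,
                    n_frames=50, shape=(64, 64)):
    """Synthetic flat-field frames: Poisson shot noise in electrons plus Gaussian read noise in DN."""
    electrons = rng.poisson(mean_e, size=(n_frames, *shape)).astype(float)
    return electrons / gain_e_per_dn + rng.normal(0, read_noise_dn, size=(n_frames, *shape))

# Per-pixel temporal mean and variance at several illumination levels (this removes
# fixed-pattern noise), then a linear fit of variance vs. mean:
# slope = 1/gain, intercept = read noise squared.
means, variances = [], []
for level in [200, 500, 1000, 2000, 4000]:           # mean signal in electrons
    frames = simulate_frames(level)
    means.append(frames.mean(axis=0).mean())           # average of per-pixel temporal means
    variances.append(frames.var(axis=0, ddof=1).mean())

slope, intercept = np.polyfit(means, variances, 1)
gain = 1.0 / slope                                     # e-/DN
read_noise = np.sqrt(max(intercept, 0.0))              # DN
print(f"gain ~ {gain:.2f} e-/DN, read noise ~ {read_noise:.2f} DN")
```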

  13. Uncertainty of SWAT model at different DEM resolutions in a large mountainous watershed.

    PubMed

    Zhang, Peipei; Liu, Ruimin; Bao, Yimeng; Wang, Jiawei; Yu, Wenwen; Shen, Zhenyao

    2014-04-15

    The objective of this study was to enhance understanding of the sensitivity of the SWAT model to the resolution of Digital Elevation Models (DEMs) based on the analysis of multiple evaluation indicators. The Xiangxi River, a large tributary of the Three Gorges Reservoir in China, was selected as the study area. A range of 17 DEM spatial resolutions, from 30 to 1000 m, was examined, and the annual and monthly model outputs based on each resolution were compared. The following results were obtained: (i) sediment yield was greatly affected by DEM resolution; (ii) the prediction of dissolved oxygen load was significantly affected by DEM resolutions coarser than 500 m; (iii) Total Nitrogen (TN) load was not greatly affected by the DEM resolution; (iv) Nitrate Nitrogen (NO₃-N) and Total Phosphorus (TP) loads were slightly affected by the DEM resolution; and (v) flow and Ammonia Nitrogen (NH₄-N) load were essentially unaffected by the DEM resolution. The flow and dissolved oxygen load decreased more significantly in the dry season than in the wet and normal seasons. Excluding flow and dissolved oxygen, the uncertainties of the other Hydrology/Non-point Source (H/NPS) pollution indicators were greater in the wet season than in the dry and normal seasons. Considering the temporal distribution of uncertainties, the optimal DEM resolution was 30-200 m for flow, 30-100 m for sediment and TP, 30-300 m for dissolved oxygen and NO₃-N, 30-70 m for NH₄-N, and 30-150 m for TN.

  14. ArcGeomorphometry: A toolbox for geomorphometric characterisation of DEMs in the ArcGIS environment

    NASA Astrophysics Data System (ADS)

    Rigol-Sanchez, Juan P.; Stuart, Neil; Pulido-Bosch, Antonio

    2015-12-01

    A software tool is described for the extraction of geomorphometric land surface variables and features from Digital Elevation Models (DEMs). The ArcGeomorphometry Toolbox consists of a series of Python/Numpy processing functions, presented through an easy-to-use graphical menu for the widely used ArcGIS package. Although many GIS provide some operations for analysing DEMs, the methods are often only partially implemented and can be difficult to find and use effectively. Since the results of automated characterisation of landscapes from DEMs are influenced by the extent being considered, the resolution of the source DEM and the size of the kernel (analysis window) used for processing, we have developed a tool to allow GIS users to flexibly apply several multi-scale analysis methods to parameterise and classify a DEM into discrete land surface units. Users can control the threshold values for land surface classifications. The size of the processing kernel can be used to identify land surface features across a range of landscape scales. The pattern of land surface units from each attempt at classification is displayed immediately and can then be processed in the GIS alongside additional data that can assist with a visual assessment and comparison of a series of results. The functionality of the ArcGeomorphometry toolbox is described using an example DEM.
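
    A hedged sketch of the kind of kernel-based, multi-scale land-surface parameter such a toolbox computes (this is not the toolbox's API; `slope_degrees` is a hypothetical helper and the threshold is arbitrary):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def slope_degrees(dem, cellsize, kernel=1):
    """Slope from a DEM, optionally smoothed over a square kernel first.

    Smoothing with a larger kernel before differencing approximates computing
    the land-surface parameter at a coarser analysis scale.
    """
    if kernel > 1:
        dem = uniform_filter(dem, size=kernel, mode="nearest")
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Classify a toy DEM into 'flat' vs 'sloping' units at two analysis scales.
rng = np.random.default_rng(1)
dem = np.outer(np.linspace(0, 50, 100), np.ones(100)) + rng.normal(0, 0.5, (100, 100))
for k in (1, 9):
    s = slope_degrees(dem, cellsize=30.0, kernel=k)
    print(f"kernel={k}: {np.mean(s > 1.0):.0%} of cells classified as sloping")
```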

  15. Further evaluation of traditional icing scaling methods

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    1996-01-01

    This report provides additional evaluations of two methods to scale icing test conditions; it also describes a hybrid technique for use when scaled conditions are outside the operating envelope of the test facility. The first evaluation is of the Olsen method, which can be used to scale the liquid-water content in icing tests, and the second is of the AEDC (Ruff) method, which is used when the test model is less than full size. Equations for both scaling methods are presented in the paper, and the methods were evaluated by performing icing tests in the NASA Lewis Icing Research Tunnel (IRT). The Olsen method was tested using 53 cm chord NACA 0012 airfoils. Tests covered liquid-water contents which varied by as much as a factor of 1.8. The Olsen method was generally effective in giving scale ice shapes which matched the reference shapes for these tests. The AEDC method was tested with NACA 0012 airfoils with chords from 18 cm to 53 cm. The 53 cm chord airfoils were used in reference tests, and 1/2 and 1/3 scale tests were made at conditions determined by applying the AEDC scaling method. The scale and reference airspeeds were matched in these tests. The AEDC method was found to provide fairly effective scaling for 1/2 size tests, but for 1/3 size models scaling was generally less effective. In addition to these two scaling methods, a hybrid approach was also tested in which the Olsen method was used to adjust the LWC after size was scaled using the constant Weber number method. This approach was found to be an effective way to test when scaled conditions would otherwise be outside the capability of the test facility.

  16. Graphical methods for evaluating covering arrays

    SciTech Connect

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2015-08-10

    Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, but not requiring that the appearance of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage when comparing and selecting between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.
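
    As a simple numeric complement to such graphics, one can count, for each pair of columns, how many of the possible level combinations actually appear; a minimal sketch (not the authors' plots):

```python
import numpy as np
from itertools import combinations

def pairwise_coverage(design):
    """Fraction of level pairs covered for each pair of columns of a design."""
    design = np.asarray(design)
    results = {}
    for i, j in combinations(range(design.shape[1]), 2):
        total = len(np.unique(design[:, i])) * len(np.unique(design[:, j]))
        seen = {(a, b) for a, b in zip(design[:, i], design[:, j])}
        results[(i, j)] = len(seen) / total
    return results

# A small 2-level array on 4 factors: check which column pairs are fully covered.
array = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [1, 0, 1, 1],
         [1, 1, 0, 1]]
for pair, frac in pairwise_coverage(array).items():
    print(pair, f"{frac:.0%}")
```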

  17. Energy efficiency assessment methods and tools evaluation

    SciTech Connect

    McMordie, K.L.; Richman, E.E.; Keller, J.M.; Dixon, D.R.

    1994-08-01

    Many different methods of assessing the energy savings potential at federal installations and identifying attractive projects for capital investment have been used by the different federal agencies. These methods range from high-level estimating tools to detailed design tools, both manual and software-assisted. They serve different purposes and provide results that are used for different parts of the project identification and implementation process. Seven different assessment methods are evaluated in this study. These methods were selected by the program managers at the DoD Energy Policy Office and the DOE Federal Energy Management Program (FEMP). Each of the methods was applied to similar buildings at Bolling Air Force Base (AFB), unless it was inappropriate or the method was designed for an installation-wide analysis rather than focusing on particular buildings. Staff at Bolling AFB controlled the collection of data.

  18. Veterinary and human vaccine evaluation methods.

    PubMed

    Knight-Jones, T J D; Edmond, K; Gubbins, S; Paton, D J

    2014-06-07

    Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines.

  19. Veterinary and human vaccine evaluation methods

    PubMed Central

    Knight-Jones, T. J. D.; Edmond, K.; Gubbins, S.; Paton, D. J.

    2014-01-01

    Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines. PMID:24741009

  20. Modified method of accommodative facility evaluation

    NASA Astrophysics Data System (ADS)

    Kedzia, Boleslaw; Pieczyrak, Danuta; Tondel, Grazyna; Maples, Willis C.

    1998-10-01

    Background: Accommodative facility testing is a common test performed by optometrists to investigate an individual's skill at focusing on objects at near and at far. The traditional test, however, harbors possible confounding variables, including individual variance in reaction time, visual acuity, verbal skills and oculomotor function. We have designed a test procedure to control these variables. Methods: Children were evaluated with a traditional accommodative facility test, a test which evaluated reaction time and language skill but without an accommodative component (plano lenses), and a test which evaluated reaction time, language skill and accommodative facility (+/- 2.00 D lenses). Results: Reaction speed was 2.9 sec/cycle with the plano lenses (dominant eye). Speed with the +/- 2.00 D lenses was 6.6 sec/cycle for the dominant eye, and the monocular speed of accommodation was calculated to average 3.7 sec/cycle. Normative data reported in the literature were calculated to be 5.5 sec/cycle. Discussion: We found that our method, which controls for confounding variables, and the traditional method reveal similar findings, but that individual subjects would pass one method and fail the other. This is attributed to variation in reaction time and digit-naming skill. Conclusions: Although both methods yield similar results, both should be employed in those who score below the expected finding, to tease out whether the fault lies in the reaction time/language area or reflects a true accommodative facility dysfunction.

  1. Evaluating Sleep Disturbance: A Review of Methods

    NASA Technical Reports Server (NTRS)

    Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    There are three general approaches to evaluating noise-related sleep disturbance: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus, or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.

  2. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    The rapid pace of road construction imposes demands for fast, high-quality methods of earthwork quality evaluation. Dynamic methods are now adopted in numerous civil engineering fields, and evaluation of earthwork quality in particular can be sped up using dynamic equipment. This paper presents the results of parallel measurements with selected devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, within the boundary conditions of the equipment used. The presented methods are quick, results are available immediately after measurement, and they are thus suitable when construction works have to be performed in a short period of time.

  3. [Tensiomyography as a method of evaluating muscle status].

    PubMed

    Markulincić, Branko; Muraja, Sonja

    2007-01-01

    Sports results, as well as the results of rehabilitation treatments, are closely related to a detailed, strictly individualized programme of sports and rehabilitation training, and it is vitally important to monitor and evaluate results constantly. Along with already standardized methods of evaluating the neuromuscular system, such as electrodynamometry and isokinetic dynamometry on Cybex, tensiomyography (TMG) has been introduced as a method of assessing muscle status. TMG is a non-invasive, selective, objective method designed to measure activation time, delay time, contraction time, relaxation time and the intensity of muscle contraction under submaximal electrostimulation. The method is based on measuring muscle belly enlargement with a superficially placed precision electromagnetic sensor. TMG enables the examination of some otherwise inaccessible muscles, such as the gluteus maximus, and also the selective evaluation of a single muscle head (for example the vastus medialis, vastus lateralis and rectus femoris of the quadriceps). Estimation of the harmonization between agonist and antagonist muscles, between synergistic muscles, and between the same muscles on the left and right sides of the body is based on the muscles' biomechanical properties, i.e. parameters calculated from the TMG response. Total harmonization (100%) is hardly ever achieved; the lowest level sufficient for muscle group functionality is defined as 80% for lateral and 65% for agonist/synergist harmonization. Harmonization below this level reflects past injuries or muscle adaptation, or indicates increased exposure to injury.

  4. Graphical methods for evaluating covering arrays

    DOE PAGES

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2015-08-10

    Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, but not requiring that the appearance of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage when comparing and selecting between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.

  5. Field evaluation of a VOST sampling method

    SciTech Connect

    Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.

    1994-12-31

    The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent) that is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes, to avoid analytical difficulties associated with the analysis of the sequential-bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as a source of test compounds. Statistical tests of the comparability of the methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.

  6. An evaluation of fracture analysis methods

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1985-01-01

    The results of an experimental and predictive round robin on the applications of fracture analysis methods are presented. The objective of the round robin was to verify whether fracture analysis methods currently in use can or cannot predict failure loads on complex structural components containing cracks. Fracture results from tests on a number of compact specimens were used to make the predictions. The accuracy of the prediction methods was evaluated in terms of the variation in the ratio of predicted to experimental failure loads, and the prediction methods are ranked in order of minimum standard error. The range of applicability of the different methods was also considered in assessing their usefulness. For 7075-T651 aluminum alloy, the best methods were: the effective K sub R curve; the critical crack-tip opening displacement (CTOD) criterion using a finite element analysis; and the K sub R curve with the Dugdale model. For the 2024-T351 aluminum alloy, the best methods included: the two-parameter fracture criterion (TPFC); the CTOD parameter using finite element analysis; the K-curve with the Dugdale model; the deformation plasticity failure assessment diagram (DPFAD); and the effective K sub R curve with a limit load condition. For 304 stainless steel, the best methods were the limit load analysis, the CTOD criterion using finite-element analysis, TPFC and DPFAD. Some sample experimental results are given in an appendix.

  7. Electromagnetic imaging methods for nondestructive evaluation applications.

    PubMed

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, in hardware and software dedicated to imaging and image processing, and in materials science have greatly expanded the application fields, made system designs more sophisticated, and made the potential of electromagnetic NDE imaging seem almost unlimited. This review provides a comprehensive summary of research on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions.

  8. Electromagnetic Imaging Methods for Nondestructive Evaluation Applications

    PubMed Central

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, in hardware and software dedicated to imaging and image processing, and in materials science have greatly expanded the application fields, made system designs more sophisticated, and made the potential of electromagnetic NDE imaging seem almost unlimited. This review provides a comprehensive summary of research on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693

  9. Auf der Suche nach dem Unendlichen.

    NASA Astrophysics Data System (ADS)

    Fraser, G.; Lillestøl, E.; Sellevåg, I.

    This book is a German translation by C. Ascheron and J. Urbahn of "The search for infinity: solving the mysteries of the universe", published in 1994. The book vividly describes the milestones that humankind, since antiquity, has reached and left behind in the search for the infinite. It contains short biographies of the most important researchers, clearly written texts, and explanations of the key technical terms.

  10. CFD-DEM simulations of current-induced dune formation and morphological evolution

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-06-01

    Understanding the fundamental mechanisms of sediment transport, particularly those during the formation and evolution of bedforms, is of critical scientific importance and has engineering relevance. Traditional approaches to sediment transport simulation rely heavily on empirical models, which are not able to capture the physics-rich, regime-dependent behaviors of the process. With the increase in available computational resources over the past decade, CFD-DEM (computational fluid dynamics-discrete element method) has emerged as a viable high-fidelity method for the study of sediment transport. However, a comprehensive, quantitative study of the generation and migration of different sediment bed patterns using CFD-DEM is still lacking. In this work, current-induced sediment transport problems in a wide range of regimes are simulated, including 'flat bed in motion', 'small dune', 'vortex dune' and suspended transport. Simulations are performed using SediFoam, an open-source, massively parallel CFD-DEM solver developed by the authors. This is a general-purpose solver for particle-laden flows, tailored for particle transport problems. Validation tests are performed to demonstrate the capability of CFD-DEM across the full range of sediment transport regimes. Comparison of simulation results with experimental and numerical benchmark data demonstrates the merits of the CFD-DEM approach. In addition, the improvements of the present simulations over existing CFD-DEM studies are presented. The present solver gives a more accurate prediction of sediment transport rate by properly accounting for the influence of particle volume fraction on the fluid flow. In summary, this work demonstrates that CFD-DEM is a promising particle-resolving approach for probing the physics of current-induced sediment transport.

  11. Spaceborne radar interferometry for coastal DEM construction

    USGS Publications Warehouse

    Hong, S.-H.; Lee, C.-W.; Won, J.-S.; Kwoun, Oh-Ig; Lu, Zhiming

    2005-01-01

    Topographic features in coastal regions, including tidal flats, change more significantly than inland terrain and are characterized by extremely low slopes. High-precision DEMs are required to monitor dynamic changes in coastal topography. It is difficult to obtain coherent interferometric SAR pairs, especially over tidal flats, mainly because of variations in tidal conditions. Here we focus on i) the coherence of multi-pass ERS SAR interferometric pairs and ii) DEM construction from ERS-ENVISAT pairs. The coherence of multi-pass ERS interferograms was good enough to construct a DEM under favorable tidal conditions. Coherence in sand-dominated areas was generally higher than over muddy surfaces, so coarse-grained coastal areas are favorable for multi-pass interferometry. Utilization of ERS-ENVISAT interferometric pairs is attracting growing interest. We carried out an investigation using a cross-interferometric pair with a normal baseline of about 1.3 km, a 30-minute temporal separation and a height sensitivity of about 6 meters. Preliminary results of ERS-ENVISAT interferometry were not successful due to the baseline and unfavorable scattering conditions. © 2005 IEEE.

  12. Data analysis method for evaluating dialogic learning.

    PubMed

    Janhonen, S; Sarja, A

    2000-02-01

    The purpose of this paper is to introduce a new method of analysing and evaluating dialogic learning. Dialogic learning offers possibilities that have not previously been found in nursing or nursing education, although some nursing researchers have lately become interested in dialogic nursing interaction between nurses and patients. The stages of analysis of dialogic learning have been illustrated by using an example. The data for this illustration were collected by video-taping a planning process where students for a Master's degree (qualifying them to be nursing instructors in Finland) plan, implement and evaluate a course for nursing students, on the care of terminally ill patients. However, it is possible to use this method of analysis for other dialogic learning situations both in nursing practice (for example, collaborative meetings between experts and patients) and in nursing education (for example, collaborative learning situations). The focus of this method of analysis concentrates on various situations where participants in interaction see the object of discussion from various points of view. This method of analysis helps the participants in the interaction to develop their interactional skills both through an awareness of their own views, and through understanding the other participants' various views in a particular nursing situation.

  13. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials.

  14. In need of combined topography and bathymetry DEM

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Hilde, T.

    2003-04-01

    In many geoscience applications, digital elevation models (DEMs) are now more commonly used at different scales and greater resolution due to the great advancement in computer technology. Increasing the accuracy/resolution of the model and the coverage of the terrain (global model) has been the goal of users as mapping technology has improved and computers get faster and cheaper. The ETOPO5 (5 arc minutes spatial resolution land and seafloor model), initially developed in 1988 by Margo Edwards, then at Washington University, St. Louis, MO, was for a long time the only global terrain model, and it is now being replaced by three new topographic and bathymetric DEMs: the ETOPO2 (2 arc minutes spatial resolution land and seafloor model), the GTOPO30 land model with a spatial resolution of 30 arc seconds (ca. 1 km at the equator) and the 'GEBCO 1-MINUTE GLOBAL BATHYMETRIC GRID' ocean floor model with a spatial resolution of 1 arc minute (ca. 2 km at the equator). These DEMs are products of projects through which existing and/or new datasets were compiled and reprocessed to meet users' new requirements. These ongoing efforts are valuable, and support should be continued to refine and update these DEMs. On the other hand, a different approach to creating a global bathymetric (seafloor) database exists. A method to estimate seafloor topography from satellite altimetry combined with existing conventional ship sounding data was devised, and a global seafloor database was created and made public by W.H. Smith and D.T. Sandwell in 1997. The big advantage of this database is the uniformity of coverage, i.e. there is no large area where depths are missing. It has a spatial resolution of 2 arc minutes. Another important effort is the creation of regional, rather than global, seafloor databases with much finer resolutions in many countries. The Japan Hydrographic Department compiled and released a 500 m-grid topography database around Japan, J-EGG500, in 1999

  15. Volcanic geomorphology using TanDEM-X

    NASA Astrophysics Data System (ADS)

    Poland, Michael; Kubanek, Julia

    2016-04-01

    Topography is perhaps the most fundamental dataset for any volcano, yet is surprisingly difficult to collect, especially during the course of an eruption. For example, photogrammetry and lidar are time-intensive and often expensive, and they cannot be employed when the surface is obscured by clouds. Ground-based surveys can operate in poor weather but have poor spatial resolution and may expose personnel to hazardous conditions. Repeat passes of synthetic aperture radar (SAR) data provide excellent spatial resolution, but topography in areas of surface change (from vegetation swaying in the wind to physical changes in the landscape) between radar passes cannot be imaged. The German Space Agency's TanDEM-X satellite system, however, solves this issue by simultaneously acquiring SAR data of the surface using a pair of orbiting satellites, thereby removing temporal change as a complicating factor in SAR-based topographic mapping. TanDEM-X measurements have demonstrated exceptional value in mapping the topography of volcanic environments in as-yet limited applications. The data provide excellent resolution (down to ~3-m pixel size) and are useful for updating topographic data at volcanoes where surface change has occurred since the most recent topographic dataset was collected. Such data can be used for applications ranging from correcting radar interferograms for topography, to modeling flow pathways in support of hazards mitigation. The most valuable contributions, however, relate to calculating volume changes related to eruptive activity. For example, limited datasets have provided critical measurements of lava dome growth and collapse at volcanoes including Merapi (Indonesia), Colima (Mexico), and Soufriere Hills (Montserrat), and of basaltic lava flow emplacement at Tolbachik (Kamchatka), Etna (Italy), and Kīlauea (Hawai`i). With topographic data spanning an eruption, it is possible to calculate eruption rates - information that might not otherwise be available

  16. Evaluation of Alternate Surface Passivation Methods (U)

    SciTech Connect

    Clark, E

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of an initially 50% H₂ / 50% D₂ gas with time using mass spectroscopy. Commercial passivation by electropolishing appears to result in surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service, and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may not result in surfaces that do not catalyze the isotope exchange reaction H₂ + D₂ → 2HD. These methods should not be considered to replace the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  17. Cryptosporidiosis: multiattribute evaluation of six diagnostic methods.

    PubMed Central

    MacPherson, D W; McQueen, R

    1993-01-01

    Six diagnostic methods (Giemsa staining, Ziehl-Neelsen staining, auramine-rhodamine staining, Sheather's sugar flotation, an indirect immunofluorescence procedure, and a modified concentration-sugar flotation method) for the detection of Cryptosporidium spp. in stool specimens were compared on the following attributes: diagnostic yield, cost to perform each test, ease of handling, and ability to process large numbers of specimens for screening purposes by batching. A rank ordering from least desirable to most desirable was then established for each method by using the study attributes. The process of decision analysis with respect to the laboratory diagnosis of cryptosporidiosis is discussed through the application of multiattribute utility theory to the rank ordering of the study criteria. Within a specific health care setting, a diagnostic facility will be able to calculate its own utility scores for our study attributes. Multiattribute evaluation and analysis are potentially powerful tools in the allocation of resources in the laboratory. PMID:8432802
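
    A weighted additive utility ranking of the kind used in multiattribute evaluation can be sketched as follows; the methods listed, their attribute scores and the weights are purely illustrative, not the study's data:

```python
# Multiattribute ranking sketch: each method gets a 0-1 score per attribute,
# attributes get weights summing to 1, and methods are ranked by weighted sum.
methods = {
    # attributes: diagnostic yield, cost (higher = cheaper), ease of handling, batching
    "Giemsa":             [0.2, 0.9, 0.8, 0.6],
    "Ziehl-Neelsen":      [0.6, 0.8, 0.7, 0.6],
    "Auramine-rhodamine": [0.8, 0.6, 0.6, 0.8],
    "Immunofluorescence": [0.9, 0.3, 0.5, 0.7],
}
weights = [0.4, 0.3, 0.15, 0.15]   # illustrative weights chosen by the laboratory

utilities = {name: sum(w * s for w, s in zip(weights, scores))
             for name, scores in methods.items()}
for name, u in sorted(utilities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:20s} {u:.2f}")
```

    In practice each laboratory would substitute its own scores and weights, which is the point the authors make about calculating site-specific utilities.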

  18. A microbiological evaluation of hospital cleaning methods.

    PubMed

    White, Liza F; Dancer, Stephanie J; Robertson, Chris

    2007-08-01

    Hospital hygiene may be associated with hospital-acquired infection. This study evaluated four hospital cleaning methods: 'mop and vacuum', 'spray clean' and 'wet scrub' for floors, and one steam cleaning method for curtains. A standardised microbiological screening method was used to sample the environment before and after cleaning in order to quantify total viable counts as well as identify specific organisms. The results showed that all floor cleaning methods reduced the overall microbial load, although high counts and bacterial pathogens occasionally persisted despite cleaning. Spray cleaning gave marginally better results than traditional mopping and vacuuming. Wet scrubbing significantly reduced levels of coagulase-positive staphylococci (p = 0.03), which, in combination with routine methods, produced an effect that persisted for at least a week. Steam cleaning of curtains also reduced microbial counts (p = 0.08), but had little effect on Staphylococcus aureus and other potential pathogens. These results might help managers assess the costs of different cleaning methods against potential infection control benefits in a hospital.

  19. Evaluation method of indoor GPS measurement network

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhou, Zili; Ma, Liqun; Li, Qian

    2013-01-01

    An indoor GPS measurement network is a spatial coordinate measuring system composed of more than one transmitter. The number and location of the transmitters determine the measurement range and accuracy of the network, so how to evaluate the measurement network correctly is a key issue. By analyzing the error model of a measuring system composed of two transmitters, we identified the main causes of measurement uncertainty. Through MATLAB simulation, we obtained the effective measurement conditions required to meet a specified measurement uncertainty. The total uncertainty of the measurement network, which comprises measurement uncertainty, location uncertainty, receiver uncertainty and other contributions, is also analyzed. We propose an evaluation method based on a reference length and, at the same time, optimize the position, posture and length of the reference to meet the evaluation requirements of the entire measurement space. Finally, we simulated the measurement network for aircraft assembly in a measurement space of 20 m × 20 m × 5 m, and the measurement network for car assembly in a measurement space of 5 m × 5 m × 2 m. We evaluated the measurement networks according to the above principles and estimated their uncertainty through the measurement bias of the reference length at different locations.
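
    The reference-length idea can be illustrated with a toy simulation: a bar of known length is 'measured' at many poses in the volume and the length bias is summarised (synthetic noise model, not the paper's MATLAB simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
reference_length = 1.000          # metres, calibrated bar

def measure_pair(noise_std=0.0002):
    """Simulate measuring both ends of the reference bar at a random pose."""
    p1 = rng.uniform(0, 5, size=3)                 # random position in the volume
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    p2 = p1 + reference_length * direction
    # Each coordinate measurement carries assumed network noise.
    return p1 + rng.normal(0, noise_std, 3), p2 + rng.normal(0, noise_std, 3)

biases = []
for _ in range(200):                               # bar placed at 200 poses
    m1, m2 = measure_pair()
    biases.append(np.linalg.norm(m2 - m1) - reference_length)

biases = np.array(biases)
print(f"mean bias {biases.mean()*1e3:.3f} mm, std {biases.std()*1e3:.3f} mm")
```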

  20. Quality assessment of Digital Elevation Model (DEM) in view of the Altiplano hydrological modeling

    NASA Astrophysics Data System (ADS)

    Satgé, F.; Arsen, A.; Bonnet, M.; Timouk, F.; Calmant, S.; Pilco, R.; Molina, J.; Lavado, W.; Crétaux, J.; HASM

    2013-05-01

    Topography is a crucial input for hydrological modeling, but in many regions of the world the only way to characterize topography is to use satellite-based Digital Elevation Models (DEMs). In some regions the quality of these DEMs remains poor and induces modeling errors that may or may not be compensated by model parameter tuning. In such regions, evaluating these data uncertainties is an important step in the modeling procedure. In this study, which focuses on the Altiplano region, we present the evaluation of the two freely available DEMs. The Shuttle Radar Topography Mission (SRTM) DEM, a product of the National Aeronautics and Space Administration (NASA), and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), provided by the Ministry of Economy, Trade and Industry of Japan (METI) in collaboration with NASA, are both widely used. While the former has a resolution of 3 arc seconds (90 m), the latter has a resolution of 1 arc second (30 m). In order to select the most reliable DEM, we compared the DEM elevations with high-quality control point elevations. Because of its large spatial coverage (tracks spaced 30 km apart with a measurement every 172 m) and its high vertical accuracy, better than 15 cm in good weather conditions, the Geoscience Laser Altimeter System (GLAS) on board NASA's Ice, Cloud and land Elevation Satellite (ICESat) represents the best solution for establishing a high-quality elevation database. After a quality check, more than 150,000 ICESat/GLAS measurements are suitable in terms of accuracy for the Altiplano watershed. This database has been used to evaluate the vertical accuracy of each DEM. Given the full spatial coverage, the comparison was carried out for all land cover types, altitude ranges and mean slopes.
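
    The DEM-versus-ICESat comparison reduces to differencing and summary statistics; a minimal sketch, assuming the GLAS points have already been sampled onto the DEM and tagged with a class label (function name and numbers are illustrative only):

```python
import numpy as np

def vertical_accuracy(dem_elev, icesat_elev, classes=None):
    """Bias, standard deviation and RMSE of DEM minus ICESat elevations,
    optionally broken down by a per-point class label (land cover, slope band, ...)."""
    diff = np.asarray(dem_elev) - np.asarray(icesat_elev)
    def stats(d):
        return d.mean(), d.std(ddof=1), np.sqrt(np.mean(d ** 2))
    if classes is None:
        return stats(diff)
    classes = np.asarray(classes)
    return {c: stats(diff[classes == c]) for c in np.unique(classes)}

# Toy example with three fake GLAS points per land cover class.
dem       = [3810.2, 3821.5, 3975.0, 4120.3, 4300.1, 4455.7]
icesat    = [3808.0, 3819.9, 3971.2, 4118.8, 4296.5, 4452.0]
landcover = ["bare", "bare", "bare", "shrub", "shrub", "shrub"]
for cls, (bias, std, rmse) in vertical_accuracy(dem, icesat, landcover).items():
    print(f"{cls}: bias={bias:.2f} m, std={std:.2f} m, RMSE={rmse:.2f} m")
```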

  1. Optimizing grid computing configuration and scheduling for geospatial analysis: An example with interpolating DEM

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei

    2011-02-01

    Many geographic analyses are very time-consuming and do not scale well when large datasets are involved. For example, the interpolation of DEMs (digital elevation models) for large geographic areas can become a problem in practical applications, especially for web applications such as terrain visualization, where a fast response is required and computational demands exceed the capacity of a traditional single processing unit conducting serial processing. Therefore, high-performance and parallel computing approaches, such as grid computing, were investigated to speed up geographic analysis algorithms such as DEM interpolation. The key for grid computing is to configure an optimized grid computing platform for the geospatial analysis and to optimally schedule the geospatial tasks within the grid platform; however, little research has focused on this. Using DEM interpolation as an example, we report our systematic research on configuring and scheduling a high-performance grid computing platform to improve the performance of geographic analyses, through a systematic study of how the number of cores, processors and grid nodes, different network connections, and concurrent requests impact the speedup of geospatial analyses. Condor, a grid middleware, is used to schedule the DEM interpolation tasks for different grid configurations. A Kansas raster-based DEM is used for a case study, and an inverse distance weighting (IDW) algorithm is used in the interpolation experiments.
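
    The IDW step itself is straightforward to express; a serial sketch for illustration (the paper's contribution is how such tasks are configured and scheduled on the grid platform, which is not shown here):

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered elevations."""
    sample_xy = np.asarray(sample_xy, float)
    sample_z = np.asarray(sample_z, float)
    query_xy = np.asarray(query_xy, float)
    # Pairwise distances between query points and sample points.
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps guards against division by zero
    return (w * sample_z).sum(axis=1) / w.sum(axis=1)

# Interpolate a tiny grid from four spot heights.
samples = [(0, 0), (0, 10), (10, 0), (10, 10)]
heights = [100.0, 110.0, 105.0, 120.0]
grid = [(x, y) for y in range(0, 11, 5) for x in range(0, 11, 5)]
print(idw(samples, heights, grid).reshape(3, 3))
```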

  2. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
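
    The idea of simulating a 'truth' layer and a perturbed copy to score a matcher can be sketched with point features and a nearest-neighbour matcher standing in for real centreline features and conflation tools (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# 'Truth' layer: 50 point features; the second layer is the same features
# displaced by Gaussian positional uncertainty.
truth = rng.uniform(0, 1000, size=(50, 2))
sigma = 5.0                                        # metres of positional noise
perturbed = truth + rng.normal(0, sigma, size=truth.shape)

def nearest_neighbour_match(layer_a, layer_b, tolerance):
    """Match each feature in A to its nearest feature in B within a tolerance."""
    d = np.linalg.norm(layer_a[:, None, :] - layer_b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return [(i, j) for i, j in enumerate(nearest) if d[i, j] <= tolerance]

# Because perturbed[i] was generated from truth[i], a match is correct when i == j.
matches = nearest_neighbour_match(truth, perturbed, tolerance=3 * sigma)
correct = sum(i == j for i, j in matches)
print(f"{correct}/{len(truth)} features matched correctly")
```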

  3. Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods, developed for fixed-wing unprotected-surface icing applications, apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, there is no single scaling method that has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method with the scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 μm, and stagnation-point freezing fractions of 0.3 and 0.5, at angles of attack of 0° and 5°. It was shown that good ice shape scaling was achieved for NACA 0012 airfoils at angles of attack up to 5°.

  4. Integration of SAR and DEM data: Geometrical considerations

    NASA Technical Reports Server (NTRS)

    Kropatsch, Walter G.

    1991-01-01

    General principles for integrating data from different sources are derived from experience with the registration of SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally needs the parameters of the SAR sensor's flight path and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.

  5. A Near-Global Bare-Earth DEM from SRTM

    NASA Astrophysics Data System (ADS)

    Gallant, J. C.; Read, A. M.

    2016-06-01

    The near-global elevation product from NASA's Shuttle Radar Topographic Mission (SRTM) has been widely used since its release in 2005 at 3 arcsecond resolution and the release of the 1 arcsecond version in late 2014 means that the full potential of the SRTM DEM can now be realised. However the routine use of SRTM for analytical purposes such as catchment hydrology, flood inundation, habitat mapping and soil mapping is still seriously impeded by the presence of artefacts in the data, primarily the offsets due to tree cover and the random noise. This paper describes the algorithms being developed to remove those offsets, based on the methods developed to produce the Australian national elevation model from SRTM data. The offsets due to trees are estimated using the GlobeLand30 (National Geomatics Center of China) and Global Forest Change (University of Maryland) products derived from Landsat, along with the ALOS PALSAR radar image data (JAXA) and the global forest canopy height map (NASA). The offsets are estimated using several processes and combined to produce a single continuous tree offset layer that is subtracted from the SRTM data. The DEM products will be made freely available on completion of the first draft product, and the assessment of that product is expected to drive further improvements to the methods.

  6. [Cost evaluation of two induced labor methods].

    PubMed

    Torres Magret, E; Sánchez Batista, R; Ramírez Pellicer, A M; Deulofeu Betancourt, A I

    1997-01-01

    Two induced labor methods, venoclysis with oxytocin and self-stimulation of the nipples, were comparatively evaluated in two groups of pregnant women (80) admitted to the Eastern Gyneco-obstetric Teaching Hospital in Santiago de Cuba during the first semester of 1993. The following variables were calculated: drug intake, material expenses, length of stay, and quality. A questionnaire was used to collect them. Percentages and chi-square tests were applied to these data, which are presented in tables. Self-stimulation of the nipples proved to be the most economical method as regards the saving of expendable material and drugs. Hospital stay and the perinatal results connected with the type of labor and the status of the newborn were similar with both methods.

  7. Economic methods for multipollutant analysis and evaluation

    SciTech Connect

    Baasel, W.D.

    1985-01-01

    Since 1572, when miners' lung problems were first linked to dust, man's industrial activity has increasingly been accused of causing disease in man and harm to the environment. Since that time, each compound or stream thought to be damaging has been looked at independently. If a gas stream caused the problem, the offending compound concentrations were reduced to an acceptable level and the problem was considered solved. What happened to substances after they were removed usually was not fully considered until the finding of an adverse effect required it. Until 1970, one usual way of getting rid of many toxic wastes was to place them in landfills and forget about them. The discovery of sickness caused by substances escaping from the Love Canal landfill has caused a total rethinking of that procedure. This and other incidents clearly showed that taking a substance out of one stream which is discharged to the environment and placing it in another may not be an adequate solution. What must be done is to look at all streams leaving an industrial plant and devise a way to reduce the potentially harmful emissions in those streams to an acceptable level, using methods that are inexpensive. To illustrate conceptually how the environmental assessment approach is a vast improvement over current methods, an example evaluating effluents from a coal-fired 500 MW power plant is presented. Initially, only one substance in one stream is evaluated: the sulfur oxide leaving in the flue gas.

  8. Methods and Metrics for Evaluating Environmental Dredging ...

    EPA Pesticide Factsheets

    This report documents the objectives, approach, methodologies, results, and interpretation of a collaborative research study conducted by the National Risk Management Research Laboratory (NRMRL) and the National Exposure Research Laboratory (NERL) of the U.S. Environmental Protection Agency’s (U.S. EPA’s) Office of Research and Development (ORD) and the U.S. EPA’s Great Lakes National Program Office (GLNPO). The objectives of the research study were to: 1) evaluate remedy effectiveness of environmental dredging as applied to contaminated sediments in the Ashtabula River in northeastern Ohio, and 2) monitor the recovery of the surrounding ecosystem. The project was carried out over 6 years from 2006 through 2011 and consisted of the development and evaluation of methods and approaches to assess river and ecosystem conditions prior to dredging (2006), during dredging (2006 and 2007), and following dredging, both short term (2008) and long term (2009-2011). This project report summarizes and interprets the results of this 6-year study to develop and assess methods for monitoring pollutant fate and transport and ecosystem recovery through the use of biological, chemical, and physical lines of evidence (LOEs) such as: 1) comprehensive sampling of and chemical analysis of contaminants in surface, suspended, and historic sediments; 2) extensive grab and multi-level real time water sampling and analysis of contaminants in the water column; 3) sampling, chemi

  9. Shape and Albedo from Shading (SAfS) for Pixel-Level DEM Generation from Monocular Images Constrained by Low-Resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which the photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this is even more plausible for the lunar surface, whose albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the area with a known light source; at the same time, we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO

  10. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Ye, A.; Xu, J.; Ma, F.; Deng, X.; Miao, C.; Gong, W.; Di, Z.

    2014-07-01

    A high-resolution, high-accuracy drainage network map is a prerequisite for simulating the water cycle in land surface hydrological models. The objective of this study was to develop a new automated drainage network extraction model that can produce a high-precision, continuous drainage network from a high-resolution DEM (Digital Elevation Model). Extracting a drainage network from a high-resolution DEM demands substantial computing resources, and conventional GIS methods often cannot complete the calculation on high-resolution DEMs of large basins because the number of grid cells is too large. In order to decrease the computation time, an advanced distributed automated drainage network extraction model (Adam) is proposed in this study. The Adam model has two features: (1) it searches upward from the basin outlet instead of performing sink filling, and (2) it divides sub-basins on a low-resolution DEM and then extracts the drainage network on the high-resolution DEM of each sub-basin. The case study used elevation data from the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show that the Adam model can dramatically reduce computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on Shuttle Elevation Derivatives at multiple Scales).
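
    The 'search upward from the outlet' idea can be illustrated with a plain D8 scheme on a tiny grid (a generic sketch, not the Adam implementation):

```python
import numpy as np
from collections import deque

# Eight D8 neighbours (row, col offsets).
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downstream(dem):
    """For each cell, the neighbour of steepest descent (or None in a pit)."""
    rows, cols = dem.shape
    down = {}
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr, dc in D8:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best, best_drop = (rr, cc), drop
            down[(r, c)] = best
    return down

def upstream_cells(dem, outlet):
    """Breadth-first search from the outlet, following flow directions upstream."""
    down = d8_downstream(dem)
    basin, queue = {outlet}, deque([outlet])
    while queue:
        cell = queue.popleft()
        for other, target in down.items():
            if target == cell and other not in basin:
                basin.add(other)
                queue.append(other)
    return basin

dem = np.array([[5., 4., 5.],
                [4., 3., 4.],
                [5., 2., 5.]])
print(sorted(upstream_cells(dem, outlet=(2, 1))))
```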

  11. Simulation of triaxial response of granular materials by modified DEM

    NASA Astrophysics Data System (ADS)

    Wang, XiaoLiang; Li, JiaChun

    2014-12-01

    A modified discrete element method (DEM) with rolling effects taken into consideration is developed to examine the macroscopic behavior of granular materials in this study. Dimensional analysis is first performed to establish the relationship between macroscopic mechanical behavior, mesoscale contact parameters at the particle level, and the external loading rate. It is found that only four dimensionless parameters govern the macroscopic mechanical behavior in bulk. A numerical triaxial apparatus was used to study their influence on the mechanical behavior of granular materials. The parametric study indicates that Poisson's ratio varies only with the stiffness ratio, while Young's modulus is proportional to the contact modulus and grows with the stiffness ratio, both of which agree with the micromechanical model. The peak friction angle depends on both the inter-particle friction angle and the rolling resistance. The dilatancy angle relies on the inter-particle friction angle if the rolling stiffness coefficient is sufficiently large. Finally, we recommend a calibration procedure for cohesionless soil, which was then applied to the simulation of Chende sand using a series of triaxial compression tests. The responses of the DEM model are in quantitative agreement with experiments. In addition, the stress-strain response in triaxial extension was also obtained from numerical triaxial extension tests.

  12. DEM modeling of flexible structures against granular material avalanches

    NASA Astrophysics Data System (ADS)

    Lambert, Stéphane; Albaba, Adel; Nicot, François; Chareyre, Bruno

    2016-04-01

    This article presents the numerical modeling of flexible structures intended to contain avalanches of granular and coarse material (e.g. a rock slide or a debris slide). The numerical model is based on a discrete element method (YADE-DEM). The DEM modeling of both the flowing granular material and the flexible structure is detailed before presenting some results. The flowing material consists of a dry polydisperse granular material accounting for the non-sphericity of real materials. The flexible structure consists of a metallic net hung on main cables, connected to the ground via anchors on both sides of the channel, and including dissipators. All these components were modeled as flexible beams or wires, with mechanical parameters defined from literature data. The simulation results are presented with the aim of investigating the variability of the structure response depending on different parameters related to the structure (inclination of the fence, with/without brakes, mesh size opening) but also to the channel (inclination). Results are then compared with existing recommendations in similar fields.

  13. DEM, tide and velocity over sulzberger ice shelf, West Antarctica

    USGS Publications Warehouse

    Baek, S.; Shum, C.K.; Lee, H.; Yi, Y.; Kwoun, Oh-Ig; Lu, Zhiming; Braun, Andreas

    2005-01-01

    Arctic and Antarctic ice sheets preserve more than 77% of the global fresh water and could raise global sea level by several meters if completely melted. Ocean tides near and under ice shelves shift the grounding line position significantly and are one of the current limitations in studying glacier dynamics and mass balance. The Sulzberger ice shelf is an area of ice mass flux change in West Antarctica and has not yet been well studied. In this study, we use repeat-pass synthetic aperture radar (SAR) interferometry data from the ERS-1 and ERS-2 tandem missions to generate a high-resolution (60-m) Digital Elevation Model (DEM), including tidal deformation detection and ice stream velocity of the Sulzberger Ice Shelf. Other satellite data, such as laser altimeter measurements with fine footprints (70-m) from NASA's ICESat, are used for validation and analyses. The resulting DEM has an accuracy of -0.57 ± 5.88 m and is demonstrated to be useful for grounding line detection and ice mass balance studies. The deformation observed by InSAR is found to be primarily due to ocean tides and atmospheric pressure. The 2-D ice stream velocities computed agree qualitatively with previous methods on part of the Ice Shelf from passive microwave remote-sensing data (i.e., LANDSAT). © 2005 IEEE.

  14. Co-seismic landslide topographic analysis based on multi-temporal DEM-A case study of the Wenchuan earthquake.

    PubMed

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2013-01-01

    Hillslope instability has been thought to be one of the most important factors for landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data and shaking intensity analysis to evaluate the topographic characteristics of the landslide areas. Geomorphologic measures such as roughness and slope aspect are as useful as slope analysis. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness >1.2, hillslope >30°, and slope aspect between 90° and 270°. The intersection of the regions identified by these three criteria is more accurate than the result of any single topographic analysis method. The ground motion data indicate that the co-seismic landslides mainly occurred on the hanging wall side of the Longmen Shan Thrust Belt, within the vertical and horizontal peak ground acceleration (PGA) contours of 150 gal and 200 gal, respectively. The comparison of pre- and post-earthquake DEM data indicates that areas of medium roughness and slope increased, while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspect, however, hardly changed. Our results indicate that co-seismic landslides mainly occurred in specific regions of high roughness, on southward-facing and steep slopes, under strong ground motion. Co-seismic landslides significantly modified the local topography, especially the hillslope and roughness. The roughest relief and steepest slopes were significantly smoothed, whereas regions of medium relief and slope became rougher and steeper, respectively.
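
    A minimal sketch of how the three topographic criteria reported above could be intersected on gridded data. The roughness, slope, and aspect arrays are assumed to have been derived from the DEM beforehand; the thresholds simply restate the values in the abstract.

    ```python
    import numpy as np

    def landslide_prone_mask(roughness, slope_deg, aspect_deg):
        """Intersect the three single-criterion masks reported in the study."""
        rough_mask = roughness > 1.2                                # terrain roughness criterion
        slope_mask = slope_deg > 30.0                               # hillslope gradient criterion (degrees)
        aspect_mask = (aspect_deg >= 90.0) & (aspect_deg <= 270.0)  # southward-facing aspects
        return rough_mask & slope_mask & aspect_mask
    ```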

  15. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel N.; Kok, Schalk

    Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. Its high performance is attributed to the lightweight design and the Single Instruction Multiple Data (SIMD) parallelism that the GPU architecture offers. Blaze-DEMGPU offers suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high-performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.

  16. GPS-Based Precision Baseline Reconstruction for the TanDEM-X SAR-Formation

    NASA Technical Reports Server (NTRS)

    Montenbruck, O.; vanBarneveld, P. W. L.; Yoon, Y.; Visser, P. N. A. M.

    2007-01-01

    The TanDEM-X formation employs two separate spacecraft to collect interferometric Synthetic Aperture Radar (SAR) measurements over baselines of about 1 km. These will allow the generation of a global Digital Elevation Model (DEM) with a relative vertical accuracy of 2-4 m and a 10 m ground resolution. As part of the ground processing, the separation of the SAR antennas at the time of each data take must be reconstructed with 1 mm accuracy using measurements from two geodetic-grade GPS receivers. The paper discusses the TanDEM-X mission as well as the methods employed for determining the interferometric baseline with utmost precision. Measurements collected during the close fly-by of the two GRACE satellites serve as a reference case to illustrate the processing concept, expected accuracy, and quality control strategies.

  17. Methods for incomplete Bessel function evaluation

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.; Fripiat, J. G.

    Presented here are detailed methods for evaluating the incomplete Bessel functions arising when Gaussian-type orbitals are used for systems periodic in one spatial dimension. The scheme is designed to yield these incomplete Bessel functions with an absolute accuracy of ±1 × 10⁻¹⁰, for the range of integer orders 0 ≤ n ≤ 12 [a range sufficient for a basis whose members have angular momenta of up to three units (s, p, d, or f atomic functions)]. To reach this accuracy level within acceptable computation times, new rational approximations were developed to compute the special functions involved, namely, the exponential integral E₁(x) and the modified Bessel functions K₀(x) and K₁(x), to absolute accuracy ±1 × 10⁻¹⁵.
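
    The auxiliary special functions named in the abstract are available in standard libraries, which is a convenient way to sanity-check any purpose-built rational approximation. A minimal sketch using SciPy; the paper's own approximations are not reproduced here.

    ```python
    import numpy as np
    from scipy.special import exp1, k0, k1

    x = np.array([0.5, 1.0, 2.0, 5.0])
    print("E1(x) =", exp1(x))   # exponential integral E1
    print("K0(x) =", k0(x))     # modified Bessel function of the second kind, order 0
    print("K1(x) =", k1(x))     # modified Bessel function of the second kind, order 1
    ```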

  18. Modified risk evaluation method. Revision 1

    SciTech Connect

    Udell, C.J.; Tilden, J.A.; Toyooka, R.T.

    1993-08-01

    The purpose of this paper is to provide a structured and cost-oriented process to determine risks associated with nuclear material and other security interests. Financial loss is a continuing concern for US Department of Energy contractors. In this paper risk is equated with uncertainty of cost impacts to material assets or human resources. The concept provides a method for assessing the effectiveness of an integrated protection system, which includes operations, safety, emergency preparedness, and safeguards and security. The concept is suitable for application to sabotage evaluations. The protection of assets is based on risk associated with cost impacts to assets and the potential for undesirable events. This will allow managers to establish protection priorities in terms of the cost and the potential for the event, given the current level of protection.

  19. Precise Global DEM Generation by ALOS PRISM

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Ishida, H.; Oda, F.; Naito, S.; Minakawa, K.; Iwamoto, H.

    2014-04-01

    The Japan Aerospace Exploration Agency (JAXA) generated a global digital elevation/surface model (DEM/DSM) and orthorectified images (ORIs) using the archived data of the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) onboard the Advanced Land Observing Satellite (ALOS, nicknamed "Daichi"), which was operated from 2006 to 2011. PRISM consisted of three panchromatic radiometers that acquired along-track stereo images. It had a spatial resolution of 2.5 m in the nadir-looking radiometer and achieved global coverage, making it a suitable candidate for precise global DSM and ORI generation. Over the past 10 years or so, JAXA has conducted the calibration of the system-corrected standard products of PRISM in order to improve absolute accuracies as well as to validate the high-level products such as the DSM and ORI. In this paper, we introduce an overview of the global DEM/DSM dataset generation project, including a summary of ALOS and PRISM, in addition to the global data archive status. It is also necessary to consider data processing strategies, since the processing capabilities for the level 1 standard product and the high-level products must be developed in terms of both hardware and software to achieve the project aims. The automatic DSM/ORI processing software and its test processing results are also described.

  20. Indicators and Methods for Evaluating Economic, Ecosystem ...

    EPA Pesticide Factsheets

    The U.S. Human Well-being Index (HWBI) is a composite measure that incorporates economic, environmental, and societal well-being elements through the eight domains of connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion (USEPA 2012a; Smith et al. 2013). Twenty-eight services, represented by a collection of indicators and metrics, have been identified as influencing these domains of human well-being. By taking an inventory of stocks or measuring the results of a service, a relationship function can be derived to understand how changes in the provisioning of that service can influence the HWBI. An extensive review of existing services was performed to identify current services, indicators and metrics in use. This report describes the indicators and methods we have selected to evaluate the provisioning of economic, ecosystem, and social services related to human well-being, and provides metadata and methods for calculating service-provisioning scores for the HWBI modeling framework.

  1. Laboratory evaluation of PCBs encapsulation method ...

    EPA Pesticide Factsheets

    The effectiveness and limitations of the encapsulation method for reducing polychlorinated biphenyl (PCB) concentrations in indoor air and on contaminated surfaces have been evaluated in this laboratory study. Ten coating materials, such as epoxy and polyurethane coatings, latex paint, and petroleum-based paint, were tested in small environmental chambers to rank the encapsulants by their resistance to PCB sorption and to estimate the key parameters required by a barrier model. Wipe samples were collected from a PCB-contaminated surface encapsulated with the coating materials to rank the encapsulants by their resistance to PCB migration from the source. A barrier model was used to calculate the PCB concentrations in the sources and the encapsulant layers, and at the exposed surfaces of the encapsulant and in the room air at different times. The performance of the encapsulants was ranked by those concentrations and PCB percent reductions. Overall, the three epoxy coatings performed better than the other coatings. Both the experimental results and the mathematical modeling showed that selecting proper encapsulants can effectively reduce the PCB concentrations at the exposed surfaces. The encapsulation method is most effective for contaminated surfaces that contain low levels of PCBs. This study addresses these questions by using a combination of laboratory testing and mathematical modeling. The results should be useful to mitigation engineers, building owners and managers

  2. Evaluation of methods to assess physical activity

    NASA Astrophysics Data System (ADS)

    Leenders, Nicole Y. J. M.

    Epidemiological evidence has accumulated demonstrating that the amount of physical-activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions should be determined to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D (TT)), a uniaxial activity monitor (Computer Science and Applications Inc. (CSA)), a Yamax Digiwalker-500 step counter (YX-stepcounter), heart rate responses (HR method), and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the "criterion method" of DLW during a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, the HR method and the TT, respectively. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared to 47% and 67% for the TT and the YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg⁻¹ is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg⁻¹ was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of variance in DLW-PAEE·kg⁻¹ is explained by the number of steps taken per day, because of its low cost and ease of use, the Yamax stepcounter is useful in studies promoting daily walking. Thus, studies involving the

  3. An Investigation of Transgressive Deposits in Late Pleistocene Lake Bonneville using GPR and UAV-produced DEMs.

    NASA Astrophysics Data System (ADS)

    Schide, K.; Jewell, P. W.; Oviatt, C. G.; Jol, H. M.

    2015-12-01

    Lake Bonneville was the largest of the Pleistocene pluvial lakes that once filled the Great Basin of the interior western United States. Its two most prominent shorelines, Bonneville and Provo, are well documented but many of the lake's intermediate shoreline features have yet to be studied. These transgressive barriers and embankments mark short-term changes in the regional water budget and thus represent a proxy for local climate change. The internal and external structures of these features are analyzed using the following methods: ground penetrating radar, 5 meter auto-correlated DEMs, 1-meter DEMs generated from LiDAR, high-accuracy handheld GPS, and 3D imagery collected with an unmanned aerial vehicle. These methods in mapping, surveying, and imaging provide a quantitative analysis of regional sediment availability, transportation, and deposition as well as changes in wave and wind energy. These controls help define climate thresholds and rates of landscape evolution in the Great Basin during the Pleistocene that are then evaluated in the context of global climate change.

  4. International genomic evaluation methods for dairy cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background Genomic evaluations are rapidly replacing traditional evaluation systems used for dairy cattle selection. Economies of scale in genomics promote cooperation across country borders. Genomic information can be transferred across countries using simple conversion equations, by modifying mult...

  5. Multidisciplinary eHealth Survey Evaluation Methods

    ERIC Educational Resources Information Center

    Karras, Bryant T.; Tufano, James T.

    2006-01-01

    This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…

  6. Democratizing Evaluation: Meanings and Methods from Practice.

    ERIC Educational Resources Information Center

    Ryan, Katherine E.; Johnson, Trav D.

    2000-01-01

    Uses the results of an instrumental case study to identify issues connected to evaluation participation and its representation and the role of the internal evaluator in democratic, deliberative evaluation. Identified direct participation and participation by representation, sanctioned or unsanctioned representation, and extrinsic and intrinsic…

  7. Zusatz- und Weiterqualifikation nach dem Studium

    NASA Astrophysics Data System (ADS)

    Domnick, Ivonne

    Once the Bachelor's degree is completed, the question of further qualification arises. Besides entering professional life directly, a Master's programme may add further decisive bonus points to the CV. With additional qualifications from subjects outside one's own field, such as business administration or marketing, it is easier for natural scientists to start a career. Many employers particularly like to see a doctorate in natural scientists; here it should be weighed carefully whether it can be completed within a given time frame. Even after starting a job, the doctorate can, under certain circumstances, still be obtained later. Likewise, continuing education alongside the job is possible part-time or via a distance-learning course. In addition, many private providers offer courses lasting several weeks or months in which basic business administration skills can be acquired.

  8. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key problematic aspects for the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the process for input parameters selection and calibration can help in the implementation of DEM.

  9. Defining optimal DEM resolutions and point densities for modelling hydrologically sensitive areas in agricultural catchments dominated by microtopography

    NASA Astrophysics Data System (ADS)

    Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.

    2017-02-01

    Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
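
    A minimal sketch of the Topographic Wetness Index used for the HSA maps, TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope. The flow-accumulation and slope grids are assumed to have been derived from the LiDAR DEM already, and the percentile thresholding mirrors the 5-15%-of-catchment approach described above; the helper names are illustrative.

    ```python
    import numpy as np

    def topographic_wetness_index(flow_acc_cells, slope_rad, cell_size):
        """TWI = ln(a / tan(beta)) on a regular grid.

        flow_acc_cells : upslope contributing area in number of cells
        slope_rad      : local slope in radians
        cell_size      : grid resolution in metres
        """
        a = (flow_acc_cells + 1) * cell_size              # specific catchment area (m)
        tan_beta = np.tan(np.maximum(slope_rad, 1e-6))    # avoid division by zero on flats
        return np.log(a / tan_beta)

    def hsa_mask(twi, fraction=0.10):
        """Flag the `fraction` of the catchment with the highest TWI values."""
        threshold = np.quantile(twi, 1.0 - fraction)
        return twi >= threshold
    ```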

  10. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children.

    PubMed

    Xie, Yachun; Shi, Chunmei; Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the "vertical scores" were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time

  11. Extraction of Hydrological Proximity Measures from DEMs using Parallel Processing

    SciTech Connect

    Tesfa, Teklu K.; Tarboton, David G.; Watson, Daniel W.; Schreuders, Kimberly A.; Baker, Matthew M.; Wallace, Robert M.

    2011-12-01

    Land surface topography is one of the most important terrain properties impacting the hydrological, geomorphological, and ecological processes active on a landscape. In our previous efforts to develop a soil depth model based upon topographic and land cover variables, we extracted a set of hydrological proximity measures (HPMs) from a Digital Elevation Model (DEM) as potential explanatory variables for soil depth. These HPMs may also have other, more general modeling applicability in hydrology, geomorphology and ecology, and so are described here from a general perspective. The HPMs we derived are variations of the distance up to ridge points (cells with no incoming flow) and variations of the distance down to stream points (cells with a contributing area greater than a threshold), following the flow path. These HPMs were computed using the D-infinity flow model, which apportions flow between adjacent neighbors based on the direction of steepest downward slope on the eight triangular facets constructed in a 3 x 3 grid cell window using the center cell and each pair of adjacent neighboring grid cells in turn. The D-infinity model typically results in multiple flow paths between two points on the topography, with the result that distances may be computed as the minimum, maximum or average of the individual flow paths. In addition, each of the HPMs is calculated vertically, horizontally, and along the land surface. Previously, these HPMs were calculated using recursive serial algorithms, which suffered from stack overflow problems when used to process large datasets, limiting the size of DEMs that could be analyzed using that method to approximately 7000 x 7000 cells. To overcome this limitation, we developed a message passing interface (MPI) parallel approach for calculating these HPMs. The parallel algorithms for the HPMs spatially partition the input grid into stripes, which are each assigned to separate processes for computation. Each of those processes then uses a

  12. Method for evaluation of laboratory craters using crater detection algorithm for digital topography data

    NASA Astrophysics Data System (ADS)

    Salamunićcar, Goran; Vinković, Dejan; Lončarić, Sven; Vučina, Damir; Pehnec, Igor; Vojković, Marin; Gomerčić, Mladen; Hercigonja, Tomislav

    In our previous work the following has been done: (1) a crater detection algorithm (CDA) based on a digital elevation model (DEM) has been developed and the GT-115225 catalog has been assembled [GRS, 48 (5), in press, doi:10.1109/TGRS.2009.2037750]; and (2) the results of a comparison between explosion-induced laboratory craters in stone powder surfaces and GT-115225 have been presented using depth/diameter measurements [41st LPSC, Abstract #1428]. The next step achievable using the available technology is to create 3D scans of such laboratory craters, in order to compare different properties with simple Martian craters. In this work, we propose a formal method for the evaluation of laboratory craters, in order to provide an objective, measurable and reproducible estimation of the level of achieved similarity between these laboratory and real impact craters. In the first step, a section of MOLA data for Mars (or SELENE LALT for the Moon) is replaced with one or several 3D scans of laboratory craters. Once this embedding is done, the CDA can be used to find out whether the laboratory crater is similar enough to real craters to be recognized as a crater by the CDA. The CDA evaluation using the ROC' curve represents how the true detection rate (TDR = TP/(TP+FN) = TP/GT) depends on the false detection rate (FDR = FP/(TP+FP)). Using this curve, it is now possible to define the measure of similarity between laboratory and real impact craters as a TDR or FDR value, or as a distance from the bottom-right origin of the ROC' curve. With such an approach, a reproducible (formally described) method for the evaluation of laboratory craters is provided.
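
    A minimal sketch of the two detection-rate measures defined above, assuming the matching between CDA detections and catalogue (ground-truth) craters has already been established; the counts are invented for illustration.

    ```python
    def detection_rates(tp, fp, fn):
        """True and false detection rates as defined in the abstract.

        TDR = TP / (TP + FN) = TP / GT,  FDR = FP / (TP + FP)
        """
        tdr = tp / (tp + fn)
        fdr = fp / (tp + fp)
        return tdr, fdr

    # Toy example: 90 catalogue craters detected, 25 missed, 10 spurious detections
    print(detection_rates(tp=90, fp=10, fn=25))
    ```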

  13. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  14. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and to those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock Mechanics, 2014) in a companion paper, obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for modelling the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  15. Aquifer water abundance evaluation using a fuzzy- comprehensive weighting method

    NASA Astrophysics Data System (ADS)

    Wei, Z.

    2016-08-01

    Aquifer water abundance evaluation is a highly relevant issue that has been researched for many years. Despite prior research, problems with conventional evaluation methods remain. This paper establishes an aquifer water abundance evaluation method that combines fuzzy evaluation with a comprehensive weighting method to overcome both the subjectivity and the lack of conformity in determining weights by pure data analysis alone. First, this paper introduces the principle of the fuzzy-comprehensive weighting method. Second, the example of well field no. 3 (of a coalfield) is used to illustrate the method's process. The evaluation results show that this method can better meet the real requirements of aquifer water abundance assessment, leading to more precise and accurate evaluations. Ultimately, this paper provides a new method for aquifer water abundance evaluation.
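
    A minimal, generic sketch of the fuzzy comprehensive evaluation step that such a method combines with a weighting scheme: a weight vector is composed with a membership matrix and the grade with the largest combined membership is selected. The indicators, grades, and numbers are purely illustrative assumptions; the paper's specific comprehensive weighting procedure is not reproduced.

    ```python
    import numpy as np

    # Membership matrix R: rows = evaluation indicators (e.g. aquifer thickness,
    # permeability, core recovery), columns = water-abundance grades (weak/medium/strong).
    R = np.array([
        [0.1, 0.6, 0.3],
        [0.2, 0.5, 0.3],
        [0.4, 0.4, 0.2],
    ])

    # Comprehensive weights of the indicators (sum to 1); illustrative values only.
    W = np.array([0.5, 0.3, 0.2])

    B = W @ R                      # combined membership of each grade
    grade = ["weak", "medium", "strong"][int(np.argmax(B))]
    print(B, "->", grade)
    ```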

  16. Lava emplacements at Shiveluch volcano (Kamchatka) from June 2011 to September 2014 observed by TanDEM-X SAR-Interferometry

    NASA Astrophysics Data System (ADS)

    Heck, Alexandra; Kubanek, Julia; Westerhaus, Malte; Gottschämmer, Ellen; Heck, Bernhard; Wenzel, Friedemann

    2016-04-01

    As part of the Ring of Fire, Shiveluch volcano is one of the largest and most active volcanoes on the Kamchatka Peninsula. During the Holocene, only the southern part of the Shiveluch massif was active. Since the last Plinian eruption in 1964, the activity of Shiveluch has been characterized by periods of dome growth and explosive eruptions. The recent active phase began in 1999 and continues until today. Due to the special conditions at active volcanoes, such as smoke development, the danger of explosions or lava flows, as well as poor weather conditions and inaccessible terrain, it is difficult to observe the interaction between dome growth, dome destruction, and explosive eruptions at regular intervals. Consequently, a reconstruction of the eruption processes is hardly possible, although it is important for a better understanding of the eruption mechanism as well as for hazard forecasting and risk assessment. A new approach is provided by the bistatic radar data acquired by the TanDEM-X satellite mission. This mission is composed of two nearly identical satellites, TerraSAR-X and TanDEM-X, flying in a close helix formation. On the one hand, with an average wavelength of about 3.1 cm, the radar signals penetrate clouds and, partially, vegetation and snow. On the other hand, in comparison with conventional InSAR methods, the bistatic radar mode has the advantage that there are no difficulties due to temporal decorrelation. By interferometric evaluation of the simultaneously recorded SAR images, it is possible to calculate high-resolution digital elevation models (DEMs) of Shiveluch volcano and its surroundings. Furthermore, the short recurrence interval of 11 days makes it possible to generate time series of DEMs, from which volumetric changes of the dome and of lava flows, as well as lava effusion rates, can finally be determined. Here, this method is applied to Shiveluch volcano based on data acquired between June 2011 and September 2014. Although Shiveluch has a fissured topography with steep slopes

  17. A quick algorithm of counting flow accumulation matrix for deriving drainage networks from a DEM

    NASA Astrophysics Data System (ADS)

    Wang, Yanping; Liu, Yonghe; Xie, Hongbo; Xiang, ZhongLin

    2011-06-01

    Computerized automatic extraction of drainage networks from a Digital Elevation Model (DEM) has been widely used in hydrological modeling and related studies. Several essential procedures need to be implemented in the eight-directional (D8) watershed delineation method, and one remaining problem is the lack of a high-efficiency algorithm for quick and accurate computation of the flow accumulation matrix involved in river network delineation. The problem of depression filling has already been resolved by the algorithm presented by Oliver Planchon. This study therefore aimed to develop a simple and quick algorithm for flow accumulation matrix computation. For this purpose, a simple, high-efficiency algorithm with a time complexity of O(n), compared to the O(n²) or O(n log n) complexity of commonly used codes, has been developed. Performance tests of this newly developed algorithm were conducted for DEMs of different sizes, and the results suggest that the algorithm has a linear time complexity with increasing DEM size. The computational efficiency of the new algorithm is many times higher than that of commonly used codes; for a DEM of size 1000 × 1000, the flow accumulation matrix can be computed within only a few seconds, compared with the few minutes needed by commonly used algorithms.
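
    One way to obtain linear-time flow accumulation on a D8 grid is to process cells in topological order: start from cells that receive no inflow and push each cell's accumulated value to its single downstream neighbour, enqueueing that neighbour once all of its contributors have been resolved. This is a sketch of that general idea under an assumed direction encoding, not the authors' code.

    ```python
    from collections import deque
    import numpy as np

    # Assumed D8 neighbour offsets: E, SE, S, SW, W, NW, N, NE
    OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

    def flow_accumulation(flow_dir):
        """O(n) flow accumulation for a D8 flow-direction grid.

        flow_dir[r, c] is an index into OFFSETS (downstream neighbour), or -1 for
        cells with no outflow (outlets). Returns the number of cells draining
        through each cell, including the cell itself.
        """
        rows, cols = flow_dir.shape
        acc = np.ones((rows, cols), dtype=np.int64)
        indeg = np.zeros((rows, cols), dtype=np.int32)

        # count how many neighbours flow into each cell
        for r in range(rows):
            for c in range(cols):
                d = flow_dir[r, c]
                if d >= 0:
                    nr, nc = r + OFFSETS[d][0], c + OFFSETS[d][1]
                    if 0 <= nr < rows and 0 <= nc < cols:
                        indeg[nr, nc] += 1

        # process cells whose contributors are all resolved (Kahn-style ordering)
        queue = deque((r, c) for r in range(rows) for c in range(cols) if indeg[r, c] == 0)
        while queue:
            r, c = queue.popleft()
            d = flow_dir[r, c]
            if d < 0:
                continue
            nr, nc = r + OFFSETS[d][0], c + OFFSETS[d][1]
            if 0 <= nr < rows and 0 <= nc < cols:
                acc[nr, nc] += acc[r, c]
                indeg[nr, nc] -= 1
                if indeg[nr, nc] == 0:
                    queue.append((nr, nc))
        return acc
    ```

    Each cell is enqueued and dequeued exactly once, which is where the linear time complexity comes from.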

  18. A simplified DEM-CFD approach for pebble bed reactor simulations

    SciTech Connect

    Li, Y.; Ji, W.

    2012-07-01

    In pebble bed reactors (PBRs), the pebble flow and the coolant flow are coupled with each other through coolant-pebble interactions. Approaches with different fidelities have been proposed to simulate these phenomena. Coupled Discrete Element Method-Computational Fluid Dynamics (DEM-CFD) approaches are widely studied and applied to these problems due to their good balance between efficiency and accuracy. In this work, based on the symmetry of the PBR geometry, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without significant loss of accuracy. Pebble flow is simulated by a full 3-D DEM, while the coolant flow field is calculated with a 2-D CFD simulation by averaging variables along the annular direction in the cylindrical geometry. Results show that this simplification can greatly enhance the efficiency for a cylindrical core, which enables the further inclusion of other physics, such as thermal and neutronic effects, in multi-physics simulations of PBRs. (authors)

  19. San Francisco Bay-Delta bathymetric/topographic digital elevation model (DEM)

    USGS Publications Warehouse

    Fregoso, Theresa; Wang, Rueen-Fang; Ateljevich, Eli; Jaffe, Bruce E.

    2017-01-01

    A high-resolution (10-meter per pixel) digital elevation model (DEM) was created for the Sacramento-San Joaquin Delta using both bathymetry and topography data. This DEM is the result of collaborative efforts of the U.S. Geological Survey (USGS) and the California Department of Water Resources (DWR). The base of the DEM is a 10-m DEM released in 2004 and updated in 2005 (Foxgrover and others, 2005) that used the Environmental Systems Research Institute (ESRI) ArcGIS Topo to Raster module to interpolate grids from single-beam bathymetric surveys collected by DWR, the Army Corps of Engineers (COE), the National Oceanic and Atmospheric Administration (NOAA), and the USGS into a continuous surface. The Topo to Raster interpolation method was specifically designed to create hydrologically correct DEMs from point, line, and polygon data (Environmental Systems Research Institute, Inc., 2015). Elevation contour lines were digitized based on the single-beam point data for control of channel morphology during the interpolation process. Checks were performed to ensure that the interpolated surfaces honored the source bathymetry, and additional contours and (or) point data were added as needed to help constrain the data. The original data were collected in the tidal datum Mean Lower Low Water (MLLW) or the National Geodetic Vertical Datum of 1929 (NGVD29). All data were converted to NGVD29. The 2005 USGS DEM was updated by DWR, first by converting the DEM to the current modern datum of the North American Vertical Datum of 1988 (NAVD88) and then by following the methodology of the USGS DEM, established for the 2005 DEM (Foxgrover and others, 2005), for adding newly collected single and multibeam bathymetric data. They then included topographic data from lidar surveys, providing the first DEM that included the land/water interface (Wang and Ateljevich, 2012). The USGS further updated and expanded the DWR DEM with the inclusion of USGS interpolated sections of single beam

  20. Validation of DEMs Derived from High Resolution SAR Data: a Case Study on Barcelona

    NASA Astrophysics Data System (ADS)

    Sefercik, U. G.; Schunert, A.; Soergel, U.; Watanabe, K.

    2012-07-01

    In recent years, Synthetic Aperture Radar (SAR) data have been widely used for scientific applications and several SAR missions have been realized. The active sensor principle and the signal wavelength in the order of centimeters provide all-day and all-weather capabilities, respectively. The modern German TerraSAR-X (TSX) satellite provides high spatial resolution down to one meter. Based on such data, SAR interferometry may yield high-quality digital surface models (DSMs), which include points located on 3D objects such as vegetation, forest, and elevated man-made structures. By removing these points, a digital elevation model (DEM) representing the bare ground of the Earth is obtained. The primary objective of this paper is the validation of DEMs obtained from TSX SAR data covering the Barcelona area, Spain, in the framework of a scientific project conducted by ISPRS Working Group VII/2 "SAR Interferometry", which aims at evaluating DEMs derived from data of modern SAR satellite sensors. Towards this purpose, a DSM was generated with 10 m grid spacing using TSX StripMap mode SAR data and converted to a DEM by filtering. The accuracy results are presented with reference to a comparison with a more accurate (10 cm-1 m) digital terrain model (DTM) derived from large-scale photogrammetry. The results showed that the TSX DEM is quite coherent with the topography and the accuracy is between ±8 and ±10 m. As another application, persistent scatterer interferometry (PSI) was conducted using TSX data and the outcomes were compared with a 3D city model available in Google Earth, which is known to be very precise because it is based on LIDAR data. The results showed that the PSI outcomes are quite coherent with the reference data and the RMSZ of the differences is around 2.5 m.

  1. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  2. Evaluating space transportation sensitivities with Taguchi methods

    NASA Technical Reports Server (NTRS)

    Brown, Norman S.; Patel, Saroj

    1992-01-01

    The lunar and Mars transportation system sensitivities and their effect on cost are discussed with reference to several design concepts using Taguchi analysis. The general features of the approach are outlined, and the selected Taguchi matrix (L18) is described. The modeling results are displayed in a Design of Experiments format to aid the evaluation of sensitivities.

  3. A Ranking Method for Evaluating Constructed Responses

    ERIC Educational Resources Information Center

    Attali, Yigal

    2014-01-01

    This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank-orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…

  4. Evaluation of Alternative Methods for Wastewater Disinfection

    DTIC Science & Technology

    1994-09-01

    ...sodium metabisulfite, and sodium bisulfite are used for dechlorinating chlorinated effluents, but sulfur dioxide is the favored candidate for... Sodium metabisulfite and sodium bisulfite are safe substitutes for sulfur dioxide and are used in most small facilities. These solid dechlorination materials are... [Remaining record consists of decision-chart fragments on residual-chlorine (TRC) limits, chlorinated compounds, and the evaluation of alternative disinfection and dechlorination technologies.]

  5. Stream Morphologic Measurements from Airborne Laser Swath Mapping: Comparisons with Field Surveys, Traditional DEMs, and Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Snyder, N. P.; Schultz, L. L.

    2005-12-01

    Precise measurement of stream morphology over entire watersheds is one of the great research opportunities provided by airborne laser swath mapping (ALSM). ALSM surveys allow for rapid quantification of factors, such as channel width and gradient, that control stream hydraulic and ecologic properties. We compare measurements from digital elevation models (DEMs) derived from ALSM data collected by the National Center for Airborne Laser Mapping (NCALM) to field surveys, traditional DEMs (rasterized from topographic maps), and aerial photographs. The field site is in the northern Black Mountains in arid Death Valley National Park (California). The area is unvegetated, and therefore is excellent for testing DEM analysis methods because the ALSM data required minimal filtering, and the resulting DEM contains relatively few unphysical sinks. Algorithms contained in geographic information systems (GIS) software used to extract stream networks from DEMs yield best results where streams are steep enough for resolvable pixel-to-pixel elevation change, and channel width is on the order of pixel resolution. This presents a new challenge with ALSM-derived DEMs because the pixel size (1 m) is often an order of magnitude or more smaller than channel width. We find the longitudinal profile of Gower Gulch in the northern Black Mountains (~4 km total length) extracted using the ALSM DEM and a flow accumulation algorithm is 14% longer than a traditional 10-m DEM, and 13% longer than a field survey. These differences in length (and therefore gradient) are due to the computed channel path following small-scale topographic variations within the channel bottom that are not relevant during high flows. However, visual analysis of shaded-relief images created from high-resolution ALSM data is an excellent method for digitizing channel banks and thalweg paths. We used these lines to measure distance, elevation, and width. In Gower Gulch, the algorithm-derived profile is 10% longer than that

  6. Evaluation of temperament scoring methods for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to evaluate methods of temperament scoring. Crossbred (n=228) calves were evaluated for temperament by an individual evaluator at weaning by two methods of scoring: 1) pen score (1 to 5 scale, with higher scores indicating increasing degree of nervousness, aggressiven...

  7. Data Collection Methods for Evaluating Museum Programs and Exhibitions

    ERIC Educational Resources Information Center

    Nelson, Amy Crack; Cohn, Sarah

    2015-01-01

    Museums often evaluate various aspects of their audiences' experiences, be it what they learn from a program or how they react to an exhibition. Each museum program or exhibition has its own set of goals, which can drive what an evaluator studies and how an evaluation evolves. When designing an evaluation, data collection methods are purposefully…

  8. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
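
    A minimal sketch of the "run the routine k times at perturbed values, fit an approximating polynomial, then integrate the explicit form" strategy described above. Plain Monte Carlo on the fitted surrogate stands in for the fast probability integration step (FORM/SORM-style FPI is not reproduced here), and the response function, distributions, and numbers are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_response(x1, x2):
        # stand-in for the costly structural analysis routine
        return 3.0 - 0.8 * x1 + 0.5 * x2 + 0.1 * x1 * x2

    # 1) run the routine k times at perturbed input values
    k = 9
    x1_pts = np.repeat([-1.0, 0.0, 1.0], 3)
    x2_pts = np.tile([-1.0, 0.0, 1.0], 3)
    y_pts = expensive_response(x1_pts, x2_pts)

    # 2) fit an explicit quadratic response surface to the k "data" points
    A = np.column_stack([np.ones(k), x1_pts, x2_pts, x1_pts * x2_pts,
                         x1_pts**2, x2_pts**2])
    coef, *_ = np.linalg.lstsq(A, y_pts, rcond=None)

    def surrogate(x1, x2):
        return (coef[0] + coef[1] * x1 + coef[2] * x2 +
                coef[3] * x1 * x2 + coef[4] * x1**2 + coef[5] * x2**2)

    # 3) estimate P(Y < 0) on the cheap explicit form (Monte Carlo instead of FPI here)
    x1_s = rng.normal(0.0, 1.0, 1_000_000)
    x2_s = rng.normal(0.0, 1.0, 1_000_000)
    p_fail = np.mean(surrogate(x1_s, x2_s) < 0.0)
    print(f"estimated failure probability: {p_fail:.2e}")
    ```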

  9. TanDEM-X DEMs and feature-tracking of Helheim and Kangerdlugssuaq glaciers in south-east Greenland

    NASA Astrophysics Data System (ADS)

    Bevan, Suzanne; Luckman, Adrian; Murray, Tavi

    2013-04-01

    We use sequences of TanDEM-X acquisitions over the 'supersites' Helheim and Kangerdlugssuaq glaciers in south-east Greenland to generate interferometric digital elevation models (DEMs) and to feature-track surface displacement between image acquisitions. The high spatial resolution, day/night, and cloud-penetrating capabilities of the X-band SAR system enabled the production of more than 20 DEMs for each glacier with a spatial resolution of 8 m or better. The DEMs span the period June 2011 to March 2012, at 11-day intervals, with a few breaks. Time-lapse animations of Helheim DEMs reveal the development of troughs in surface elevation close to the front. The troughs propagate down flow and develop into the rifts from which calving takes place. On both glaciers, regions of high variance in elevation can be identified, caused by the transit of crevasses. In addition, on Helheim, a 1 km wide band of high variance adjacent to the calving front may be interpreted as the response to tidal forcing of a partially floating tongue. In addition to the DEMs, we will also present feature-tracked, high-quality surface velocity fields at a spatial resolution of 2 m, coincident with the DEMs. On Helheim these velocity fields indicate a winter deceleration of less than 10% at a point 4 km behind the calving front.

  10. A DEM-based partition adjustment for the interpolation of annual cumulative temperature in China

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Li, Fei; Fu, Haiyue; Tian, Ying; Hu, Zizhi

    2007-06-01

    The spatial interpolation of meteorological elements has important application value, and interpolation methods for air temperature data have been widely applied over large regions. Increasing attention has been paid to introducing altitude as a variable into the interpolation models in order to improve the interpolation precision of air temperature data. Over a large area, it is difficult to find the relationship between annual cumulative temperature and altitude from the distribution of meteorological stations alone. By dividing the study area and introducing DEM-modified interpolation models in the smaller regions, the spatial interpolation precision of annual cumulative temperature can be effectively improved. The results show that, applied to the partitioned study area, the inverse distance squared method modified by DEM can reduce the complexity of spatial data analysis in the process of annual cumulative temperature interpolation. Partition interpolation methods take into account factors that affect the interpolation results, such as the uneven spatial distribution of the meteorological stations, altitude, and regional differences, and are suitable for the interpolation analysis of large-scale regions. Compared with traditional interpolation methods such as Kriging and inverse distance interpolation, the inverse distance squared method modified by DEM achieves higher interpolation precision for annual cumulative temperature in China.
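
    A minimal sketch of inverse-distance-squared interpolation with a simple DEM-based elevation adjustment: station values are first reduced to a reference elevation with an assumed lapse-type coefficient, interpolated, and then mapped back to the elevation of the target DEM cell. The lapse value, station data, and single-partition handling are assumptions for illustration; the paper's partition-specific fitted relationships are not reproduced.

    ```python
    import numpy as np

    def idw2_with_dem(stations_xy, station_vals, station_elev,
                      target_xy, target_elev, lapse=-2.0):
        """Inverse-distance-squared interpolation with an elevation correction.

        lapse : assumed change of the interpolated variable per metre of elevation
                (illustrative value; in practice fitted per partition).
        """
        # reduce station values to a common reference elevation (here 0 m)
        reduced = station_vals - lapse * station_elev

        d2 = np.sum((stations_xy - target_xy) ** 2, axis=1)
        if np.any(d2 == 0):                      # target coincides with a station
            val0 = reduced[np.argmin(d2)]
        else:
            w = 1.0 / d2                         # inverse distance squared weights
            val0 = np.sum(w * reduced) / np.sum(w)

        # map back to the elevation of the target DEM cell
        return val0 + lapse * target_elev

    # Toy usage with three fictitious stations (values in degree-days)
    xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    vals = np.array([4200.0, 4350.0, 4100.0])
    elev = np.array([500.0, 300.0, 800.0])
    print(idw2_with_dem(xy, vals, elev, np.array([4.0, 4.0]), 600.0))
    ```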

  11. Development of DEM formalism to modeling the dynamic response of brittle solids

    NASA Astrophysics Data System (ADS)

    Grigoriev, Aleksandr S.; Shilko, Eugeny V.; Psakhie, Sergey G.

    2016-11-01

    The paper presents a numerical model of the response of brittle materials to dynamic mechanical loading and an implementation of the model within the discrete element method (DEM), using the example of the movable cellular automaton (MCA) method. Verification of the model was carried out by numerical modeling of uniaxial compression tests on concrete and sandstone samples at various strain rates. It is shown that the developed model is correct and adequately describes the behavior of brittle materials under dynamic loading.

  12. Reanalysis of the DEMS Nested Case-Control Study of Lung Cancer and Diesel Exhaust: Suitability for Quantitative Risk Assessment

    PubMed Central

    Crump, Kenny S; Van Landingham, Cynthia; Moolgavkar, Suresh H; McClellan, Roger

    2015-01-01

    The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to “carcinogenic to humans.” The Diesel Exhaust in Miners Study (DEMS) cohort and nested case-control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC’s determination. We conducted a reanalysis of the DEMS case-control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE. PMID:25857246

  13. The Study on Educational Technology Abilities Evaluation Method

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Traditional evaluation methods often fail to measure what they are actually intended to measure, so test results alone cannot serve as a sound basis for evaluation, and conclusions weighed on such results are of limited worth. The system described here makes full use of educational technology and is based on educational and psychological theory. Taking the evaluation object as its basis and supported by evaluation tools, it targets the educational technology abilities of primary and secondary school teachers, and uses a variety of evaluation methods, from various angles, to establish an informal evaluation system.

  14. EarthEnv-DEM90: A nearly-global, void-free, multi-scale smoothed, 90m digital elevation model from fused ASTER and SRTM data

    NASA Astrophysics Data System (ADS)

    Robinson, Natalie; Regetz, James; Guralnick, Robert P.

    2014-01-01

    A variety of DEM products are available to the public at no cost, though all are characterized by trade-offs in spatial coverage, data resolution, and quality. The absence of a high-resolution, high-quality, well-described and vetted, free, global consensus product was the impetus for the creation of a new DEM product described here, 'EarthEnv-DEM90'. This new DEM is a compilation dataset constructed via rigorous techniques by which ASTER GDEM2 and CGIAR-CSI v4.1 products were fused into a quality-enhanced, consistent grid of elevation estimates that spans ∼91% of the globe. EarthEnv-DEM90 was assembled using methods for seamlessly merging input datasets, thoroughly filling voids, and smoothing data irregularities (e.g. those caused by DEM noise) from the approximated surface. The result is a DEM product in which elevational artifacts are strongly mitigated from the input data fusion zone, substantial voids are filled in the northern-most regions of the globe, and the entire DEM exhibits reduced terrain noise. As important as the final product is a well defined methodology, along with new processing techniques and careful attention to final outputs, that extends the value and usability of the work beyond just this single product. Finally, we outline EarthEnv-DEM90 acquisition instructions and metadata availability, so that researchers can obtain this high-resolution, high-quality, nearly-global new DEM product for the study of wide-ranging global phenomena.

  15. LiDAR DEM for Slope regulations of land development in Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, J.-K.; Yang, M.-S.; Wu, M.-C.; Hsu, W.-C.

    2012-04-01

    Slope gradient is a major parameter for regulating the development of slope-lands in Taiwan. According to official guidelines, only two methods can be adopted, namely the rectangular parcel method and the parcel contouring method. Both are manual methods that use conventional analogue maps produced by photogrammetric means. Because the trend of technology favours adopting digital elevation models for automated production of slope maps, and because complete DEM coverage of the territory of Taiwan in 40m, 5m and 1m grids has been mostly completed, it is necessary to assess how the DEM-based approaches differ from the official approaches, which remain the only legally recognized procedures to date. Thus, a 1/1000 contour map of sloping land in a suburban area of New Taipei City is selected for this study. Manual approaches are carried out using the contour lines at 2m intervals. DEM grids of 1m, 5m, and 10m are generated by LiDAR survey. It is shown that the slope maps generated by the Eight Neighbors Unweighted method are comparable to, or even better than, those produced by the conventional approaches. As the conventional approach is prone to error propagation and uncertainty, the new digital approach should be implemented and enforced in due process of law.
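
    As an illustration of the grid-based approach compared above, the sketch below computes a slope map from a DEM array using an unweighted eight-neighbour finite-difference scheme, one common reading of the "Eight Neighbors Unweighted" method; the array contents and cell size are hypothetical.

      import numpy as np

      def slope_eight_neighbors_unweighted(dem, cell_size):
          """Slope (degrees) from a DEM grid using unweighted 8-neighbour differences.

          dem       : 2-D numpy array of elevations (metres)
          cell_size : grid spacing (metres); square cells assumed
          """
          # Pad edges so the interior formula also applies at the border.
          z = np.pad(dem, 1, mode="edge")

          # Neighbour shorthand, laid out as in the usual 3x3 window (centre omitted):
          a, b, c = z[:-2, :-2], z[:-2, 1:-1], z[:-2, 2:]
          d,    f = z[1:-1, :-2],              z[1:-1, 2:]
          g, h, i = z[2:, :-2],  z[2:, 1:-1],  z[2:, 2:]

          # Unweighted finite differences: all neighbours count equally.
          dz_dx = ((c + f + i) - (a + d + g)) / (6.0 * cell_size)
          dz_dy = ((g + h + i) - (a + b + c)) / (6.0 * cell_size)

          return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

      # Example on a synthetic 5 m grid (values are illustrative only):
      dem = np.outer(np.arange(50), np.ones(50)) * 2.0   # uniform 2 m rise per row
      print(slope_eight_neighbors_unweighted(dem, cell_size=5.0).mean())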

  16. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  17. Spotlight COSMO-SkyMed DEM generation and validation

    NASA Astrophysics Data System (ADS)

    Lombardi, N.; Lorusso, R.; Milillo, G.

    2016-10-01

    This paper focuses on the generation of Digital Elevation Models (DEMs) from COSMO-SkyMed Spotlight data. In particular, the peculiarity of Spotlight data, which are affected by Doppler centroid drift, is investigated, together with the use of the processing chain included in the Delft Object-oriented Radar Interferometric Software (DORIS [1]). The effect of a Doppler drift that is not correctly handled is shown: when the standard interferometric processing, without Doppler drift handling, is applied to Spotlight image pairs, interferometric coherence is lost in the interferograms as one moves away from the scene center. The standard processing chain has therefore been modified to take into account the Doppler centroid drift affecting Spotlight data, and DEMs of very high resolution and accuracy have been obtained. Several Spotlight image pairs have been processed, and the obtained DEMs are shown and analyzed, demonstrating the high level of detail and product accuracy.

  18. Test methods for evaluating reformulated fuels

    SciTech Connect

    Croudace, M.C.

    1994-12-31

    The US Environmental Protection Agency (EPA) introduced regulations in the 1989 Clean Air Act Amendment governing the reformulation of gasoline and diesel fuels to improve air quality. These statutes drove the need for a fast and accurate method for analyzing product composition, especially aromatic and oxygenate content. The current method, gas chromatography, is slow, expensive, non-portable, and requires a trained chemist to perform the analysis. The new mid-infrared spectroscopic method uses light to identify and quantify the different components in fuels. Each individual fuel component absorbs a specific wavelength of light depending on the molecule's unique chemical structure. The quantity of light absorbed is proportional to the concentration of that fuel component in the mixture. The mid-infrared instrument has significant advantages; it is easy to use, rugged, portable, fully automated and cost effective. It can be used to measure multiple oxygenate or aromatic components in unknown fuel mixtures. Regulatory agencies have begun using this method in field compliance testing; petroleum refiners and marketers use it to monitor compliance, product quality and blending accuracy.

  19. Evaluation of Electrochemical Methods for Electrolyte Characterization

    NASA Technical Reports Server (NTRS)

    Heidersbach, Robert H.

    2001-01-01

    This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.

  20. Animal Methods for Evaluating Forage Quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Numerous methods are available that employ animals in the assessment of forage quality. Some of these procedures provide information needed to address very specific goals (e.g., monitoring protein adequacy), some serve as useful contributors to the efforts to accurately predict nutritive value, wher...

  1. An entropy-based objective evaluation method for image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Fritts, Jason E.; Goldman, Sally A.

    2003-12-01

    Accurate image segmentation is important for many image, video and computer vision applications. Over the last few decades, many image segmentation methods have been proposed. However, the results of these segmentation methods are usually evaluated only visually, qualitatively, or indirectly by the effectiveness of the segmentation on the subsequent processing steps. Such methods are either subjective or tied to particular applications. They do not judge the performance of a segmentation method objectively, and cannot be used as a means to compare the performance of different segmentation techniques. A few quantitative evaluation methods have been proposed, but these early methods have been based entirely on empirical analysis and have no theoretical grounding. In this paper, we propose a novel objective segmentation evaluation method based on information theory. The new method uses entropy as the basis for measuring the uniformity of pixel characteristics (luminance is used in this paper) within a segmentation region. The evaluation method provides a relative quality score that can be used to compare different segmentations of the same image. This method can be used to compare both various parameterizations of one particular segmentation method as well as fundamentally different segmentation techniques. The results from this preliminary study indicate that the proposed evaluation method is superior to the prior quantitative segmentation evaluation techniques, and identify areas for future research in objective segmentation evaluation.
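
    To make the entropy idea concrete, the sketch below scores a labelled segmentation by the area-weighted luminance entropy of its regions (lower is more uniform). This is a minimal sketch of the general principle only, not the exact measure proposed in the paper; the image and label arrays are hypothetical.

      import numpy as np

      def region_entropy_score(luminance, labels, bins=256):
          """Area-weighted entropy of luminance within each segmentation region.

          luminance : 2-D array of pixel luminance values (e.g. 0..255)
          labels    : 2-D integer array, same shape, giving a region id per pixel
          Returns a relative quality score: lower = more internally uniform regions.
          """
          total = luminance.size
          score = 0.0
          for region_id in np.unique(labels):
              values = luminance[labels == region_id]
              hist, _ = np.histogram(values, bins=bins, range=(0, 256))
              p = hist[hist > 0] / values.size          # empirical luminance distribution
              entropy = -np.sum(p * np.log2(p))         # Shannon entropy of the region
              score += (values.size / total) * entropy  # weight by region area
          return score

      # Compare two hypothetical segmentations of the same image:
      # score_a = region_entropy_score(img, seg_a)
      # score_b = region_entropy_score(img, seg_b)
      # the segmentation with the lower score has more internally uniform regions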

  2. Organic ion exchange resin separation methods evaluation

    SciTech Connect

    Witwer, K.S.

    1998-05-27

    This document describes testing to find effective methods to separate Organic Ion Exchange Resin (OIER) from a sludge simulant. This task supports a comprehensive strategy for treatment and processing of K-Basin sludge. The simulant to be used resembles sludge that has accumulated in the 105KE and 105KW Basins in the 100K area of the Hanford Site. The sludge is an accumulation of fuel element corrosion products, organic and inorganic ion exchange materials, canister gasket materials, iron and aluminum corrosion products, sand, dirt, and other minor amounts of organic matter.

  3. Development of high-resolution coastal DEMs: Seamlessly integrating bathymetric and topographic data to support coastal inundation modeling

    NASA Astrophysics Data System (ADS)

    Eakins, B. W.; Taylor, L. A.; Warnken, R. R.; Carignan, K. S.; Sharman, G. F.

    2006-12-01

    The National Geophysical Data Center (NGDC), an office of the National Oceanic and Atmospheric Administration (NOAA), is cooperating with the NOAA Pacific Marine Environmental Laboratory (PMEL), Center for Tsunami Research to develop high-resolution digital elevation models (DEMs) of combined bathymetry and topography. The coastal DEMs will be used as input for the Method of Splitting Tsunami (MOST) model developed by PMEL to simulate tsunami generation, propagation and inundation. The DEMs will also be useful in studies of coastal inundation caused by hurricane storm surge and rainfall flooding, resulting in valuable information for local planners involved in disaster preparedness. We present our methodology for creating the high-resolution coastal DEMs, typically at 1/3 arc-second (10 meters) cell size, from diverse digital datasets collected by numerous methods, in different terrestrial environments, and at various scales and resolutions; one important step is establishing the relationships between various tidal and geodetic vertical datums, which may vary over a gridding region. We also discuss problems encountered and lessons learned, using the Myrtle Beach, South Carolina DEM as an example.

  4. Adaptive smoothing of valleys in DEMs using TIN interpolation from ridgeline elevations: An application to morphotectonic aspect analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Gyozo

    2007-05-01

    This paper presents a smoothing method that eliminates valleys of various Strahler-order drainage lines from a digital elevation model (DEM), thus enabling the recovery of local and regional trends in a terrain. A novel method for automated extraction of high-density channel network is developed to identify ridgelines defined as the watershed boundaries of channel segments. A DEM using TIN interpolation is calculated based on elevations of digitally extracted ridgelines. This removes first-order watersheds from the DEM. Higher levels of DEM smoothing can be achieved by the application of the method to ridgelines of higher-order channels. The advantage of the proposed smoothing method over traditional smoothing methods of moving kernel, trend and spectral methods is that it does not require pre-definition of smoothing parameters, such as kernel or trend parameters, and thus it follows topography in an adaptive way. Another advantage is that smoothing is controlled by the physical-hydrological properties of the terrain, as opposed to mathematical filters. Level of smoothing depends on ridgeline geometry and density, and the applied user-defined channel order. The method requires digital extraction of a high-density channel and ridgeline network. The advantage of the smoothing method over traditional methods is demonstrated through a case study of the Kali Basin test site in Hungary. The smoothing method is used in this study for aspect generalisation for morphotectonic investigations in a small watershed.
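
    A minimal sketch of the core smoothing step, assuming the ridgeline cells have already been extracted: elevations sampled at ridgeline points are interpolated across the grid with a TIN-style (piecewise-linear Delaunay) interpolator to produce the valley-free surface. Function and array names are hypothetical, not the author's implementation.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

      def smooth_dem_from_ridgelines(dem, ridge_mask):
          """Replace the DEM with a TIN surface interpolated from ridgeline elevations.

          dem        : 2-D array of elevations
          ridge_mask : 2-D boolean array, True where a cell lies on an extracted ridgeline
          """
          rows, cols = np.nonzero(ridge_mask)
          points = np.column_stack([rows, cols])
          values = dem[ridge_mask]

          # Piecewise-linear interpolation over the Delaunay triangulation of ridge points
          tin = LinearNDInterpolator(points, values)
          # Fall back to nearest-neighbour outside the convex hull of the ridgelines
          nearest = NearestNDInterpolator(points, values)

          rr, cc = np.mgrid[0:dem.shape[0], 0:dem.shape[1]]
          smoothed = tin(rr, cc)
          holes = np.isnan(smoothed)
          smoothed[holes] = nearest(rr[holes], cc[holes])
          return smoothed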

  5. Evaluation of an extensive speckle measurement method

    NASA Astrophysics Data System (ADS)

    Roelandt, Stijn; Meuret, Youri; Craggs, Gordon; Verschaffelt, Guy; Janssens, Peter; Thienpont, Hugo

    2012-06-01

    The introduction of lasers for projection applications is hampered by the emergence of speckle. In order to evaluate the speckle distorted image quality, it is important to devise an objective way to measure the amount of speckle. Mathematically, speckle can be described by its speckle contrast value C, which is given by the ratio between the standard deviation of the intensity fluctuations and the mean intensity. Because the measured speckle contrast strongly depends on the parameters of the measurement setup, in this paper we propose a standardized procedure to measure the amount of speckle in laser based projection systems. To obtain such a procedure, the influence of relevant measurement set-up parameters is investigated. The resulting measurement procedure consists of a single digital image sensor in combination with a camera lens. The parameters of the camera lens are chosen such that the measured speckle contrast values correspond with the subjective speckle perception of a human observer, independent of the projector's speckle reduction mechanism(s). Finally, the speckle measurement procedure was performed with different cameras and the results were compared.

  6. A new method to evaluate energy technologies

    SciTech Connect

    Shibata, Y.; Clark, D.J.

    1985-04-01

    As the world's oil reserves are rapidly depleted and numerous alternative energy technologies are proposed, a perplexing and urgent question confronts us: how can we assess these technologies in order to choose those which will deliver the most desirable consequences. This task, the evaluation of technology, is a form of policy study where the intent is to examine the broadest societal implications (technological, economic, environmental, legal, social, emotional, etc.) related to the development and deployment of existing or emerging technology. In a strategic planning process, one of the principal tools contributing to effective leadership is a carefully designed framework for guiding the discussion. The five step approach and the discussion support system by computer described in this article are a starting point for such a framework. The leader can advance group thinking by offering interpretive summaries using the computer and lead the group from one step to the next by giving transitional statements based on the five step approach. The constant challenge for him is to maintain the balance between freedom and control which makes for progress and yet does not act to stifle creative thinking.

  7. Evaluation criteria and test methods for electrochromic windows

    SciTech Connect

    Czanderna, A.W. ); Lampert, C.M. )

    1990-07-01

    This report summarizes the test methods used for evaluating electrochromic (EC) windows, reviews what is known about degradation of their performance, and recommends methods and procedures for advancing EC windows toward building applications. 77 refs., 13 figs., 6 tabs.

  8. FIELD VALIDATION OF SEDIMENT TOXICITY IDENTIFICATION AND EVALUATION METHODS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  9. Household batteries: Evaluation of collection methods

    SciTech Connect

    Seeberger, D.A.

    1992-01-01

    While it is difficult to prove that a specific material is causing contamination in a landfill, tests have been conducted at waste-to-energy facilities that indicate that household batteries contribute significant amounts of heavy metals to both air emissions and ash residue. Hennepin County, MN, used a dual approach for developing and implementing a special household battery collection. Alternative collection methods were examined; test collections were conducted. The second phase examined operating and disposal policy issues. This report describes the results of the grant project, moving from a broad examination of the construction and content of batteries, to a description of the pilot collection programs, and ending with a discussion of variables affecting the cost and operation of a comprehensive battery collection program. Three out-of-state companies (PA, NY) were found that accept spent batteries; difficulties in reclaiming household batteries are discussed.

  10. Household batteries: Evaluation of collection methods

    SciTech Connect

    Seeberger, D.A.

    1992-12-31

    While it is difficult to prove that a specific material is causing contamination in a landfill, tests have been conducted at waste-to-energy facilities that indicate that household batteries contribute significant amounts of heavy metals to both air emissions and ash residue. Hennepin County, MN, used a dual approach for developing and implementing a special household battery collection. Alternative collection methods were examined; test collections were conducted. The second phase examined operating and disposal policy issues. This report describes the results of the grant project, moving from a broad examination of the construction and content of batteries, to a description of the pilot collection programs, and ending with a discussion of variables affecting the cost and operation of a comprehensive battery collection program. Three out-of-state companies (PA, NY) were found that accept spent batteries; difficulties in reclaiming household batteries are discussed.

  11. A hybrid FEM-DEM approach to the simulation of fluid flow laden with many particles

    NASA Astrophysics Data System (ADS)

    Casagrande, Marcus V. S.; Alves, José L. D.; Silva, Carlos E.; Alves, Fábio T.; Elias, Renato N.; Coutinho, Alvaro L. G. A.

    2017-04-01

    In this work we address a contribution to the study of particle laden fluid flows in scales smaller than TFM (two-fluid models). The hybrid model is based on a Lagrangian-Eulerian approach. A Lagrangian description is used for the particle system employing the discrete element method (DEM), while a fixed Eulerian mesh is used for the fluid phase modeled by the finite element method (FEM). The resulting coupled DEM-FEM model is integrated in time with a subcycling scheme. The aforementioned scheme is applied in the simulation of a seabed current to analyze which mechanisms lead to the emergence of bedload transport and sediment suspension, and also quantify the effective viscosity of the seabed in comparison with the ideal no-slip wall condition. A simulation of a salt plume falling in a fluid column is performed, comparing the main characteristics of the system with an experiment.

  12. Mixed Methods and Credibility of Evidence in Evaluation

    ERIC Educational Resources Information Center

    Mertens, Donna M.; Hesse-Biber, Sharlene

    2013-01-01

    We argue for a view of credible evidence that is multidimensional in philosophical and methodological terms. We advocate for the importance of deepening the meaning of credible evaluation practice and findings by bringing multiple philosophical and theoretical lenses to the evaluation process as a basis for the use of mixed methods in evaluation,…

  13. Iodine absorption cells quality evaluation methods

    NASA Astrophysics Data System (ADS)

    Hrabina, Jan; Zucco, Massimo; Holá, Miroslava; Šarbort, Martin; Acef, Ouali; Du-Burck, Frédéric; Lazar, Josef; Číp, Ondřej

    2016-12-01

    Absorption cells represent a unique tool for laser frequency stabilization. They serve as irreplaceable optical frequency references in the realization of highly stable laser standards and laser sources for many kinds of optical measurement, including the most precise frequency and dimensional measurement systems. One of the most widely used absorption media covering the visible and near-IR spectral range is molecular iodine. It offers a rich atlas of very strong and narrow spectral transitions that allow the realization of laser systems with ultimate frequency stabilities at or below the 10^-14 level. One of the most often discussed disadvantages of iodine cells is iodine's corrosivity and sensitivity to the presence of foreign substances. Impurities react with the absorption medium and cause spectral shifts of the absorption spectra, spectral broadening of the transitions, and a decrease in the achievable signal-to-noise ratio of the detected spectra. All of these unwanted effects directly influence the frequency stability of the realized laser standard, and for this reason the quality of iodine cells must be precisely controlled. We present a comparison of the traditionally used method of laser-induced fluorescence (LIF) with a novel technique based on measuring the linewidths of hyperfine transitions. The results summarize the advantages and drawbacks of these techniques and give a recommendation for their practical usage.

  14. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations in the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of the wavefront's gradient estimation from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete mathematical-grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams.
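
    The final reconstruction step described above, integrating measured partial derivatives with a Fourier-based algorithm, can be sketched with the standard Frankot-Chellappa least-squares integrator. The paper does not name its algorithm, so this particular choice is an assumption; grid spacing and gradient arrays are hypothetical.

      import numpy as np

      def integrate_gradients_fourier(dzdx, dzdy, dx=1.0, dy=1.0):
          """Least-squares surface reconstruction from its partial derivatives
          via the Frankot-Chellappa method (FFT-based Poisson solver)."""
          rows, cols = dzdx.shape
          # Spatial frequencies corresponding to each FFT bin
          wx = 2.0 * np.pi * np.fft.fftfreq(cols, d=dx)
          wy = 2.0 * np.pi * np.fft.fftfreq(rows, d=dy)
          WX, WY = np.meshgrid(wx, wy)

          Gx = np.fft.fft2(dzdx)
          Gy = np.fft.fft2(dzdy)

          denom = WX**2 + WY**2
          denom[0, 0] = 1.0                       # avoid division by zero at DC
          Z = (-1j * WX * Gx - 1j * WY * Gy) / denom
          Z[0, 0] = 0.0                           # absolute height (piston) is unconstrained

          return np.real(np.fft.ifft2(Z))

      # Usage with hypothetical gradient maps estimated from the foucaultgrams:
      # surface = integrate_gradients_fourier(dzdx, dzdy, dx=pixel_pitch, dy=pixel_pitch)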

  15. Systems and methods for circuit lifetime evaluation

    NASA Technical Reports Server (NTRS)

    Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)

    2013-01-01

    Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes using a computer to determine the lifetime at which the worst case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error given the environment experienced by the system during its lifetime. In addition, performing WCA on a system with respect to a specific system lifetime includes identifying subcircuits within the system, performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime, when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime, when a subcircuit fails EVA performing at least one additional WCA process that provides a tighter bound on the WCA than EVA to determine whether the subcircuit fails WCA for the specified system lifetime, determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA, and determining that the system fails WCA when at least one subcircuit fails WCA.
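
    The iterative search described above can be illustrated with a short sketch: step through candidate lifetimes, run the cheap and then the tighter worst-case checks on each subcircuit, and report the longest lifetime at which everything still passes. The helper functions passes_eva and passes_tight_wca are hypothetical stand-ins for the analyses named in the record.

      from typing import Callable, Iterable, Sequence

      def estimate_system_lifetime(
          subcircuits: Sequence[str],
          lifetimes: Iterable[float],
          passes_eva: Callable[[str, float], bool],        # cheap Extreme Value Analysis
          passes_tight_wca: Callable[[str, float], bool],  # tighter, more expensive WCA
      ) -> float:
          """Return the longest candidate lifetime for which every subcircuit passes WCA."""
          best = 0.0
          for lifetime in sorted(lifetimes):
              for sub in subcircuits:
                  # A subcircuit that passes the cheap EVA bound needs no further work.
                  if passes_eva(sub, lifetime):
                      continue
                  # Otherwise fall back to a tighter worst-case analysis.
                  if not passes_tight_wca(sub, lifetime):
                      return best        # system fails WCA at this lifetime
              best = lifetime            # all subcircuits passed at this lifetime
          return best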

  16. Using Developmental Evaluation Methods with Communities of Practice

    ERIC Educational Resources Information Center

    van Winkelen, Christine

    2016-01-01

    Purpose: This paper aims to explore the use of developmental evaluation methods with community of practice programmes experiencing change or transition to better understand how to target support resources. Design/methodology/approach: The practical use of a number of developmental evaluation methods was explored in three organizations over a…

  17. Conceptual evaluation of population health surveillance programs: method and example.

    PubMed

    El Allaki, Farouk; Bigras-Poulin, Michel; Ravel, André

    2013-03-01

    Veterinary and public health surveillance programs can be evaluated to assess and improve the planning, implementation and effectiveness of these programs. Guidelines, protocols and methods have been developed for such evaluation. In general, they focus on a limited set of attributes (e.g., sensitivity and simplicity), that are assessed quantitatively whenever possible, otherwise qualitatively. Despite efforts at standardization, replication by different evaluators is difficult, making evaluation outcomes open to interpretation. This ultimately limits the usefulness of surveillance evaluations. At the same time, the growing demand to prove freedom from disease or pathogen, and the Sanitary and Phytosanitary Agreement and the International Health Regulations require stronger surveillance programs. We developed a method for evaluating veterinary and public health surveillance programs that is detailed, structured, transparent and based on surveillance concepts that are part of all types of surveillance programs. The proposed conceptual evaluation method comprises four steps: (1) text analysis, (2) extraction of the surveillance conceptual model, (3) comparison of the extracted surveillance conceptual model to a theoretical standard, and (4) validation interview with a surveillance program designer. This conceptual evaluation method was applied in 2005 to C-EnterNet, a new Canadian zoonotic disease surveillance program that encompasses laboratory based surveillance of enteric diseases in humans and active surveillance of the pathogens in food, water, and livestock. The theoretical standard used for evaluating C-EnterNet was a relevant existing structure called the "Population Health Surveillance Theory". Five out of 152 surveillance concepts were absent in the design of C-EnterNet. However, all of the surveillance concept relationships found in C-EnterNet were valid. The proposed method can be used to improve the design and documentation of surveillance programs. It

  18. Automatic Delineation of Sea-Cliff Limits Using Lidar-Derived High-Resolution DEMs in Southern California

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Danielson, J.; Foxgrover, A. C.; Barnard, P.; Thatcher, C.; Brock, J. C.

    2014-12-01

    Seacliff erosion is a serious hazard with implications for coastal management, and is often estimated using successive hand-digitized cliff tops or bases (toes) to assess cliff retreat. Traditionally the recession of the cliff top or cliff base is obtained from aerial photographs, topographic maps, or in situ surveys. Irrespective of how or what is measured to categorize cliff erosion, the position of the cliff top and cliff base is important. Habitually, the cliff top and base are hand digitized even when using high-resolution lidar-derived DEMs. Even when efforts are made to standardize and eliminate digitizing subjectivity as much as possible, the delineation of cliffs is time consuming and depends on the analyst's interpretation. We propose an automatic procedure to delineate the cliff top and base from high-resolution bare-earth DEMs. The method is based on bare-earth high-resolution DEMs, generalized coastal shorelines and approximate measurements of distance between the shoreline and the cliff top. The method generates orthogonal transects and profiles with a minimum spacing equal to the DEM resolution and extracts, for each profile, the xyz coordinates of the cliff top and toe, as well as the second major positive and negative inflections (second top and toe) along the profile. The difference between the automated and digitized top and toe, respectively, is smaller than the DEM error margin for over 82% of the top points and 86% of the toe points along a stretch of coast in Del Mar, CA. The larger errors were due either to the failure to remove all vegetation from the bare-earth DEM or to errors of interpretation during hand digitizing. The automatic method was further applied between Point Conception and Los Angeles Harbor, CA. The method is repeatable, takes full advantage of the high resolution of the bare-earth DEM, and is more efficient than hand digitizing.
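
    One way to realise the per-profile extraction described above is to locate the strongest positive and negative inflections of the elevation profile, i.e. the extrema of its second derivative, along each shore-normal transect. The sketch below follows that reading; it is an assumption about the implementation, not the authors' exact algorithm.

      import numpy as np

      def cliff_top_and_toe(distance, elevation):
          """Pick cliff top and toe along a single shore-normal profile.

          distance  : 1-D array, distance along the transect (seaward to landward)
          elevation : 1-D array of elevations sampled from the bare-earth DEM
          Returns (toe_index, top_index) into the profile arrays.
          """
          # Smooth lightly to suppress DEM noise before differentiating.
          kernel = np.ones(5) / 5.0
          z = np.convolve(elevation, kernel, mode="same")

          # Profile curvature: concave (positive) at the toe, convex (negative) at the top.
          curvature = np.gradient(np.gradient(z, distance), distance)

          toe_index = int(np.argmax(curvature))   # strongest concave inflection
          top_index = int(np.argmin(curvature))   # strongest convex inflection
          return toe_index, top_index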

  19. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods used in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  20. How to Reach Evidence-Based Usability Evaluation Methods.

    PubMed

    Marcilly, Romaric; Peute, Linda

    2017-01-01

    This paper discusses how and why to build evidence-based knowledge on usability evaluation methods. At each step of building evidence, requisites and difficulties to achieve it are highlighted. Specifically, the paper presents how usability evaluation studies should be designed to allow capitalizing evidence. Reciprocally, it presents how evidence-based usability knowledge will help improve usability practice. Finally, it underlines that evaluation and evidence participate in a virtuous circle that will help improve scientific knowledge and evaluation practice.

  1. Stress analysis during slope failure from DEM simulations

    NASA Astrophysics Data System (ADS)

    Katz, O.; Morgan, J. K.

    2012-04-01

    We used Discrete Element Method (DEM) simulations to study the initiation and evolution of landsliding, with a focus on the development and propagation of the sliding plane, and on the effects of material strength on the behavior of the slope material during landsliding. Our simulated slopes were constructed of homogeneous materials, settled under gravity, bonded, and excavated to produce 70 deg slopes of 1050 m in height. Nine simulations were carried out, each using a different value of cohesion, ranging from 0.7 to 4.2 MPa (quantified through DEM direct shear simulations on representative materials). In each of our simulations, failure initiated at the foot of the slope, accompanied by disintegration of the slope material. Failure then propagated upward to the slope crest with further material disintegration. A discrete detachment surface formed below the disintegrated material. Downslope movement of the failed material (i.e. landsliding) occurred only after the failure plane intersected the upper slope face. By the end of landsliding, the disintegrated slope material formed a talus-like deposit at the foot of the slope. The value of initial material cohesion influenced the nature of the landslide deposit and its dimensions. Higher material strengths produced smaller landslides, as well as the occurrence of discrete landslide blocks, which originated from the shallow slopes and became entrained within the finer talus. Stress analysis of the slope failure process clarifies how failure initiates and landsliding evolves, and further constrains the limiting failure criteria that define each simulated material. The local proximity to failure throughout the slope can be tracked during the simulation, revealing that high failure potential (high shear stress relative to mean stress) exists at the toe of the slope immediately following excavation. As material disintegrates near the toe of the slope, high tensile stresses develop in the overlying mass, causing the break

  2. A hybrid method for evaluating enterprise architecture implementation.

    PubMed

    Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam

    2017-02-01

    Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. Existing EA evaluation models have insufficient practices in terms of considering all EA functions and processes, using structured methods in developing the EA implementation, employing mature practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step is to identify EA implementation evaluation practices. To this end, a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation and Information Systems (ISs) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects.

  3. Development of an unresolved CFD-DEM model for the flow of viscous suspensions and its application to solid-liquid mixing

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Lassaigne, Manon; Goniva, Christoph; Fradette, Louis; Bertrand, François

    2016-08-01

    Although viscous solid-liquid mixing plays a key role in the industry, the vast majority of the literature on the mixing of suspensions is centered around the turbulent regime of operation. However, the laminar and transitional regimes face considerable challenges. In particular, it is important to know the minimum impeller speed (Njs) that guarantees the suspension of all particles. In addition, local information on the flow patterns is necessary to evaluate the quality of mixing and identify the presence of dead zones. Multiphase computational fluid dynamics (CFD) is a powerful tool that can be used to gain insight into local and macroscopic properties of mixing processes. Among the variety of numerical models available in the literature, which are reviewed in this work, unresolved CFD-DEM, which combines CFD for the fluid phase with the discrete element method (DEM) for the solid particles, is an interesting approach due to its accurate prediction of the granular dynamics and its capability to simulate large numbers of particles. In this work, the unresolved CFD-DEM method is extended to viscous solid-liquid flows. Different solid-liquid momentum coupling strategies, along with their stability criteria, are investigated and their accuracies are compared. Furthermore, it is shown that an additional sub-grid viscosity model is necessary to ensure the correct rheology of the suspensions. The proposed model is used to study solid-liquid mixing in a stirred tank equipped with a pitched blade turbine. It is validated qualitatively by comparing the particle distribution against experimental observations, and quantitatively by comparing the fraction of suspended solids with results obtained via the pressure gauge technique.

  4. Comparison of elevation derived from insar data with dem from topography map in Son Dong, Bac Giang, Viet Nam

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy

    2012-07-01

    Digital Elevation Models (DEMs) are used in many applications in the earth sciences, such as topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, and seismic source modeling. Over the last years a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques have evolved. It has been shown that InSAR is an established technique for generating high-quality DEMs from spaceborne and airborne data, and that it has advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data and compares a satellite SAR estimate of surface elevation with a digital elevation model (DEM) from a topographic map. The operational steps are performed in three major stages: Data Search, Data Processing, and Product Validation. The Data Processing stage is further divided into five steps: Data Pre-Processing, Co-registration, Interferogram generation, Phase unwrapping, and Geocoding. The Data Processing steps have been tested with ERS 1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results of applying the described processing steps to a real data set are presented.
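
    The interferogram-generation step in the chain above reduces, at its core, to a pixel-wise complex conjugate multiplication of the two co-registered SLC images, usually followed by multilooking. A minimal sketch with hypothetical array names is shown below; it is an illustration of the principle, not the DORIS implementation.

      import numpy as np

      def form_interferogram(slc_master, slc_slave, looks_az=5, looks_rg=1):
          """Interferogram and coherence from two co-registered complex SLC images."""
          # Pixel-wise complex conjugate product: phase = phi_master - phi_slave
          ifg = slc_master * np.conj(slc_slave)

          # Simple multilooking (boxcar averaging) to reduce phase noise
          def multilook(a):
              r, c = a.shape
              r, c = r - r % looks_az, c - c % looks_rg
              return a[:r, :c].reshape(r // looks_az, looks_az,
                                       c // looks_rg, looks_rg).mean(axis=(1, 3))

          ifg_ml = multilook(ifg)
          coherence = np.abs(ifg_ml) / np.sqrt(
              multilook(np.abs(slc_master) ** 2) * multilook(np.abs(slc_slave) ** 2))

          return np.angle(ifg_ml), coherence   # wrapped interferometric phase, coherence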

  5. Finding the service you need: human centered design of a Digital Interactive Social Chart in DEMentia care (DEM-DISC).

    PubMed

    van der Roest, H G; Meiland, F J M; Haaker, T; Reitsma, E; Wils, H; Jonker, C; Dröes, R M

    2008-01-01

    Community dwelling people with dementia and their informal carers experience a lot of problems. In the course of the disease process people with dementia become more dependent on others and professional help is often necessary. Many informal carers and people with dementia experience unmet needs with regard to information on the disease and on the available care and welfare offer, therefore they tend not to utilize the broad spectrum of available care and welfare services. This can have very negative consequences like unsafe situations, social isolation of the person with dementia and overburden of informal carers with consequent increased risk of illness for them. The development of a DEMentia specific Digital Interactive Social Chart (DEM-DISC) may counteract these problems. DEM-DISC is a demand oriented website for people with dementia and their carers, which is easy, accessible and provides users with customized information on healthcare and welfare services. DEM-DISC is developed according to the human centered design principles, this means that people with dementia, informal carers and healthcare professionals were involved throughout the development process. This paper describes the development of DEM-DISC from four perspectives, a domain specific content perspective, an ICT perspective, a user perspective and an organizational perspective. The aims and most important results from each perspective will be discussed. It is concluded that the human centered design was a valuable method for the development of the DEM-DISC.

  6. An interdisciplinary heuristic evaluation method for universal building design.

    PubMed

    Afacan, Yasemin; Erbug, Cigdem

    2009-07-01

    This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.

  7. Evaluation of Test Methods for Pyrotechnic Hazard Classification

    DTIC Science & Technology

    1975-03-01

    EM-CR-74051 (EA-4D01). Technical Report, by Wayne R. Wilcox, March 1975. The hazard classification procedures of TB 700-2 are improperly applied to pyrotechnics. Forty-six test methods

  8. DEM Simulation of Particle Clogging in Fiber Filtration

    NASA Astrophysics Data System (ADS)

    Tao, Ran; Yang, Mengmeng; Li, Shuiqing

    2015-11-01

    The formation of porous particle deposits plays a crucial role in determining the efficiency of filtration process. In this work, an adhesive discrete element method (DEM), in combination with CFD, is developed to dynamically describe these porous deposit structures and the changed flow field between two parallel fibers under the periodic boundary conditions. For the first time, it is clarified that the structures of clogged particles are dependent on both the adhesion parameter (defined as the ratio of interparticle adhesion to particle inertia) and the Stokes number (as an index of impaction efficiency). The relationship between the pressure-drop gradient and the coordination number along the filtration time is explored, which can be used to quantitatively classify the different filtration regimes, i.e., clean filter stage, clogging stage and cake filtration stage. Finally, we investigate the influence of the fiber separation distance on the particle clogging behavior, which affects the collecting efficiency of the fibers significantly. The results suggest that changing the arrangement of fibers can improve the filter performance. This work has been funded by the National Key Basic Research and Development Program (2013CB228506).

  9. Dem Retrieval And Ground Motion Monitoring In China

    NASA Astrophysics Data System (ADS)

    Gatti, Guido; Perissin, Daniele; Wang, Teng; Rocca, Fabio

    2010-10-01

    This paper considers topographic measurement and analysis based on multi-baseline Synthetic Aperture Radar data. In 2009, the ongoing work focused on taking advantage of Permanent Scatterers (PS) Interferometry to estimate terrain elevation and ground motion in non-urban contexts. An adapted version of the method, namely the Quasi-PS (QPS) technique, has been used in order to exploit the distributed-target information. One of the analyzed datasets concerns the mountainous area around Zhangbei, Hebei Province, from which a geocoded Digital Elevation Model (DEM) has been retrieved. Regarding ground motion monitoring, our attention focused on two different areas. The first is a small area near the Three Gorges Dam, in which ground deformations have been identified and measured. The second area covers the western part of the municipality of Shanghai, centered on a straight railway. The subsidence in that zone has been measured and the interferometric coherence of the railway has been studied, according to the hypothesis of spatial and temporal stability of this kind of target.

  10. A global vegetation corrected SRTM DEM for use in hazard modelling

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; O'Loughlin, F.; Neal, J. C.; Durand, M. T.; Alsdorf, D. E.; Paiva, R. C. D.

    2015-12-01

    We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hazard modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hazard modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction, based on information about canopy height and density. Our new 'Bare-Earth' SRTM DEM combines multiple remote sensing datasets, including ICESat GLA14 ground elevations, the vegetation continuous field dataset as a proxy for penetration depth of SRTM and a global vegetation height map, to remove the vegetation artefacts present in the original SRTM DEM. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas with the overall mean bias reduced by between 75 and 92% from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33 % from 7.12 m to 4.80 m. As
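
    In schematic form, the correction subtracts from each SRTM pixel the fraction of the canopy height that the C-band signal did not penetrate, with that fraction tied to canopy density. The sketch below illustrates the idea with a single global penetration parameter; the parameter value and array names are hypothetical, and the published product uses regionalised parameters rather than this simple form.

      import numpy as np

      def bare_earth_srtm(srtm, canopy_height, canopy_cover, k_penetration=0.6):
          """Remove vegetation bias from an SRTM tile (illustrative only).

          srtm          : 2-D array, raw SRTM elevations (m)
          canopy_height : 2-D array, vegetation height map (m)
          canopy_cover  : 2-D array, canopy density / continuous-field cover (0..1)
          k_penetration : assumed scaling between cover and the un-penetrated fraction
          """
          # Fraction of the canopy height by which the radar phase centre sits above bare ground
          bias = k_penetration * canopy_cover * canopy_height
          return srtm - bias

      # e.g. corrected = bare_earth_srtm(srtm_tile, veg_height_tile, veg_cover_tile)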

  11. The Use of DEM to Capture the Dynamics of the Flow of Solid Pellets in a Single Screw Extruder

    NASA Astrophysics Data System (ADS)

    Hong, He; Covas, J. A.; Gaspar-Cunha, A.

    2007-05-01

    Despite the developments in the numerical modeling of polymer plasticating single-screw extrusion, the initial stages of solids conveying are still treated unsatisfactorily, a simple plug-flow condition being assumed. It is well known that this produces poor predictions of relevant process parameters, e.g., output. This work reports on an attempt to model the process using the Discrete Element Method (DEM) with the aim of unveiling the dynamics of the process. Using DEM, each pellet is treated as a separate unit, so that predictions of flow patterns, velocity fields and degree of filling are possible. We present the algorithm and a few preliminary results.

  12. Aster Global dem Version 3, and New Aster Water Body Dataset

    NASA Astrophysics Data System (ADS)

    Abrams, M.

    2016-06-01

    In 2016, the US/Japan ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) project released Version 3 of the Global DEM (GDEM). This 30 m DEM covers the earth's surface from 82N to 82S, and improves on two earlier versions by correcting some artefacts and filling in areas of missing DEMs by the acquisition of additional data. The GDEM was produced by stereocorrelation of 2 million ASTER scenes and operation on a pixel-by-pixel basis: cloud screening; stacking data from overlapping scenes; removing outlier values, and averaging elevation values. As previously, the GDEM is packaged in ~ 23,000 1 x 1 degree tiles. Each tile has a DEM file, and a NUM file reporting the number of scenes used for each pixel, and identifying the source for fill-in data (where persistent clouds prevented computation of an elevation value). An additional data set was concurrently produced and released: the ASTER Water Body Dataset (AWBD). This is a 30 m raster product, which encodes every pixel as either lake, river, or ocean; thus providing a global inland and shore-line water body mask. Water was identified through spectral analysis algorithms and manual editing. This product was evaluated against the Shuttle Water Body Dataset (SWBD), and the Landsat-based Global Inland Water (GIW) product. The SWBD only covers the earth between about 60 degrees north and south, so it is not a global product. The GIW only delineates inland water bodies, and does not deal with ocean coastlines. All products are at 30 m postings.
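
    The per-pixel stacking described above can be sketched as: collect the scene DEM values for a pixel, drop outliers relative to the stack median, and average the remainder. The outlier rule used here is an assumption for illustration, not the project's exact filter.

      import numpy as np

      def stack_pixel(values, max_dev=20.0):
          """Combine elevation estimates for one pixel from overlapping scene DEMs.

          values  : 1-D array of candidate elevations (NaN where a scene was cloud-screened)
          max_dev : reject values further than this (m) from the stack median (assumed rule)
          Returns (elevation, n_used) -- NaN and 0 if nothing usable remains.
          """
          v = np.asarray(values, dtype=float)
          v = v[~np.isnan(v)]
          if v.size == 0:
              return np.nan, 0                       # would be filled from another source
          good = v[np.abs(v - np.median(v)) <= max_dev]
          return float(good.mean()), int(good.size)  # mean elevation and per-pixel scene count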

  13. A 'Drift' algorithm for integrating vector polyline and DEM based on the spherical DQG

    NASA Astrophysics Data System (ADS)

    Wang, Jiaojiao; Wang, Lei; Cao, Wenmin; Zhao, Xuesheng

    2014-03-01

    The efficient integration of vector and DEM data on a global scale is one of the important issues in the Digital Earth community. Among the existing methods, the geometry-based approach maintains the characteristics of vector data necessary for inquiry and analysis. However, the complexity of the geometry-based approach, which requires a large amount of interpolation calculation, greatly limits its application to multi-source spatial data integration on a global scale. To overcome this serious deficiency, a novel 'drift' algorithm is developed based on the spherical Degenerate Quadtree Grid (DQG) on which the global DEM data are represented. The main principle of this algorithm is that a vector node in a DQG cell can be moved to the cell corner-point without changing the visualization effect if the cell is smaller than or equal to a pixel of the screen. A detailed algorithm and the multi-scale operation steps are also presented. With the 'drift' algorithm, the vector polylines and DEM grids are integrated seamlessly, avoiding a large amount of interpolation calculation. Based on the approach described above, we have developed a computer program with the OpenGL 3D API in the VC++ language. In this experiment, USGS GTOPO30 DEM data and 1:1,000,000 DCW road data sets in the China area are selected. Tests have shown that the time consumption of the 'drift' algorithm is only about 25% of that of the traditional ones; moreover, the mean error of the drift operation on vector nodes can be controlled within about half a DQG cell. In the end, conclusions and future work are also given.
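
    The core 'drift' test can be sketched simply: if the DQG cell containing a polyline vertex projects to no more than one screen pixel, the vertex is snapped ('drifted') to the nearest cell corner so that it lies exactly on the DEM grid and no elevation interpolation is needed. The sketch below works in a local planar approximation with hypothetical names; the actual method operates on spherical DQG cells.

      def drift_vertex(x, y, cell_size, pixel_size, corner_elevation):
          """Snap a vector vertex to the nearest DEM cell corner when allowed.

          x, y             : vertex coordinates
          cell_size        : ground size of the DQG/DEM cell at this level
          pixel_size       : ground size of one screen pixel at the current view scale
          corner_elevation : function (ix, iy) -> elevation at grid corner indices
          Returns (x, y, z) of the (possibly drifted) vertex.
          """
          if cell_size <= pixel_size:
              # Drift: move to the nearest corner; the on-screen shift is sub-pixel,
              # so the displayed geometry is unchanged and no interpolation is needed.
              ix, iy = round(x / cell_size), round(y / cell_size)
              return ix * cell_size, iy * cell_size, corner_elevation(ix, iy)
          # Cell larger than a pixel: keep the vertex and interpolate its elevation
          # (bilinear interpolation step omitted in this sketch).
          return x, y, None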

  14. A Preliminary Rubric Design to Evaluate Mixed Methods Research

    ERIC Educational Resources Information Center

    Burrows, Timothy J.

    2013-01-01

    With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…

  15. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
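
    The AHP half of such a method can be illustrated with a short sketch that derives indicator weights from a pairwise-comparison matrix via its principal eigenvector and checks consistency; the example matrix and indicator names are hypothetical, and the SPA step is omitted.

      import numpy as np

      # Saaty's Random Index values for the consistency check, indexed by matrix order
      RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

      def ahp_weights(pairwise):
          """Indicator weights and consistency ratio from an AHP pairwise-comparison matrix."""
          A = np.asarray(pairwise, dtype=float)
          n = A.shape[0]
          eigvals, eigvecs = np.linalg.eig(A)
          k = np.argmax(eigvals.real)                      # principal eigenvalue
          w = np.abs(eigvecs[:, k].real)
          w /= w.sum()                                     # normalised weights
          ci = (eigvals[k].real - n) / (n - 1)             # consistency index
          cr = ci / RI[n] if RI[n] > 0 else 0.0            # consistency ratio (< 0.1 acceptable)
          return w, cr

      # A hypothetical 3-indicator comparison matrix (e.g. transaction history, complaints, delivery):
      A = [[1, 3, 5],
           [1/3, 1, 2],
           [1/5, 1/2, 1]]
      weights, cr = ahp_weights(A)
      print(weights, cr)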

  16. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    SciTech Connect

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
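
    The quadrature at the heart of thermodynamic integration is straightforward once samples from the power posteriors are available: the log marginal likelihood is the integral over the power coefficient beta in [0, 1] of the expected log-likelihood under each power posterior. The sketch below assumes a user-supplied MCMC routine (sample_power_posterior is hypothetical) and applies a simple trapezoidal rule over the temperature ladder.

      import numpy as np

      def log_marginal_likelihood_ti(sample_power_posterior, log_likelihood,
                                     n_temps=32, n_samples=2000):
          """Thermodynamic-integration estimate of ln p(D).

          sample_power_posterior(beta, n) : hypothetical user-supplied MCMC routine returning
              n parameter samples from p_beta(theta) ~ p(D|theta)**beta * p(theta)
          log_likelihood(theta)           : log p(D | theta) for one parameter sample
          """
          # Temperature ladder; clustering betas near 0 usually improves accuracy
          betas = np.linspace(0.0, 1.0, n_temps) ** 4

          # Expected log-likelihood under each power posterior
          means = []
          for beta in betas:
              thetas = sample_power_posterior(beta, n_samples)
              means.append(np.mean([log_likelihood(t) for t in thetas]))

          # ln p(D) = integral over beta of E_beta[ln p(D|theta)]  (trapezoidal rule)
          return np.trapz(means, betas)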

  17. Entrepreneur environment management behavior evaluation method derived from environmental economy.

    PubMed

    Zhang, Lili; Hou, Xilin; Xi, Fengru

    2013-12-01

    Evaluation system can encourage and guide entrepreneurs, and impel them to perform well in environment management. An evaluation method based on advantage structure is established. It is used to analyze entrepreneur environment management behavior in China. Entrepreneur environment management behavior evaluation index system is constructed based on empirical research. Evaluation method of entrepreneurs is put forward, from the point of objective programming-theory to alert entrepreneurs concerned to think much of it, which means to take minimized objective function as comprehensive evaluation result and identify disadvantage structure pattern. Application research shows that overall behavior of Chinese entrepreneurs environmental management are good, specially, environment strategic behavior are best, environmental management behavior are second, cultural behavior ranks last. Application results show the efficiency and feasibility of this method.

  18. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
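
    The sparsification experiment described above can be approximated with a short sketch using the PyWavelets package: transform the DEM, zero the smallest 95 percent of coefficients by magnitude, and reconstruct. Using 'db3' (a 6-tap Daubechies filter) as a reading of "DAUB6", and the decomposition level, are assumptions of this sketch.

      import numpy as np
      import pywt

      def sparsify_dem(dem, wavelet="db3", level=4, keep_fraction=0.05):
          """Zero all but the largest `keep_fraction` of wavelet coefficients of a DEM."""
          coeffs = pywt.wavedec2(dem, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)

          # Threshold chosen so that only the largest 5 % of coefficients (by magnitude) survive
          threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
          arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

          coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
          recon = pywt.waverec2(coeffs_sparse, wavelet)
          return recon[: dem.shape[0], : dem.shape[1]]   # trim possible transform padding

      # residual = dem - sparsify_dem(dem); slope and aspect of the reconstruction
      # can then be compared against those of the original DEM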

  19. 2D DEM model of sand transport with wind interaction

    NASA Astrophysics Data System (ADS)

    Oger, L.; Valance, A.

    2013-06-01

    The advance of the dunes in the desert is a threat to the life of the local people. The dunes invade houses and agricultural land and perturb the circulation on the roads. It is therefore very important to understand the mechanism of sand transport in order to fight against desertification. Saltation, in which sand grains are propelled by the wind along the surface in short hops, is the primary mode of blown sand movement [1]. The saltating grains are very energetic and, when they impact a sand surface, they rebound and consequently eject other particles from the sand bed. The ejected grains, called reptating grains, contribute to the augmentation of the sand flux, and some of them can be promoted to saltation motion. We use a mechanical model based on the Discrete Element Method to study successive collisions of incident energetic beads with a granular packing in the context of aeolian saltation transport. We investigate the collision process for the case where the incident bead and those from the packing have identical mechanical properties. We analyze the features of the consecutive collision processes produced by the transport of the saltating disks by a wind whose profile is obtained from the counter-interaction between the air flow and the grain flow. We used a molecular dynamics method known as DEM (soft Discrete Element Method) with an initial static packing of 20,000 2D particles. The dilation of the upper surface due to the consecutive collisions is responsible for maintaining the flow at a given energy input from the wind.

  20. Evaluation of two gas-dilution methods for instrument calibration

    NASA Technical Reports Server (NTRS)

    Evans, A., Jr.

    1977-01-01

    Two gas dilution methods were evaluated for use in the calibration of analytical instruments used in air pollution studies. A dual isotope fluorescence carbon monoxide analyzer was used as the transfer standard. The methods are not new but some modifications are described. The rotary injection gas dilution method was found to be more accurate than the closed loop method. Results by the two methods differed by 5 percent. This could not be accounted for by the random errors in the measurements. The methods avoid the problems associated with pressurized cylinders. Both methods have merit and have found a place in instrument calibration work.

  1. Detailed geomorphological mapping from high resolution DEM data (LiDAR, TanDEM-X): two case studies from Germany and SE Tibet

    NASA Astrophysics Data System (ADS)

    Loibl, D.

    2012-04-01

    Two major obstacles are hampering the production of high resolution geomorphological maps: the complexity of the subject that should be depicted and the enormous efforts necessary to obtain data by field work. The first factor prevented the establishment of a generally accepted map legend; the second hampered efforts to collect comprehensive sets of geomorphological data. This left geomorphologists to produce applied maps, focusing on very few layers of information and often not sticking to any of the numerous standards proposed in the second half of the 20th century. Technological progress of the recent years, especially in the fields of digital elevation models, GIS environments, and computational hardware, today offers promising opportunities to overcome the obstacles and to produce detailed geomorphological maps even for remote or inhospitable regions. The feasibility of detailed geomorphological mapping from two new sets of digital elevation data, the 1 m LiDAR DTM provided by Germany's State Surveying Authority and the upcoming TanDEM-X DEM, has been evaluated in two case studies from a low mountain range in Germany and a high mountain range in SE Tibet. The results indicate that most layers of information of classical geomorphological maps (e.g. the German GMK) can be extracted from this data at appropriate scales but that significant differences occur concerning the quality and the grades of certainty of key contents. Generally, an enhancement of the geomorphographical, especially the geomorphometrical, and a weakening of geomorphogenetical contents was observed. From these findings, theoretical, methodological, and cartographical remarks on detailed geomorphological mapping from DEM data in GIS environments were educed. As GIS environments decouple data and design and enable the geomorphologist to choose information layer combinations freely to fit research topics, a general purpose legend becomes obsolete. Yet, a unified data structure is demanded to

  2. ASTM test methods for composite characterization and evaluation

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1994-01-01

    A discussion of the American Society for Testing and Materials is given. Under the topic of composite materials characterization and evaluation, general industry practice and test methods for textile composites are presented.

  3. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD

    EPA Science Inventory

    Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...

  4. [Validation and regulatory acceptance of alternative methods for toxicity evaluation].

    PubMed

    Ohno, Yasuo

    2004-01-01

    For regulatory acceptance of alternative methods (AMs) to animal toxicity tests, their reproducibility and relevance should be determined by intra- and inter-laboratory validation. Appropriate procedures for the validation and regulatory acceptance of AMs were recommended by the OECD in 1996. According to those principles, several in vitro methods, such as skin corrosivity tests and phototoxicity tests, were evaluated and accepted by ECVAM (European Center for the Validation of Alternative Methods), ICCVAM (the Interagency Coordinating Committee on the Validation of Alternative Methods), and the OECD. Because of the difficulties in conducting inter-laboratory validation and the relatively short period remaining until the EU's ban on animal experiments for the safety evaluation of cosmetics, ECVAM and ICCVAM have recently started cooperating in the validation and evaluation of AMs. It is also necessary to establish JaCVAM (Japanese Center for the Validation of AM) to contribute to this effort and to evaluate new toxicity tests originating in Japan.

  5. Evaluation of methods of temperament scoring for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Temperament can negatively affect various production traits, including live weight, ADG, DMI, conception rates and carcass weight. The objective of this research study was to evaluate temperament scoring methods in beef cattle. Crossbred (n = 228) calves were evaluated for temperament at weaning by ...

  6. Comparative study of heuristic evaluation and usability testing methods.

    PubMed

    Thyvalikakath, Thankam Paul; Monaco, Valerie; Thambuganipalle, Himabindu; Schleyer, Titus

    2009-01-01

    Usability methods, such as heuristic evaluation, cognitive walk-throughs and user testing, are increasingly used to evaluate and improve the design of clinical software applications. There is still some uncertainty, however, as to how those methods can be used to support the development process and evaluation in the most meaningful manner. In this study, we compared the results of a heuristic evaluation with those of formal user tests in order to determine which usability problems were detected by both methods. We conducted heuristic evaluation and usability testing on four major commercial dental computer-based patient records (CPRs), which together cover 80% of the market for chairside computer systems among general dentists. Both methods yielded strong evidence that the dental CPRs have significant usability problems. An average of 50% of empirically-determined usability problems were identified by the preceding heuristic evaluation. Some statements of heuristic violations were specific enough to precisely identify the actual usability problem that study participants encountered. Other violations were less specific, but still manifested themselves in usability problems and poor task outcomes. In this study, heuristic evaluation identified a significant portion of problems found during usability testing. While we make no assumptions about the generalizability of the results to other domains and software systems, heuristic evaluation may, under certain circumstances, be a useful tool to determine design problems early in the development cycle.

  7. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).

  8. A formative multi-method approach to evaluating training.

    PubMed

    Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham

    2016-10-01

    This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct, evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-prong framework (inquiry, observation, and reflection). These methods included: comprehensive survey, rapid feedback form, learning wall, observational form, team debrief, social network analysis and critical moments reflection. The seven distinct methods allowed for both real time quality improvement during the CHILA and long term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training.

  9. Contemporary ice-elevation changes on central Chilean glaciers using SRTM1 and high-resolution DEMs

    NASA Astrophysics Data System (ADS)

    Vivero, Sebastian; MacDonell, Shelley

    2016-04-01

    Glaciers located in central Chile have undergone significant retreat in recent decades. Whilst studies have evaluated area loss of several glaciers, there are no detailed studies of volume losses. This lack of information restricts not only estimations of current and future contributions to sea level rise, but also has limited the evaluation of freshwater resource availability in the region. Recently, the Chilean Water Directorate has supported the collection of field and remotely sensed data in the region which has enabled glacier changes to be evaluated in greater detail. This study aims to compare high-resolution laser scanning DEMs acquired by the Chilean Water Directorate in April 2015 with the recently released SRTM 1 arc-second DEM (˜30 m) acquired in February 2000 to calculate geodetic mass balance changes for three glaciers in a catchment in central Chile over a 15-year period. Detailed analysis of the SRTM and laser scanning DEMs, together with the glacier outlines enable the quantification of elevation and volume changes. Glacier outlines from February 2000 were obtained using the multispectral analysis of a Landsat TM image, whereas outlines from April 2015 were digitised from high resolution glacier orthophotomosaics. Additionally, we accounted for radar penetration into snow and/or ice by evaluating elevation differences between SRTM C-and X-bands, as well as mis-registration between SRTM DEM and the high-resolution DEMs. Over the period all glaciers show similar ice wastage in the order of 0.03 km3 for the debris-covered and non-covered glaciers. However, whilst on the non-covered glaciers mass loss is largely related to elevation and the addition of surface sediment, on the debris-covered glacier, losses are related to the development of thermokarst features. By analysing the DEM in conjunction with Landsat images, we have detected changes in the sediment cover of the non-covered glaciers, which is likely to change the behaviour of the surface mass

  10. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software package was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we discuss new values of the charge generation constants calculated from improved experimental procedures and data. Planned work to include dielectrophoretic forces, van der Waals forces, and advanced mechanical forces in the software is also discussed.

  11. CapDEM Exercise Gamma: Results and Discussion

    DTIC Science & Technology

    2011-06-01

    internal team' using CapDEM towards the reality of external groups using the CapDEM approach to address their own problem by themselves. The results... enable and support different internal and external configurations of the classified CEE requires further study, including both technical and security... rather than a completely independent test and evaluation, the Exercise placed less emphasis on an 'internal team' using the CapDEM (DIGCap) approach and instead...

  12. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  13. Comparison of induction motor field efficiency evaluation methods

    SciTech Connect

    Hsu, J.S.; Kueck, J.D.; Olszewski, M.; Casada, D.A.; Otaduy, P.J.; Tolbert, L.M.

    1996-10-01

    Unlike laboratory testing of motor efficiency, certain methods given in the IEEE-Std 112 cannot be used to evaluate motor efficiency in the field. For example, it is difficult to load a motor in the field with a dynamometer when the motor is already coupled to driven equipment. Motor efficiency field evaluation faces a different environment from that for which the IEEE-Std 112 was chiefly written. A field evaluation method consists of one or several basic methods according to their physical natures. Their intrusiveness and accuracy are also discussed. This study is useful for field engineers in selecting or establishing a proper efficiency evaluation method by understanding the theories and error sources of the methods.

  14. Approach to evaluating leak detection methods in underground storage tanks

    NASA Astrophysics Data System (ADS)

    Starr, J.; Broscious, J.; Niaki, S.

    1986-10-01

    The detection and evaluation of leaks in underground storage tanks require a detailed knowledge of conditions both within the tank and in the nearby surroundings. The test apparatus, as constructed, enables data regarding these environmental conditions to be readily obtained and incorporated in a carefully structured test program that minimizes the amount of costly full-scale testing that would otherwise be required to evaluate volumetric leak detection methods for underground storage tanks. In addition, sufficient flexibility has been designed into the apparatus to enable additional evaluations of non-volumetric test methods to be conducted, and different types of tanks and products to be tested in a cost-effective manner.

  15. Evaluation of Stress Loaded Steel Samples Using Selected Electromagnetic Methods

    SciTech Connect

    Chady, T.

    2004-02-26

    In this paper the magnetic leakage flux and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and used to evaluate the level of the applied stress. A strong correspondence between the amount of applied stress and the maximum amplitude of the signal derivative was confirmed.

  16. Best Estimate Method vs Evaluation Method: a comparison of two techniques in evaluating seismic analysis and design

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-05-01

    The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  17. Application of Bistatic TanDEM-X Interferometry to Measure Lava Flow Volume and Lava Extrusion Rates During the 2012-13 Tolbachik, Kamchatka Fissure Eruption

    NASA Astrophysics Data System (ADS)

    Kubanek, J.; Westerhaus, M.; Heck, B.

    2015-12-01

    Aerial imaging methods are a well-established means for mapping lava flows during eruptions and can serve as a basis for assessing the eruption dynamics and determining the affected area. However, clouds and smoke often prevent optical systems such as the Earth Observation Advanced Land Imager (EO-1-ALI, operated by NASA) from mapping lava flows properly, which affects their reliability. Furthermore, the amount of lava extruded during an eruption cannot be determined from optical images, although it can contribute significantly to assessing the accompanying hazard and risk. One way to monitor active lava flows is to quantify the topographic changes over time using up-to-date high-resolution digital elevation models (DEMs). Whereas photogrammetric methods still fail when clouds and fume obstruct the view, radar satellite missions have the potential to generate high-resolution DEMs at any time. The bistatic TanDEM-X (TerraSAR-X Add-on for Digital Elevation Measurements) satellite mission enables, for the first time, the repeated generation of high-resolution DEMs from synthetic aperture radar data at reasonable cost. The satellite mission consists of the two nearly identical satellites TerraSAR-X and TanDEM-X, which form a large synthetic aperture radar interferometer with adaptable across- and along-track baselines, aiming to generate topographic information globally. In the present study, we use TanDEM-X data to study the lava flows that were emplaced during the 2012-13 Tolbachik, Kamchatka fissure eruption. The eruption was composed of very fluid lava flows that effused along a northeast-southwest trending fissure. We used about fifteen bistatic data pairs to generate DEMs prior to, during, and after the eruption. Differencing the DEMs enables mapping the lava flow field at different times, which allows measuring the extruded volume and deriving the changes in lava extrusion over time.

  18. Infrared image quality evaluation method without reference image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Ren, Tingting; Wang, Chengsheng; Lei, Bo; Zhang, Zhijie

    2013-09-01

    Since infrared image quality depends on many factors, such as the optical performance and electrical noise of the thermal imager, image quality evaluation is an important issue that can benefit both subsequent image processing and the improvement of thermal imager capability. There are two ways of evaluating infrared image quality, with or without a reference image. For real-time thermal imagery, the method without a reference image is preferred because it is difficult to obtain a standard image. Although various evaluation methods exist, there is no general metric for image quality evaluation. This paper introduces a novel method to evaluate infrared images without a reference image from five aspects: noise, clarity, information volume and levels, information in the frequency domain, and the capability of automatic target recognition. Generally, the basic image quality is obtained from the first four aspects, and the quality of the target is acquired from the last aspect. The proposed method is tested on several infrared images captured by different thermal imagers; the indicators are calculated and compared with human visual assessments. The evaluation shows that this method successfully describes the characteristics of infrared images and that the result is consistent with the human visual system.
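
    The abstract does not give the exact formulas, so the sketch below only illustrates the kind of no-reference indicators such a scheme might compute (noise, clarity, grey-level entropy, and high-frequency energy); the metric definitions and the target-recognition aspect are assumptions, not the authors' implementation.

        # Hypothetical no-reference indicators in the spirit of the abstract;
        # the exact metrics and weights used by the authors are not specified here.
        import numpy as np

        def no_reference_indicators(img):
            img = img.astype(float)
            # Noise: standard deviation of a Laplacian-like high-pass residual.
            highpass = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                                     + np.roll(img, 1, 1) + np.roll(img, -1, 1))
            noise = highpass.std()
            # Clarity: mean gradient magnitude (edge sharpness proxy).
            gy, gx = np.gradient(img)
            clarity = np.hypot(gx, gy).mean()
            # Information volume: grey-level entropy of a 256-bin histogram.
            hist, _ = np.histogram(img, bins=256)
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))
            # Frequency-domain information: fraction of spectral energy above a cutoff.
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            cy, cx = np.array(spectrum.shape) // 2
            yy, xx = np.indices(spectrum.shape)
            radius = np.hypot(yy - cy, xx - cx)
            high_freq_ratio = spectrum[radius > min(cy, cx) / 4].sum() / spectrum.sum()
            return {"noise": noise, "clarity": clarity,
                    "entropy": entropy, "high_freq_ratio": high_freq_ratio}

        # Example on a synthetic "thermal" frame.
        frame = np.random.default_rng(2).normal(300, 5, (240, 320))
        print(no_reference_indicators(frame))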

  19. A Mixed Methods Approach to Understanding School Counseling Program Evaluation: High School Counselors' Methods and Perceptions

    ERIC Educational Resources Information Center

    Aucoin, Jennifer Mangrum

    2013-01-01

    The purpose of this mixed methods concurrent triangulation study was to examine the program evaluation practices of high school counselors. A total of 294 high school counselors in Texas were assessed using a mixed methods concurrent triangulation design. A researcher-developed survey, the School Counseling Program Evaluation Questionnaire…

  20. DEM analysis for AIA/SDO EUV channels using a probabilistic approach to the spectral inverse problem

    NASA Astrophysics Data System (ADS)

    Goryaev, Farid; Parenti, Susanna; Hochedez, Jean-François; Urnov, Alexander

    The Atmospheric Imaging Assembly (AIA) for the Solar Dynamics Observatory (SDO) mission is designed to observe the Sun from the photosphere to the flaring corona. These data are expected to improve our understanding of processes in the solar atmosphere. Differential emission measure (DEM) analysis is one of the main methods to derive information about coronal optically thin plasma characteristics from EUV and SXR emission. In this work we analyze the AIA/SDO EUV channels to estimate their ability to reconstruct DEM(T) distributions. We use an iterative method (called the Bayesian iterative method, BIM) within the framework of a probabilistic approach to the spectral inverse problem for determining the thermal structures of the emitting plasma sources (Goryaev et al., submitted to AA). The BIM is an iterative procedure based on Bayes' theorem and is used for the reconstruction of DEM profiles. Using the BIM algorithm we performed various numerical tests and model simulations demonstrating the abilities of our inversion approach for DEM analysis with the AIA/SDO EUV channels.

  1. High-quality seamless DEM generation blending SRTM-1, ASTER GDEM v2 and ICESat/GLAS observations

    NASA Astrophysics Data System (ADS)

    Yue, Linwei; Shen, Huanfeng; Zhang, Liangpei; Zheng, Xianwei; Zhang, Fan; Yuan, Qiangqiang

    2017-01-01

    The absence of a high-quality seamless global digital elevation model (DEM) dataset has been a challenge for the Earth-related research fields. Recently, the 1-arc-second Shuttle Radar Topography Mission (SRTM-1) data have been released globally, covering over 80% of the Earth's land surface (60°N-56°S). However, voids and anomalies still exist in some tiles, which has prevented the SRTM-1 dataset from being directly used without further processing. In this paper, we propose a method to generate a seamless DEM dataset blending SRTM-1, ASTER GDEM v2, and ICESat laser altimetry data. The ASTER GDEM v2 data are used as the elevation source for the SRTM void filling. To get a reliable filling source, ICESat GLAS points are incorporated to enhance the accuracy of the ASTER data within the void regions, using an artificial neural network (ANN) model. After correction, the voids in the SRTM-1 data are filled with the corrected ASTER GDEM values. The triangular irregular network based delta surface fill (DSF) method is then employed to eliminate the vertical bias between them. Finally, an adaptive outlier filter is applied to all the data tiles. The final result is a seamless global DEM dataset. ICESat points collected from 2003 to 2009 were used to validate the effectiveness of the proposed method, and to assess the vertical accuracy of the global DEM products in China. Furthermore, channel networks in the Yangtze River Basin were also extracted for the data assessment.
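
    A much simplified sketch of the fill step is shown below, assuming synthetic arrays in place of the SRTM-1 and corrected ASTER tiles; it applies only a constant delta-surface offset estimated from a ring of valid cells around the void, while the ANN correction with ICESat points and the outlier filtering are omitted.

        # Simplified sketch of the void-filling idea (not the authors' full pipeline):
        # fill SRTM voids with a secondary DEM after removing the local vertical bias,
        # in the spirit of the delta surface fill (DSF) approach.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(3)
        srtm = rng.normal(500, 50, (200, 200))
        aster = srtm + 8.0 + rng.normal(0, 3, srtm.shape)        # biased fill source
        srtm[80:120, 90:140] = np.nan                             # synthetic void

        void = np.isnan(srtm)
        # Border ring around the void: cells adjacent to the void that are valid in SRTM.
        border = ndimage.binary_dilation(void, iterations=3) & ~void
        local_bias = np.nanmean(aster[border] - srtm[border])     # delta-surface offset

        filled = srtm.copy()
        filled[void] = aster[void] - local_bias                    # bias-corrected fill
        print("Estimated local bias (m):", round(local_bias, 2))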

  2. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  3. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  4. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  5. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  6. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  7. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  8. Subjective and Objective Methods of Evaluating Social Programs.

    ERIC Educational Resources Information Center

    Alemi, Farrokh

    1987-01-01

    Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)

  9. What Can Mixed Methods Designs Offer Professional Development Program Evaluators?

    ERIC Educational Resources Information Center

    Giordano, Victoria; Nevin, Ann

    2007-01-01

    In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…

  10. Methods for the evaluation of alternative disaster warning systems

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.

    1977-01-01

    For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.

  11. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide objective metrics and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  12. Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment

    NASA Astrophysics Data System (ADS)

    Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc

    2013-04-01

    fracture persistency in order to enhance the possible contribution of rock bridges on the failure surface development. It is believed that the proposed methodology can bring valuable complementary information for rock slope stability analysis in the presence of complex fractured systems for which a classical "Factor of Safety" is difficult to express. References:
    • Harthong B., Scholtès L. & F.V. Donzé, Strength characterization of rock masses, using a coupled DEM-DFN model, Geophysical Journal International, doi: 10.1111/j.1365-246X.2012.05642.x, 2012.
    • Kozicki J. & Donzé F.V., YADE-OPEN DEM: an open-source software using a discrete element method to simulate granular material, Engineering Computations, 26(7):786-805, 2009.
    • Kozicki J. & Donzé F.V., A new open-source software developed for numerical simulations using discrete modeling methods, Comp. Meth. in Appl. Mech. and Eng., 197:4429-4443, 2008.
    • Poropat, G.V., New methods for mapping the structure of rock masses. In Proceedings, Explo 2001, Hunter Valley, New South Wales, 28-31 October 2001, pp. 253-260, 2001.
    • Scholtès, L. & Donzé F.V., Modelling progressive failure in fractured rock masses using a 3D discrete element method, International Journal of Rock Mechanics and Mining Sciences, 52:18-30, 2012a.
    • Scholtès, L. & Donzé, F.-V., DEM model for soft and hard rocks: role of grain interlocking on strength, J. Mech. Phys. Solids, doi: 10.1016/j.jmps.2012.10.005, 2012b.
    • Sirovision, Commonwealth Scientific and Industrial Research Organisation CSIRO, Siro3D Sirovision 3D Imaging Mapping System Manual Version 4.1, 2010.

  13. Study on evaluation methods for Rayleigh wave dispersion characteristic

    USGS Publications Warehouse

    Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.

    2005-01-01

    The evaluation of the Rayleigh wave dispersion characteristic is the key step in detecting the S-wave velocity structure. By comparing the dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing the S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for testing the Rayleigh wave dispersion characteristic and has great application potential for detecting site S-wave velocity structure.

  14. System and method for evaluating a wire conductor

    SciTech Connect

    Panozzo, Edward; Parish, Harold

    2013-10-22

    A method of evaluating an electrically conductive wire segment having an insulated intermediate portion and non-insulated ends includes passing the insulated portion of the wire segment through an electrically conductive brush. According to the method, an electrical potential is established on the brush by a power source. The method also includes determining a value of electrical current that is conducted through the wire segment by the brush when the potential is established on the brush. The method additionally includes comparing the value of electrical current conducted through the wire segment with a predetermined current value to thereby evaluate the wire segment. A system for evaluating an electrically conductive wire segment is also disclosed.

  15. DOE methods for evaluating environmental and waste management samples.

    SciTech Connect

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  16. The topographic grain concept in DEM-based geomorphometric mapping

    NASA Astrophysics Data System (ADS)

    Józsa, Edina

    2016-04-01

    A common drawback of geomorphological analyses based on digital elevation datasets is the definition of the search window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. The methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search window size is still a limiting factor. A possible way to determine a suitable value for the parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing. By calculating relative relief values with nested neighbourhood matrices it is possible to define a break point where the rate of increase in local relief encountered by the sample is significantly reduced. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements, therefore the threshold needs to be set to a relatively large value. On the contrary, the computational requirements and the size of the study sites determine the upper limit of the value. Therefore the aim was to create a tool that helps determine the optimal parameter for the r.geomorphon tool. As a result it would be possible to produce more objective and consistent maps while achieving the full efficiency of this mapping technique. For the thorough analysis on the
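
    The break-point idea can be sketched as follows; the synthetic surface, window sizes, and the 10% cut-off rule are illustrative assumptions rather than the script's actual parameters.

        # Sketch of the topographic grain idea (an assumed reading of the abstract, not
        # the author's GRASS script): compute local relief (max - min elevation) for
        # nested window sizes and look for the radius where relief growth levels off.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(4)
        x, y = np.meshgrid(np.linspace(0, 8 * np.pi, 300), np.linspace(0, 8 * np.pi, 300))
        dem = 50 * np.sin(x) * np.cos(y) + rng.normal(0, 1, x.shape)   # synthetic ridges/valleys

        radii = np.arange(1, 30)
        mean_relief = []
        for r in radii:
            size = 2 * r + 1
            relief = (ndimage.maximum_filter(dem, size=size)
                      - ndimage.minimum_filter(dem, size=size))
            mean_relief.append(relief.mean())

        # Break point: first radius where the marginal relief gain drops below a fraction
        # of its initial value (a proxy for the characteristic ridgeline-to-channel spacing).
        gains = np.diff(mean_relief)
        break_idx = int(np.argmax(gains < 0.1 * gains[0])) + 1
        print("Topographic grain (cells):", radii[break_idx])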

  17. Spatial Characterization of Landscapes through Multifractal Analysis of DEM

    PubMed Central

    Aguado, P. L.; Del Monte, J. P.; Moratiel, R.; Tarquis, A. M.

    2014-01-01

    Landscape evolution is driven by abiotic, biotic, and anthropic factors. The interactions among these factors and their influence at different scales create a complex dynamic. Landscapes have been shown to exhibit numerous scaling laws, from Horton's laws to more sophisticated scaling of heights in topography and river network topology. This scaling and multiscaling analysis has the potential to characterise the landscape in terms of the statistical signature of the selected measure. The study zone is a matrix obtained from a digital elevation model (DEM) (map 10 × 10 m, and height 1 m) that corresponds to a region that is homogeneous with respect to soil characteristics and climatology, known as "Monte El Pardo", although the water level of a reservoir and the topography play a main role in its organization and evolution. We have investigated whether the multifractal analysis of a DEM shows common features that can be used to reveal the underlying patterns and information associated with the landscape of the DEM mapping, and studied the influence of the water level of the reservoir on the applied analysis. The results show that the use of the multifractal approach with mean absolute gradient data is a useful tool for analysing the topography represented by the DEM. PMID:25177728
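
    As a hedged illustration of gradient-based multifractal analysis, the sketch below estimates generalized dimensions D_q of the normalized absolute-gradient measure by box counting on a synthetic surface; the box sizes, moment orders, and the surface itself are placeholders, not the study's data or exact procedure.

        # Hedged sketch of a moment-based multifractal (generalized dimension) analysis
        # of a DEM's absolute-gradient measure; choices here are illustrative only.
        import numpy as np

        rng = np.random.default_rng(5)
        dem = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), axis=0), axis=1)  # rough surface
        gy, gx = np.gradient(dem)
        measure = np.hypot(gx, gy)
        measure /= measure.sum()                     # normalize to a probability measure

        def partition_sum(mu, box, q):
            n = mu.shape[0] // box
            p = mu[:n * box, :n * box].reshape(n, box, n, box).sum(axis=(1, 3))
            p = p[p > 0]
            return np.sum(p ** q)

        qs = [-2, 0, 2, 4]
        boxes = [2, 4, 8, 16, 32]
        for q in qs:
            logs = [np.log(partition_sum(measure, b, q)) for b in boxes]
            slope = np.polyfit(np.log(boxes), logs, 1)[0]   # tau(q) = (q - 1) * D_q
            dq = slope / (q - 1)
            print(f"q={q:>2}  D_q = {dq:.3f}")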

  18. The emergence of mixing methods in the field of evaluation.

    PubMed

    Greene, Jennifer C

    2015-06-01

    When and how did the contemporary practice of mixing methods in social inquiry get started? What events transpired to catalyze the explosive conceptual development and practical adoption of mixed methods social inquiry over recent decades? How has this development progressed? What "next steps" would be most constructive? These questions are engaged in this personally narrative account of the beginnings of the contemporary mixed methods phenomenon in the field of evaluation from the perspective of a methodologist who was there.

  19. A numerical analysis method for evaluating rod lenses using the Monte Carlo method.

    PubMed

    Yoshida, Shuhei; Horiuchi, Shuma; Ushiyama, Zenta; Yamamoto, Manabu

    2010-12-20

    We propose a numerical analysis method for evaluating GRIN lenses using the Monte Carlo method. Actual measurements of the modulation transfer function (MTF) of a GRIN lens using this method closely match those made by conventional methods. Experimentally, the MTF is measured using a square wave chart and is then calculated based on the distribution of output intensity on the chart. In contrast, the general computational method evaluates the MTF based on a spot diagram produced by an incident point light source; however, the results differ greatly from those of experiments. We therefore developed an evaluation method similar to the experimental system, based on the Monte Carlo method, and verified that it matches the experimental results more closely than the conventional method.

  20. New method for evaluation of tongue-coating status.

    PubMed

    Shimizu, T; Ueda, T; Sakurai, K

    2007-06-01

    The purpose of this study was to determine the viability of Tongue Coating Index, which is a new method for evaluating tongue-coating status. To determine the reliability and reproducibility of our new evaluation criteria (Score 0: Tongue coating not visible; Score 1: Tongue coating thin, papillae of tongue visible; Score 2: Tongue coating very thick, papillae of tongue not visible), 10 observers evaluated 20 photographs of tongues. Each tongue surface was divided into nine sections. Observers evaluated each section according to our new criteria and each score for tongue-coating status was recorded in the pertinent section of the Tongue Coating Record form. They repeated the same evaluation 2 weeks after the first evaluation. The relationship between the scores obtained and number of oral microorganisms was investigated in 50 edentulous patients. Tongue coating was collected from the tongue surface after evaluation of tongue-coating status. The total number of anaerobic bacteria and the number of Candida species were counted from the specimens collected. Interobserver agreement and intraobserver agreement were 0.66 and 0.80 by Cohen's kappa, respectively. No significant difference was observed in the number of Candida species among the three scores. The number of total anaerobic bacteria, however, was significantly different among the scores (P < 0.05). Therefore, we conclude that our method for evaluating tongue-coating status offers new criteria that are superior in reliability and reproducibility, and that also reflect the total number of anaerobic bacteria present on the dorsum of the tongue.
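
    For readers unfamiliar with the agreement statistic used above, the following minimal example computes Cohen's kappa for two hypothetical observers with scikit-learn; the scores are invented for illustration and are not the study's data.

        # Minimal Cohen's kappa example with made-up Tongue Coating Index scores.
        from sklearn.metrics import cohen_kappa_score

        observer_a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
        observer_b = [0, 1, 2, 2, 0, 2, 1, 0, 0, 2]
        print("Cohen's kappa:", cohen_kappa_score(observer_a, observer_b))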

  1. User Experience Evaluation Methods in Product Development (UXEM'09)

    NASA Astrophysics Data System (ADS)

    Roto, Virpi; Väänänen-Vainio-Mattila, Kaisa; Law, Effie; Vermeeren, Arnold

    High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets [1]. Although the term UX originated from industry and is a widely used term also in academia, the tools for managing UX in product development are still inadequate. A prerequisite for designing delightful UX in an industrial setting is to understand both the requirements tied to the pragmatic level of functionality and interaction and the requirements pertaining to the hedonic level of personal human needs, which motivate product use [2]. Understanding these requirements helps managers set UX targets for product development. The next phase in a good user-centered design process is to iteratively design and evaluate prototypes [3]. Evaluation is critical for systematically improving UX. In many approaches to UX, evaluation basically needs to be postponed until the product is fully or at least almost fully functional. However, in an industrial setting, it is very expensive to find the UX failures only at this phase of product development. Thus, product development managers and developers have a strong need to conduct UX evaluation as early as possible, well before all the parts affecting the holistic experience are available. Different types of products require evaluation on different granularity and maturity levels of a prototype. For example, due to its multi-user characteristic, a community service or an enterprise resource planning system requires a broader scope of UX evaluation than a microwave oven or a word processor that is meant for a single user at a time. Before systematic UX evaluation can be taken into practice, practical, lightweight UX evaluation methods suitable for different types of products and different phases of product readiness are needed. A considerable amount of UX research is still about the conceptual frameworks and models for user experience [4]. Besides, applying existing usability evaluation methods (UEMs) without

  2. On urban road traffic state evaluation index system and method

    NASA Astrophysics Data System (ADS)

    Su, Fei; Dong, Honghui; Jia, Limin; Sun, Xuan

    2017-01-01

    Traffic state evaluation is a basic and critical task in research on road traffic congestion. It can provide basic data support for improvement measures and information release in traffic management and service. The aim of this research is to obtain a comprehensive value that describes the traffic state accurately, based on the evaluation index system. In this paper, this is carried out using the fuzzy c-means (FCM) algorithm and the fuzzy entropy weight method. In this framework, traffic flow was classified into six different states to determine the fuzzy range of the indices using the improved FCM clustering analysis. In addition, the fuzzy entropy weight method is proposed to compute the evaluation result of the traffic state for sections, roads, and the road network, respectively. Experiments based on traffic information from a subset of Beijing's road network prove that the findings of the traffic evaluation are in accordance with the actual situation and people's sense of the traffic state.
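
    The entropy-weight aggregation step can be sketched briefly; the indicator matrix below is hypothetical and the FCM classification of states is omitted, so this is only an illustration of how such weights might be derived, not the paper's implementation.

        # Hedged sketch of the entropy-weight aggregation step with made-up indices.
        import numpy as np

        # Rows: road sections; columns: normalized indices in [0, 1]
        # (e.g. speed ratio, occupancy, queue length) -- hypothetical data.
        X = np.array([[0.8, 0.2, 0.1],
                      [0.4, 0.6, 0.5],
                      [0.1, 0.9, 0.8]])

        # Entropy of each index column (small epsilon avoids log(0)).
        P = X / X.sum(axis=0)
        eps = 1e-12
        entropy = -np.sum(P * np.log(P + eps), axis=0) / np.log(X.shape[0])

        # Entropy weights: indices with lower entropy (more discriminating) get more weight.
        weights = (1 - entropy) / np.sum(1 - entropy)
        scores = X @ weights            # comprehensive traffic-state value per section
        print("Index weights:", weights, "Section scores:", scores)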

  3. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

  4. [An Introduction to Methods for Evaluating Health Care Technology].

    PubMed

    Lee, Ting-Ting

    2015-06-01

    The rapid and continual advance of healthcare technology makes ensuring that this technology is used effectively to achieve its original goals a critical issue. This paper presents three methods that may be applied by healthcare professionals in the evaluation of healthcare technology. These methods include: the perception/experiences of users, user work-pattern changes, and chart review or data mining. The first method includes two categories: using interviews to explore the user experience and using theory-based questionnaire surveys. The second method applies work sampling to observe the work pattern changes of users. The last method conducts chart reviews or data mining to analyze the designated variables. In conclusion, while evaluative feedback may be used to improve the design and development of healthcare technology applications, the informatics competency and informatics literacy of users may be further explored in future research.

  5. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods in a touch keyboard smartphone using analytic network process (ANP). Evaluation of text entry methods in literature mainly considers speed and accuracy. This study presents an alternative means for selecting text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model of five text entry methods. The decision problem is flexible enough to reflect interdependencies of decision elements that are necessary in describing real-life conditions. Results showed that QWERTY method is more preferred than other text entry methods while arrangement of keys is the most preferred criterion in characterizing a sound method. Sensitivity analysis using simulation of normally distributed random numbers under fairly large perturbation reported the foregoing results reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach in the preference evaluation of text entry methods.
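
    As a rough illustration of the ANP machinery assumed here (not the study's actual model), the snippet below raises a small, made-up column-stochastic weighted supermatrix to a high power to approximate the limit priorities of the decision elements.

        # Illustrative ANP aggregation step: power iteration of a weighted supermatrix.
        # The 4x4 matrix is a fabricated example, not the text-entry study's model.
        import numpy as np

        W = np.array([[0.00, 0.40, 0.30, 0.25],
                      [0.50, 0.00, 0.30, 0.25],
                      [0.25, 0.30, 0.00, 0.50],
                      [0.25, 0.30, 0.40, 0.00]])
        assert np.allclose(W.sum(axis=0), 1.0)     # columns must sum to 1

        limit = np.linalg.matrix_power(W, 100)     # approximate limit supermatrix
        priorities = limit[:, 0]                   # any column gives the global priorities
        print("Global priorities:", np.round(priorities, 3))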

  6. Development of characteristic evaluation method on FR cycle system

    SciTech Connect

    Shinoda, Y.; Shiotani, H.; Hirao, K.

    2002-07-01

    The present report is intended to explain some results of the characteristic evaluation work on various FR cycle system concepts in the first phase of the JNC's 'Feasibility Study on Commercialized Fast Reactor Cycle System' (from 1999 to March 2001). The development of the evaluation method is carried out for six criteria, namely Economics, Effective utilization of uranium resources, Reduction of environmental impact, Safety, Proliferation resistance, and Technological feasibility. (authors)

  7. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free surface vortices is very important to establish an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate their occurrence accurately. In other words, the onset condition of the GE phenomena in the JSFR is not easily predicted based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and then GE-related parameters, e.g. gas core (GC) length, are calculated using the Burgers vortex model. This procedure is efficient for evaluating the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to the lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. The improved GE evaluation method with the turbulent viscosity model is then validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths calculated by the improved method are shorter than those from the original method and give better agreement with the experimental data.

  8. Evaluation methods for association rules in spatial knowlegde base

    NASA Astrophysics Data System (ADS)

    Niu, X.; Ji, X.

    2014-04-01

    The association rule is an important model in data mining. It describes the relationship between predicates in transactions and makes the expression of knowledge hidden in data more specific and clear. With the development and application of remote sensing technology and automatic data collection tools in recent decades, tremendous amounts of spatial and non-spatial data have been collected and stored in large spatial databases, so association rule mining from spatial databases has become a significant research area with extensive applications. How to find effective, reliable and interesting association rules from vast amounts of information to help people analyze and make decisions has become a significant issue. Evaluation methods measure spatial association rules against evaluation criteria. On the basis of analyzing the existing evaluation criteria, this paper improves the novelty evaluation method, builds a spatial knowledge base, and proposes a new evaluation process based on the support-confidence evaluation system. Finally, the feasibility of the new evaluation process is validated by an experiment with real-world geographical spatial data.
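
    The support-confidence evaluation system referred to above can be illustrated with a tiny example; the spatial predicates and transactions below are invented for demonstration.

        # Minimal support/confidence illustration with hypothetical spatial predicates.
        transactions = [
            {"near_river", "flat_terrain", "farmland"},
            {"near_river", "flat_terrain", "urban"},
            {"near_river", "farmland"},
            {"steep_slope", "forest"},
            {"near_river", "flat_terrain", "farmland"},
        ]

        def support(itemset):
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(antecedent, consequent):
            return support(antecedent | consequent) / support(antecedent)

        rule_a, rule_c = {"near_river", "flat_terrain"}, {"farmland"}
        print("support =", support(rule_a | rule_c),
              "confidence =", confidence(rule_a, rule_c))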

  9. Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation

    DTIC Science & Technology

    2015-01-22

    MITRE Technical Report MTR 140443 R2, "Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation", January 2015. Dates covered: 00-00-2015 to 00-00-2015. Distribution/Availability Statement: Approved for public release; distribution unlimited. Abstract: Text extraction

  10. Performance evaluation of BPM system in SSRF using PCA method

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Chu; Leng, Yong-Bin; Yan, Ying-Bing; Yuan, Ren-Xian; Lai, Long-Wei

    2014-07-01

    The beam position monitor (BPM) system is of the utmost importance in a light source. The capability of the BPM system depends on its resolution. The traditional approach of taking the standard deviation of the raw data merely gives an upper limit on the resolution. Principal component analysis (PCA) has been introduced into accelerator physics and can be used to remove the actual beam signals. Beam-related information was extracted before the evaluation of the BPM performance. A series of studies was carried out at the Shanghai Synchrotron Radiation Facility (SSRF), and PCA proved to be an effective and robust method for the performance evaluation of our BPM system.
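
    A hedged sketch of the PCA-based resolution estimate follows, using synthetic turn-by-turn readings rather than SSRF data; the signal model and the number of retained components are assumptions.

        # Sketch of the PCA idea described above: subtract the dominant (correlated,
        # beam-motion) components from turn-by-turn BPM readings and take the residual
        # spread as the resolution estimate. Data are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(7)
        n_turns, n_bpms, noise_um = 2000, 40, 0.5
        common_motion = np.sin(2 * np.pi * 0.31 * np.arange(n_turns))   # shared betatron-like signal
        response = rng.normal(1.0, 0.2, n_bpms)                          # per-BPM amplitude
        readings = np.outer(common_motion, response) + rng.normal(0, noise_um, (n_turns, n_bpms))

        # PCA via SVD of the mean-subtracted data matrix (turns x BPMs).
        X = readings - readings.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        k = 2                                                            # dominant "physics" modes
        X_beam = (U[:, :k] * S[:k]) @ Vt[:k]                             # reconstructed beam motion
        residual = X - X_beam
        resolution = residual.std(axis=0)                                # per-BPM noise estimate

        print("Naive std (includes beam motion):", X.std(axis=0).mean().round(3))
        print("PCA-based resolution estimate   :", resolution.mean().round(3))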

  11. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classifying. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method that is based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  12. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled using a normal sliding-friction law. For this reason, several studies and theories, derived from field observations, physical reasoning and laboratory experiments, attempt to explain their high mobility. In order to investigate in more depth some of the processes invoked by these theories, simulations have been run with a new numerical tool called Y-GUI, based on the combined Finite Element/Discrete Element Method (FEM/DEM). FEM/DEM is a numerical technique developed by Munjiza et al. (1995) in which Discrete Element Method (DEM) algorithms are used to model the interaction between different solids, while Finite Element Method (FEM) principles are used to analyze their deformability, which also makes it possible to explicitly simulate sudden loss of cohesion in the material (i.e. brittle failure). In particular, numerical tests have been run that were inspired by the small-scale experiments of Manzella and Labiouse (2013). They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements that are allowed to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into an ordered geometrical structure before failure or chaotically arranged as a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied and highlighted.

  13. Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts

    NASA Astrophysics Data System (ADS)

    Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan

    2015-04-01

    bathymetric and topographic data, …) were gathered. Consequently, the datasets were first assessed internally for both quality and accuracy, and then externally against the other datasets to ensure consistency and gradual topographic/bathymetric transitioning along their limits. The heterogeneous ages of the input data also stress the importance of taking into account the temporal variability of bathymetric features, especially in active areas (sandbanks, estuaries, channels). Locally, gaps between marine (hydrographic surveys) and terrestrial (topographic LIDAR) data have required the introduction of new interpolation methods and tools. Through these activities, the goal is to improve the production line and to enhance the tools and procedures used for processing, validating and qualifying bathymetric data, for data collection, and for automating the processing and integration steps needed to build improved merged bathymetric and topographic DEMs. This work is supported by a French ANR program in the frame of "Investissements d'Avenir", under the grant ANR-11-RSNR-00023-01.

  14. DATA SYNTHESIS AND METHOD EVALUATION FOR BRAIN IMAGING GENETICS.

    PubMed

    Sheng, Jinhua; Kim, Sungeun; Yan, Jingwen; Moore, Jason; Saykin, Andrew; Shen, Li

    2014-05-01

    Brain imaging genetics is an emergent research field in which the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. We present initial efforts on evaluating a few SCCA methods for brain imaging genetics. This includes a data synthesis method to create realistic imaging genetics data with known SNP-QT associations, application of three SCCA algorithms to the synthetic data, and a comparative study of their performances. Our empirical results suggest that approximating the covariance structure using an identity or diagonal matrix, an approach used in these SCCA algorithms, could limit their capability to identify the underlying imaging genetics associations. An interesting future direction is to develop enhanced SCCA methods that effectively take into account the covariance structures in the imaging genetics data.
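
    A minimal sketch of sparse CCA under the identity-covariance approximation discussed above (alternating soft-thresholded power iterations on the cross-product matrix); the penalty values and function names are illustrative assumptions, and this is not any of the three specific SCCA algorithms compared in the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scca_identity(X, Y, lam_u=0.1, lam_v=0.1, n_iter=100):
    """Sparse CCA under the identity-covariance approximation: alternating
    soft-thresholded power iterations on the cross-product X^T Y.
    Columns of X (SNPs) and Y (imaging QTs) are assumed standardized."""
    C = X.T @ Y
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
    u = np.zeros(C.shape[0])
    for _ in range(n_iter):
        u = soft_threshold(C @ v, lam_u)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft_threshold(C.T @ u, lam_v)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return u, v   # sparse SNP and QT weight vectors
```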

  15. Participatory Training Evaluation Method (PATEM) as a Collaborative Evaluation Capacity Building Strategy

    ERIC Educational Resources Information Center

    Kuzmin, Alexey

    2012-01-01

    This article describes the Participatory Training Evaluation Method (PATEM) for measuring participants' reaction to training. PATEM provides rich information; allows evaluation findings to be documented; becomes an organic part of the training that helps participants process their experience individually and as a group; makes sense to participants; is an…

  16. Force Evaluation in the Lattice Boltzmann Method Involving Curved Geometry

    NASA Technical Reports Server (NTRS)

    Mei, Renwei; Yu, Dazhi; Shyy, Wei; Luo, Li-Shi; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The present work investigates two approaches for force evaluation in the lattice Boltzmann equation: the momentum- exchange method and the stress-integration method on the surface of a body. The boundary condition for the particle distribution functions on curved geometries is handled with second order accuracy based on our recent works. The stress-integration method is computationally laborious for two-dimensional flows and in general difficult to implement for three-dimensional flows, while the momentum-exchange method is reliable, accurate, and easy to implement for both two-dimensional and three-dimensional flows. Several test cases are selected to evaluate the present methods, including: (i) two-dimensional pressure-driven channel flow; (ii) two-dimensional uniform flow past a column of cylinders; (iii) two-dimensional flow past a cylinder asymmetrically placed in a channel (with vortex shedding); (iv) three-dimensional pressure-driven flow in a circular pipe; and (v) three-dimensional flow past a sphere. The drag evaluated by using the momentum-exchange method agrees well with the exact or other published results.
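
    For a stationary wall treated with simple half-way bounce-back, the per-link momentum exchange reduces to 2 f_alpha e_alpha, which the sketch below sums over all fluid-solid links on a D2Q9 lattice. This shows only the basic idea; the second-order curved-boundary treatment used in the paper is not reproduced, and the array layout and function name are assumptions.

```python
import numpy as np

# D2Q9 lattice velocities
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def momentum_exchange_force(f_post_collision, solid):
    """Force on a resting obstacle from the momentum-exchange method with simple
    half-way bounce-back: each fluid-solid link contributes 2 * f_alpha * e_alpha.
    f_post_collision: (9, nx, ny) distributions; solid: (nx, ny) boolean mask."""
    force = np.zeros(2)
    for alpha in range(1, 9):
        dx, dy = c[alpha]
        # neighbor reached along direction alpha (periodic wrap for simplicity)
        neighbor_is_solid = np.roll(np.roll(solid, -dx, axis=0), -dy, axis=1)
        links = (~solid) & neighbor_is_solid
        force += 2.0 * f_post_collision[alpha][links].sum() * c[alpha]
    return force
```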

  17. Survey research methods in evaluation and case-control studies.

    PubMed

    Kalton, Graham; Piesse, Andrea

    2007-04-15

    Survey research methods are widely used in two types of analytic studies: evaluation studies that measure the effects of interventions; and population-based case-control studies that investigate the effects of various risk factors on the presence of disease. This paper provides a broad overview of some design and analysis issues related to such studies, illustrated with examples. The lack of random assignment to treatment and control groups in many evaluation studies makes controlling for confounders critically important. Confounder control can be achieved by matching in the design and by various alternative methods in the analysis. One popular analytic method of controlling for confounders is propensity scoring, which bears a close resemblance to survey weighting. The use of population-based controls has become common in case-control studies. For reasons of cost, population-based controls are often identified by telephone surveys using random digit dialling (RDD) sampling methods. However, RDD surveys are now experiencing serious problems with response rates. A recent alternative approach is to select controls from frames such as driver license lists that contain valuable demographic information for use in matching. Methods of analysis developed in the survey sampling literature are applicable, at least to some degree, in the analyses of evaluation and population-based case-control studies. In particular, the effects of complex sample designs can be taken into account using survey sampling variance estimation methods. Several survey analysis software packages are available for carrying out the computations.
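
    As a small illustration of the propensity-scoring idea mentioned above (and its resemblance to survey weighting), the sketch below fits a logistic propensity model and forms inverse-probability-of-treatment weights. It is a generic textbook estimator under assumed array inputs, not the survey-variance machinery discussed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_effect(confounders, treated, outcome):
    """Difference in weighted outcome means using inverse-probability-of-treatment
    weights; the weights play a role analogous to survey weights.
    confounders: (n, p) array; treated: 0/1 array; outcome: (n,) array."""
    ps = LogisticRegression(max_iter=1000).fit(confounders, treated).predict_proba(confounders)[:, 1]
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    y1 = np.average(outcome[treated == 1], weights=w[treated == 1])
    y0 = np.average(outcome[treated == 0], weights=w[treated == 0])
    return y1 - y0
```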

  18. DOE methods for evaluating environmental and waste management samples

    SciTech Connect

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  19. Simulation of a tablet coating process at different scales using DEM.

    PubMed

    Boehling, P; Toschkoff, G; Just, S; Knop, K; Kleinebudde, P; Funke, A; Rehbaum, H; Rajniak, P; Khinast, J G

    2016-10-10

    Spray coating of tablets is an important unit operation in the pharmaceutical industry and is mainly used for modified release, enteric protection, better appearance and brand recognition. It can also be used to apply an additional active pharmaceutical ingredient to the tablet core. Scale-up of such a process is an important step in commercialization. However, scale-up is not trivial, and frequently the required coating quality cannot be reached at manufacturing scales. Thus, we propose a method in which laboratory experiments are carried out, yet scale-up is done via computational methods, i.e., by extrapolating results to larger scales. In recent years, the Discrete Element Method (DEM) has been widely used to simulate tablet behavior in laboratory-scale drum coaters. Due to increasing computational power and more sophisticated DEM algorithms, it has become possible to simulate millions of particles on regular PCs and to model industrial-scale tablet coating devices. In this work, simulations were performed at the laboratory, pilot and industrial scales, and DEM was used to study how different scale-up rules influence the bed behavior at larger scales. The material parameters of the tablets were measured in the laboratory, and a glued-sphere approach was applied to model the tablet shape. The results include a vast amount of qualitative and quantitative data at the different scales. In conclusion, the evolution of the inter-tablet coating variation for the different scales and process parameters is presented. The results suggest that keeping the Froude number constant during scale-up leads to faster processes, as the cycle time is shorter and the spray residence time is more uniform compared with keeping the circumferential velocity constant.
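
    The two scale-up rules compared above can be made concrete with the drum Froude number Fr = omega^2 R / g: holding Fr constant scales the speed with the square root of the radius ratio, while holding the circumferential velocity omega R constant scales it linearly. The sketch below is a back-of-the-envelope illustration with made-up dimensions, not the simulated coater geometries from the study.

```python
import math

def rpm_constant_froude(rpm_small, r_small, r_large):
    """Drum speed at the larger scale that keeps Fr = omega^2 * R / g constant."""
    return rpm_small * math.sqrt(r_small / r_large)

def rpm_constant_tip_speed(rpm_small, r_small, r_large):
    """Drum speed that keeps the circumferential velocity omega * R constant."""
    return rpm_small * r_small / r_large

# e.g. a 0.15 m radius lab drum at 20 rpm scaled to a 0.60 m radius coater
print(rpm_constant_froude(20, 0.15, 0.60), rpm_constant_tip_speed(20, 0.15, 0.60))  # 10.0, 5.0
```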

  20. A Safety Index and Method for Flightdeck Evaluation

    NASA Technical Reports Server (NTRS)

    Latorella, Kara A.

    2000-01-01

    If our goal is to improve safety through machine, interface, and training design, then we must define a metric of flightdeck safety that is usable in the design process. Current measures associated with our notions of "good" pilot performance and ultimate safety of flightdeck performance fail to provide an adequate index of safe flightdeck performance for design evaluation purposes. The goal of this research effort is to devise a safety index and method that allows us to evaluate flightdeck performance holistically and in a naturalistic experiment. This paper uses Reason's model of accident causation (1990) as a basis for measuring safety, and proposes a relational database system and method for 1) defining a safety index of flightdeck performance, and 2) evaluating the "safety" afforded by flightdeck performance for the purpose of design iteration. Methodological considerations, limitations, and benefits are discussed as well as extensions to this work.

  1. Statistical methods for evaluating the attainment of cleanup standards

    SciTech Connect

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.

  2. Methods of Evaluating Child Welfare in Indian Country: An Illustration

    ERIC Educational Resources Information Center

    Fox, Kathleen; Cross, Terry L.; John, Laura; Carter, Patricia; Pavkov, Thomas; Wang, Ching-Tung; Diaz, Javier

    2011-01-01

    The poor quality and quantity of data collected in tribal communities today reflects a lack of true community participation and commitment. This is especially problematic for evaluation studies, in which the needs and desires of the community should be the central focus. This challenge can be met by emphasizing indigenous methods and voice. The…

  3. Evaluation of methods for nondestructive testing of brazed joints

    NASA Technical Reports Server (NTRS)

    Kanno, A.

    1968-01-01

    Evaluation of nondestructive methods of testing brazed joints reveals that ultrasonic testing is effective in the detection of nonbonds in diffusion bonded samples. Radiography provides excellent resolutions of void or inclusion defects, and the neutron radiographic technique shows particular advantage for brazing materials containing cadmium.

  4. METHODS FOR EVALUATING THE SUSTAINABILITY OF GREEN PROCESSES

    EPA Science Inventory

    Methods for Evaluating the Sustainability of Green Processes

    By Raymond L. Smith and Michael A. Gonzalez
    U.S. Environmental Protection Agency
    Office of Research and Development
    26 W. Martin Luther King Dr.
    Cincinnati, OH 45268 USA

    Theme: New Challenges...

  5. EVALUATION OF TWO METHODS FOR PREDICTION OF BIOACCUMULATION FACTORS

    EPA Science Inventory

    Two methods for deriving bioaccumulation factors (BAFs) used by the U.S. Environmental Protection Agency (EPA) in development of water quality criteria were evaluated using polychlorinated biphenyls (PCB) data from the Hudson River and Green Bay ecosystems. Greater than 90% of th...

  6. Endoscopic Evaluation of Adenoids: Reproducibility Analysis of Current Methods

    PubMed Central

    Hermann, Juliana Sato; Sallum, Ana Carolina; Pignatari, Shirley Shizue Nagata

    2013-01-01

    Objectives To investigate intra- and interexaminer reproducibility of the usual adenoid hypertrophy assessment methods based on nasofiberendoscopic examination. Methods Forty children of both sexes, ages ranging between 4 and 14 years, presenting with nasal obstruction and oral breathing suspected to be caused by adenoid hypertrophy, were enrolled in this study. Patients were evaluated by nasofiberendoscopy, and the records were referred to and evaluated by two experienced otolaryngologists. Examiners analysed the records according to different evaluation methods, i.e., estimated and measured percentage of choanal occlusion, as well as subjective and objective classificatory systems of adenoid hypertrophy. Results The data disclosed excellent intraexaminer reproducibility for both estimated and measured choanal occlusion. Analysis revealed lower reproducibility rates for estimated than for measured choanal occlusion. Measured choanal occlusion also showed less agreement between evaluations made through the right and left sides of the nasal cavity. Alternatively, intra- and interexaminer reliability analysis revealed higher agreement for the subjective than for the objective classificatory system. In addition, the subjective method demonstrated higher agreement than the objective classificatory system when opposite sides were compared. Conclusion Our results suggest that measured is superior to estimated percentage of choanal occlusion, particularly if employed bilaterally, diminishing the lack of agreement between sides. When adenoid categorization is used instead, the authors recommend the subjective rather than the objective classificatory system of adenoid hypertrophy. PMID:23526477

  7. Evaluation of Alternative Difference-in-Differences Methods

    ERIC Educational Resources Information Center

    Yu, Bing

    2013-01-01

    Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
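
    A minimal sketch of the standard two-group, two-period DID estimator that the alternatives above are compared against: regress the outcome on group, period, and their interaction, and read the policy effect off the interaction coefficient. The function name and the use of statsmodels are assumptions; the alternative DID estimators evaluated in the dissertation are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

def did_estimate(y, treated_group, post_period):
    """Two-group/two-period difference-in-differences: the coefficient on the
    treated x post interaction estimates the policy effect.
    All arguments are 1-D numpy arrays of equal length."""
    X = np.column_stack([treated_group, post_period, treated_group * post_period])
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    return fit.params[-1], fit.bse[-1]   # effect estimate and its standard error
```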

  8. Development of Criteria and Methods for Evaluating Trainer Aircraft Effectiveness.

    ERIC Educational Resources Information Center

    KUSEWITT, J.B.

    The purpose of this study was to develop a method for determining objective measures of trainer aircraft effectiveness in order to evaluate program alternatives for training pilots for fleet fighter and attack-type aircraft. The training syllabus was based on average student ability. The basic problem was to establish quantitative time-difficulty…

  9. Administrator Evaluation: Concepts, Methods, Cases in Higher Education.

    ERIC Educational Resources Information Center

    Farmer, Charles H.

    Designed for faculty and administration in higher education, the book describes concepts, methods, and case studies in the field of administrative assessment. The first section explores issues and perspectives in three chapters authored by Charles H. Farmer: "Why Evaluate Administrators?", "How Can Administrators be…

  10. Program Evaluation of the Sustainability of Teaching Methods

    ERIC Educational Resources Information Center

    Bray, Cathy

    2008-01-01

    This paper suggests a particular question that higher education researchers might ask: "Do educational programs use teaching methods that are environmentally, socially and economically sustainable?" It further proposes that program evaluation research (PER) can be used to answer the question. Consideration is given to: a) program…

  11. The Diffusion of Evaluation Methods among Public Relations Practitioners.

    ERIC Educational Resources Information Center

    Dozier, David M.

    A study explored the relationships between public relations practitioners' organizational roles and the type of evaluation methods they used on the job. Based on factor analysis of role data obtained from an earlier study, four organizational roles were defined and ranked: communication manager, media relations specialist, communication liaison,…

  12. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
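
    The abstract is truncated, but the HORRAT ratio it defines is commonly computed against the Horwitz-predicted relative standard deviation, PRSD_R(%) = 2 C^(-0.1505) with C a dimensionless mass fraction. The sketch below assumes that common form; the exact variant used in the EPA study may differ.

```python
def horrat(found_rsd_percent, concentration_mass_fraction):
    """HORRAT = found among-laboratories RSD / Horwitz-predicted RSD,
    with PRSD_R(%) = 2 * C**(-0.1505) and C a dimensionless mass fraction."""
    predicted_rsd_percent = 2.0 * concentration_mass_fraction ** (-0.1505)
    return found_rsd_percent / predicted_rsd_percent

# e.g. an analyte at 1 mg/kg (C = 1e-6) with a found among-lab RSD of 25 %
print(horrat(25.0, 1e-6))   # values near 1 indicate typical method performance
```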

  13. Holistic Evaluation of Lightweight Operating Systems using the PERCU Method

    SciTech Connect

    Kramer, William T.C.; He, Yun; Carter, Jonathan; Glenski, Joseph; Rippe, Lynn; Cardo, Nicholas

    2008-05-01

    The scale of Leadership Class Systems presents unique challenges to the features and performance of operating system services. This paper reports results of comprehensive evaluations of two Light Weight Operating Systems (LWOS), Cray's Catamount Virtual Node (CVN) and Linux Environment (CLE) operating systems, on the exact same large-scale hardware. The evaluation was carried out over a 5-month period on NERSC's 19,480 core Cray XT-4, Franklin, using a comprehensive evaluation method that spans Performance, Effectiveness, Reliability, Consistency and Usability criteria for all major subsystems and features. The paper presents the results of the comparison between CVN and CLE, evaluates their relative strengths, and reports observations regarding the world's largest Cray XT-4 as well.

  14. A Rapid Usability Evaluation (RUE) Method for Health Information Technology.

    PubMed

    Russ, Alissa L; Baker, Darrell A; Fahner, W Jeffrey; Milligan, Bryce S; Cox, Leeann; Hagg, Heather K; Saleem, Jason J

    2010-11-13

    Usability testing can help generate design ideas to enhance the quality and safety of health information technology. Despite these potential benefits, few healthcare organizations conduct systematic usability testing prior to software implementation. We used a Rapid Usability Evaluation (RUE) method to apply usability testing to software development at a major VA Medical Center. We describe the development of the RUE method, provide two examples of how it was successfully applied, and discuss key insights gained from this work. Clinical informaticists with limited usability training were able to apply RUE to improve software evaluation and elected to continue to use this technique. RUE methods are relatively simple, do not require advanced training or usability software, and should be easy to adopt. Other healthcare organizations may be able to implement RUE to improve software effectiveness, efficiency, and safety.

  15. Evaluating maximum likelihood estimation methods to determine the Hurst coefficient

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
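
    S-MLE itself is S-PLUS code that is not shown here, but a likelihood-based estimate of H for an fd (ARFIMA(0,d,0)) process can be sketched with a profiled Whittle objective over the periodogram, as below. This illustrates the general likelihood approach under the assumption of a pure fd spectrum; it is not the evaluated S-MLE implementation or its bias correction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_hurst(x):
    """Profiled Whittle estimate of the fractional-differencing parameter d for an
    ARFIMA(0,d,0) ("fd") process; the Hurst coefficient is H = d + 0.5."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2.0 * np.pi * j / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2.0 * np.pi * n)

    def objective(d):
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)  # fd spectral shape (up to scale)
        return np.log(np.mean(periodogram / g)) + np.mean(np.log(g))

    d_hat = minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded").x
    return d_hat + 0.5
```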

  16. Evaluation of Low-Tech Indoor Remediation Methods ...

    EPA Pesticide Factsheets

    This study identified, collected, evaluated, and summarized available articles, reports, guidance documents, and other pertinent information related to common housekeeping activities within the United States. This resulted in a summary compendium including relevant information about multiple low-tech cleaning methods from the literature search results. Through discussion and prioritization, an EPA project team, made up of several EPA scientists and emergency responders, focused the information into a list of 14 housekeeping activities for decontamination evaluation testing. These types of activities are collectively referred to as "low-tech" remediation methods because of the comparatively simple tools, equipment, and operations involved. Similarly, eight common household surfaces were chosen that were contaminated using three different contamination conditions. Thirty-three combinations of methods and surfaces were chosen for testing under the three contamination conditions, for a total of 99 tests.

  17. The Shuttle Radar Topography Mission: A Global DEM

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Kobrick, Mike

    2000-01-01

    Digital topographic data are critical for a variety of civilian, commercial, and military applications. Scientists use Digital Elevation Models (DEMs) to map drainage patterns and ecosystems, and to monitor land surface changes over time. The mountain-building effects of tectonics and the climatic effects of erosion can also be modeled with DEMs. The data's military applications include mission planning and rehearsal, and modeling and simulation. Commercial applications include determining locations for cellular phone towers, enhanced ground proximity warning systems for aircraft, and improved maps for backpackers. The Shuttle Radar Topography Mission (SRTM) (Fig. 1) is a cooperative project between NASA and the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense. The mission is designed to use a single-pass radar interferometer to produce a digital elevation model of the Earth's land surface between about 60 degrees north and south latitude. The DEM will have 30 m pixel spacing and about 15 m vertical errors.

  18. Evaluation of VOC emission measurement methods for paint spray booths.

    PubMed

    Eklund, B M; Nelson, T P

    1995-03-01

    Interest in regulations to control solvent emissions from automotive painting systems is increasing, especially in ozone nonattainment areas. Therefore, an accurate measurement method for VOC emissions from paint spray booths used in the automotive industry is needed to ascertain the efficiency of the spray booth capture and the total emissions. This paper presents the results of a laboratory study evaluating potential VOC sampling and analytical methods used in estimating paint spray booth emissions, and discusses these results relative to other published data. Eight test methods were selected for evaluation. The accuracy of each sampling and analytical method was determined using test atmospheres of known concentration and composition that closely matched the actual exhaust air from paint spray booths. The solvent mixture to generate the test atmospheres contained a large proportion of polar, oxygenated hydrocarbons such as ketones and alcohols. A series of identical tests was performed for each sampling/analytical method with each test atmosphere to assess the precision of the methods. The study identified significant differences among the test methods in terms of accuracy, precision, cost, and complexity.

  19. DEM Generation with WorldView-2 Images

    NASA Astrophysics Data System (ADS)

    Büyüksalih, G.; Baz, I.; Alkan, M.; Jacobsen, K.

    2012-07-01

    For planning purposes, a 42 km coastline of the Black Sea, starting at the Bosporus and running westwards, with a width of approximately 5 km, was imaged by WorldView-2. Three stereo scenes were oriented at first by 3D affine transformation and later by a bias-corrected RPC solution. The result is nearly the same, but it is limited by the identification of the control points in the images. Nevertheless, after blunder elimination by data snooping, root mean square discrepancies below 1 pixel were reached. The root mean square discrepancy at control point heights reached 0.5 m up to 1.3 m, with base-to-height ratios between 1:1.26 and 1:1.80. Digital surface models (DSMs) with 4 m spacing were generated by least squares matching with region growing, supported by image pyramids. A high percentage of the mountainous area is covered by forest, requiring the approximation based on image pyramids; in the forest area, approximation just by region growing leads to larger gaps in the DSM. Owing to the good image quality of WorldView-2, the correlation coefficients reached by least squares matching are high, and even in most forest areas a satisfying density of accepted points was reached. Two stereo models have an overlapping area of 1.6 km by 6.7 km, allowing an accuracy evaluation. Small, but nevertheless significant, differences in scene orientation were eliminated by a least squares shift of the two overlapping height models onto each other. The root mean square difference between the two independent DSMs is 1.06 m, or, as a function of terrain inclination, 0.74 m + 0.55 m × tan(slope). The terrain inclination is on average 7°, with 12% exceeding 17°. The frequency distribution of height discrepancies is not far from a normal distribution, but, as usual, larger discrepancies occur more often than a normal distribution would predict. This can also be seen from the normalized median absolute deviation (NMAD), related to the 68% probability level, of 0.83 m

  20. BOREAS Regional DEM in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Verdin, Kristine; Hall, Forrest G. (Editor)

    2000-01-01

    This data set is based on the GTOPO30 Digital Elevation Model (DEM) produced by the United States Geological Survey EROS Data Center (USGS EDC). The BOReal Ecosystem-Atmosphere Study (BOREAS) region (1,000 km x 1,000 km) was extracted from the GTOPO30 data and reprojected by BOREAS staff into the Albers Equal-Area Conic (AEAC) projection. The pixel size of these data is 1 km. The data are stored in binary, image format files.

  1. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in labor-intensive manufacturing industries because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give operators feedback on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods, namely Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at an SME food manufacturing company in Selangor. The study found that both AHP and FAHP identified the "outcome" criterion as the most important. The results of the operator performance evaluation showed that the same operator is ranked first by all six methods.
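
    Of the six methods listed, TOPSIS is compact enough to sketch: normalize the decision matrix, weight it, and rank operators by closeness to the ideal solution. The criteria weights and score matrix from the study are not reproduced; the inputs below are placeholders.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) with TOPSIS.
    scores: (n_operators, n_criteria) decision matrix;
    weights: criterion weights summing to 1;
    benefit: boolean array, True where larger criterion values are better."""
    norm = scores / np.linalg.norm(scores, axis=0)          # vector normalization
    v = norm * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti_ideal, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness                # best operator first

# placeholder data: 4 operators scored on 3 benefit criteria
# ranking, c = topsis(np.random.rand(4, 3), np.array([0.5, 0.3, 0.2]), np.array([True, True, True]))
```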

  2. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  3. A new method to evaluate human-robot system performance.

    PubMed

    Rodriguez, G; Weisbin, C R

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  4. Development of evaluation method for software hazard identification techniques

    SciTech Connect

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-07-01

    This research evaluated the currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various combinations of software hazard identification techniques for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive, due to the difficulty of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for understanding the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages lie in completeness, complexity, and implementation cost. This evaluation method can serve as a platform for reaching common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)

  5. Fractal Image Informatics: from SEM to DEM

    NASA Astrophysics Data System (ADS)

    Oleschko, K.; Parrot, J.-F.; Korvin, G.; Esteves, M.; Vauclin, M.; Torres-Argüelles, V.; Salado, C. Gaona; Cherkasov, S.

    2008-05-01

    In this paper, we introduce a new branch of Fractal Geometry: Fractal Image Informatics, devoted to the systematic and standardized fractal analysis of images of natural systems. The methods of this discipline are based on the properties of multiscale images of self-affine fractal surfaces. As proved in the paper, the image inherits the scaling and lacunarity of the surface and of its reflectance distribution [Korvin, 2005]. We claim that the fractal analysis of these images must be done without any smoothing, thresholding or binarization. Two new tools of Fractal Image Informatics, firmagram analysis (FA) and generalized lacunarity (GL), are presented and discussed in detail. These techniques are applicable to any kind of image or to any observed positive-valued physical field, and can be used to correlate between images. It will be shown, by a modified Grassberger-Hentschel-Procaccia approach [Phys. Lett. 97A, 227 (1983); Physica 8D, 435 (1983)], that GL obeys the same scaling law as the Allain-Cloitre lacunarity [Phys. Rev. A 44, 3552 (1991)] but is free of the problems associated with gliding boxes. Several applications are shown from Soil Physics, Surface Science, and other fields.
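
    For contrast with the generalized lacunarity proposed above, the classic gliding-box lacunarity Lambda(r) = <M^2>/<M>^2 can be sketched as below for a positive-valued 2-D field. This illustrates the conventional approach whose gliding-box problems the paper aims to avoid; it is not the FA/GL method itself, and the function name is an assumption.

```python
import numpy as np

def gliding_box_lacunarity(image, box_size):
    """Classic gliding-box lacunarity Lambda(r) = <M^2> / <M>^2 of a positive-valued
    2-D field, where M is the box mass over all box_size x box_size positions."""
    img = np.asarray(image, dtype=float)
    # summed-area table for fast box sums
    c = np.cumsum(np.cumsum(np.pad(img, ((1, 0), (1, 0))), axis=0), axis=1)
    r = box_size
    masses = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
    return float(np.mean(masses ** 2) / np.mean(masses) ** 2)
```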

  6. An IMU Evaluation Method Using a Signal Grafting Scheme.

    PubMed

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-06-10

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%-26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method.

  7. An IMU Evaluation Method Using a Signal Grafting Scheme

    PubMed Central

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-01-01

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%–26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method. PMID:27294932

  8. Interpolation and elevation errors: the impact of the DEM resolution

    NASA Astrophysics Data System (ADS)

    Achilleos, Georgios A.

    2015-06-01

    Digital Elevation Models (DEMs) are developing and evolving at a fast pace, given the progress of computer science and technology. This development, though, is not accompanied by an advancement of knowledge about the quality of the models and their inherent inaccuracy. On most occasions the user is not aware of this quality, and thus is not aware of the corresponding uncertainty of the product. Extensive research has been conducted, and still is, in this direction. The research presented in this paper analyzes the behavior of the elevation errors recorded in a DEM. This behavior is caused by altering the DEM resolution when the interpolation algorithm is applied. Contour lines from a topographical map are used as input data. Elevation errors are calculated at the positions of the initial input data and wherever the elevation is known. The recorded elevation errors are then analyzed in order to reach conclusions about their distribution and the way in which they occur.
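
    A generic sketch of the experimental design described above: interpolate scattered elevation points onto grids of different cell sizes, sample the resulting DEM back at the known points, and record the RMS elevation error per resolution. The interpolation scheme (linear griddata) and the function name are assumptions for illustration, not the author's exact algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

def rms_error_by_resolution(points, z, cell_sizes):
    """Interpolate scattered elevations onto grids of different cell sizes, sample
    each DEM back at the input points, and return the RMS elevation error per resolution.
    points: (n, 2) x/y coordinates; z: (n,) elevations; cell_sizes: iterable of cell sizes."""
    rms = {}
    for cell in cell_sizes:
        xi = np.arange(points[:, 0].min(), points[:, 0].max(), cell)
        yi = np.arange(points[:, 1].min(), points[:, 1].max(), cell)
        gx, gy = np.meshgrid(xi, yi)
        dem = griddata(points, z, (gx, gy), method="linear")
        z_back = griddata((gx.ravel(), gy.ravel()), dem.ravel(), points, method="linear")
        rms[cell] = float(np.sqrt(np.nanmean((z_back - z) ** 2)))
    return rms
```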

  9. Evaluation methods of a middleware for networked surgical simulations.

    PubMed

    Cai, Qingbo; Liberatore, Vincenzo; Cavuşoğlu, M Cenk; Yoo, Youngjin

    2006-01-01

    Distributed surgical virtual environments are desirable because they substantially extend the accessibility of computational resources through network communication. However, network conditions critically affect the quality of a networked surgical simulation, in terms of bandwidth limits, delays, packet losses, etc. A solution to this problem is to introduce a middleware between the simulation application and the network, so that it can take actions to enhance the user-perceived simulation performance. To comprehensively assess the effectiveness of such a middleware, we propose several evaluation methods in this paper, i.e., semi-automatic evaluation, middleware overhead measurement, and usability testing.

  10. Comparative evaluation of patellar height methods in the Brazilian population☆

    PubMed Central

    Behrendt, Christian; Zaluski, Alexandre; e Albuquerque, Rodrigo Pires; de Sousa, Eduardo Branco; Cavanellas, Naasson

    2015-01-01

    Objective The methods most used for patellar height measurement were compared with the plateau–patella angle method. Methods A cross-sectional study was conducted, in which lateral-view radiographs of the knee were evaluated using the three methods already established in the literature: Insall–Salvati (IS), Blackburne–Peel (BP) and Caton–Deschamps (CD). These were compared with the plateau–patella angle method. One hundred and ninety-six randomly selected patients were included in the sample. Results The data were initially evaluated using the chi-square test. This analysis was deemed to be positive with p < 0.0001. We compared the traditional methods with the plateau–patella angle measurement, using Fisher's exact test. In comparing the IS index with the plateau–patella angle, we did not find any statistically significant differences in relation to the proportion of altered cases between the two groups. The traditional methods were compared with the plateau–patella angle with regard to the proportions of cases of high and low patella, by means of Fisher's exact test. This analysis showed that the plateau–patella angle identified fewer cases of high patella than did the IS, BP and CD methods, but more cases of low patella. In comparing pairs, we found that the IS and CD indices were capable of identifying more cases of high patella than was the plateau–patella angle. In relation to the cases of low patella, the plateau–patella angle was capable of identifying more cases than were the other three methods. Conclusions The plateau–patella angle found more patients with low patella than did the classical methods and showed results that diverged from those of the other indices studied. PMID:26962492

  11. Retractions of the gingival margins evaluated by holographic methods

    NASA Astrophysics Data System (ADS)

    Sinescu, Cosmin; Negrutiu, Meda Lavinia; Manole, Marius; de Sabata, Aldo; Rusu, Laura-Cristina; Stratul, Stefan; Dudea, Diana; Dughir, Ciprian; Duma, Virgil-Florin

    2015-05-01

    The periodontal disease is one of the most common pathological states of the teeth and gums system. The issue is that its evaluation is a subjective one, i.e. it is based on the skills of the dental medical doctor. As for any clinical condition, a quantitative evaluation and monitoring in time of the retraction of the gingival margins is desired. This phenomenon was evaluated in this study with a holographic method by using a He-Ne laser with a power of 13 mW. The holographic system we have utilized - adapted for dentistry applications - is described. Several patients were considered in a comparative study of their state of health - regarding their oral cavity. The impressions of the maxillary dental arch were taken from a patient during his/her first visit and after a period of six months. The hologram of the first model was superposed on the model cast after the second visit. The retractions of the gingival margins could be thus evaluated three-dimensionally in every point of interest. An evaluation of the retraction has thus been made. Conclusions can thus be drawn for the clinical evaluation of the health of the teeth and gums system of each patient.

  12. Field Evaluation of Personal Sampling Methods for Multiple Bioaerosols

    PubMed Central

    Wang, Chi-Hsun; Chen, Bean T.; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419

  13. Initial Results of an MDO Method Evaluation Study

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the SIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date, although the results are not conclusive because of the small initial test set. MDO formulations themselves can be studied in terms of equivalence to other formulations, optimality conditions, and the sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  14. Description and evaluation of a surface runoff susceptibility mapping method

    NASA Astrophysics Data System (ADS)

    Lagadec, Lilly-Rose; Patrice, Pierre; Braud, Isabelle; Chazelle, Blandine; Moulin, Loïc; Dehotin, Judicaël; Hauchard, Emmanuel; Breil, Pascal

    2016-10-01

    Surface runoff is the hydrological process at the origin of phenomena such as soil erosion, floods out of rivers, mudflows, debris flows and can generate major damage. This paper presents a method to create maps of surface runoff susceptibility. The method, called IRIP (Indicator of Intense Pluvial Runoff, French acronym), uses a combination of landscape factors to create three maps representing the susceptibility (1) to generate, (2) to transfer, and (3) to accumulate surface runoff. The method input data are the topography, the land use and the soil type. The method aims to be simple to implement and robust for any type of study area, with no requirement for calibration or specific input format. In a second part, the paper focuses on the evaluation of the surface runoff susceptibility maps. The method is applied in the Lézarde catchment (210 km2, northern France) and the susceptibility maps are evaluated by comparison with two risk regulatory zonings of surface runoff and soil erosion, and two databases of surface runoff impacts on roads and railways. Comparison tests are performed using a standard verification method for dichotomous forecasting along with five verification indicators: accuracy, bias, success ratio, probability of detection, and false alarm ratio. The evaluation shows that the susceptibility map of surface runoff accumulation is able to identify the concentrated surface runoff flows and that the susceptibility map of transfer is able to identify areas that are susceptible to soil erosion. Concerning the ability of the IRIP method to detect sections of the transportation network susceptible to be impacted by surface runoff, the evaluation tests show promising probabilities of detection (73-90%) but also high false alarm ratios (77-92%). However, a qualitative analysis of the local configuration of the infrastructure shows that taking into account the transportation network vulnerability can explain numerous false alarms. This paper shows that the
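
    The five verification indicators used above follow directly from the dichotomous (yes/no) contingency table; the sketch below computes them from the four counts. The formulas are the standard ones for dichotomous forecast verification; the variable names and example counts are illustrative, not the study's data.

```python
def dichotomous_scores(hits, false_alarms, misses, correct_negatives):
    """Standard yes/no verification indicators from contingency-table counts,
    as used to compare a susceptibility map against observed impacts."""
    total = hits + false_alarms + misses + correct_negatives
    return {
        "accuracy": (hits + correct_negatives) / total,
        "bias": (hits + false_alarms) / (hits + misses),
        "success_ratio": hits / (hits + false_alarms),
        "probability_of_detection": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
    }

# e.g. dichotomous_scores(hits=80, false_alarms=320, misses=20, correct_negatives=580)
```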

  15. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
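
    Of the methods compared, Beale's ratio estimator has a compact closed form and can be sketched as below (one common formulation of the bias-corrected ratio estimator, with the finite-population correction omitted); WRTDS and the regression-based methods are not reproduced here, and the function name and unit-conversion factor are assumptions.

```python
import numpy as np

def beale_mean_daily_load(sample_flow, sample_conc, all_flow, unit_factor=1.0):
    """Beale's bias-corrected ratio estimator of the mean daily load.
    sample_flow, sample_conc: paired observations on sampled days (numpy arrays);
    all_flow: daily flow for the full estimation period (the auxiliary variable);
    unit_factor: conversion from flow x concentration to load units (assumed)."""
    load = sample_flow * sample_conc * unit_factor
    n = len(load)
    ml, mq = load.mean(), sample_flow.mean()
    s_lq = np.cov(load, sample_flow, ddof=1)[0, 1]
    s_qq = np.var(sample_flow, ddof=1)
    correction = (1.0 + s_lq / (n * ml * mq)) / (1.0 + s_qq / (n * mq ** 2))
    return all_flow.mean() * (ml / mq) * correction

# total load over the period = beale_mean_daily_load(...) * number_of_days
```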

  16. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task because of the complexity of the dependences among aspects and criteria and the linguistic vagueness of some qualitative information combined with quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependent aspects and criteria into an intelligible structural model used in IPA. For the empirical case study, four dependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.

  17. [A modular method for automated evaluation of gait analysis data].

    PubMed

    Loose, T; Malberg, H; Mikut, R; Dieterle, J; Schablowski, M; Wolf, S; Abel, R; Döderlein, L; Rupp, R

    2002-01-01

    A modular methodology for automated gait data evaluation: The aim of Instrumented Gait Analysis is to measure data such as joint kinematics or kinetics during gait in a quantitative way. The data evaluation for clinical purposes is often performed by experienced physicians (diagnosis of specific motion dysfunction, planning and validation of therapy). Due to subjective evaluation and complexity of the pathologies, there exists no objective, standardized data analysis method for these tasks. This article covers the development of a modular, computer-based methodology to quantify the degree of pathological gait in comparison to normal behavior, as well as to automatically search for interpretable gait abnormalities and to visualize the results. The outcomes are demonstrated with two different patient groups.

  18. Tropical-Forest Biomass Dynamics from X-Band, TanDEM-X Data

    NASA Astrophysics Data System (ADS)

    Treuhaft, R. N.; Neumann, M.; Keller, M. M.; Goncalves, F. G.; Santos, J. R.

    2015-12-01

    The measurement of the change in above ground biomass (AGB) is key for understanding the carbon sink/source nature of tropical forests. Interferometric X-band radar from the only orbiting interferometer, TanDEM-X, shows sensitivity to standing biomass up to at least 300 Mg/ha. This sensitivity may be due in part to the propagation of the shorter X-band wavelength (0.031 m) through holes in the canopy. This talk focuses on estimating the change in AGB over time. Interferometric baselines from TanDEM-X have been obtained in Tapajós National Forest in the Brazilian Amazon over a 4-year period, from 2011 to 2015. Lidar measurements were also acquired during this period. Field measurements of height, height-to-base-of-crown, species, diameter, and position were acquired in 2010, 2013, and 2015. We show interferometric phase height changes, and suggest how these phase height changes are related to biomass change. First we show height changes between baselines separated by one month, over which we expect no change in AGB, to evaluate precision. We find an RMS of <2 m for ~85 stands in the phase height over one month, corresponding to about a 10% measurement of change, which suggests we can detect about a 17 Mg/ha change in AGB at Tapajos. In contrast, interferometric height changes over the period 2011 to 2014 have larger RMS scatters of > 3 m, due to actual change. Most stands show changes in interferometric phase height consistent with regrowth (~10 Mg/ha/yr), and several stands show abrupt, large changes in phase height (>10 m) due to logging and natural disturbance. At the end of 2015, we will acquire more TanDEM-X data over Tapajos, including an area subjected to selective logging. We are doing "before" (March 2015) and "after" (October 2015) fieldwork to be able to understand the signature of change due to selective logging in TanDEM-X interferometric data.

  19. Fast perceptual method for evaluating color scales for Internet visualization

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Rogowitz, Bernice E.

    2001-06-01

    We have developed a fast perceptual method for evaluating color scales for data visualization that uses a monochrome photographic image of a human face as a test pattern. We conducted an experiment in which we applied various color scales to a photographic image of a face and asked observers to rate the naturalness of each image. We found a very strong correlation between the perceived naturalness of the images and the luminance monotonicity of the color scales. Since color scales with monotonic luminance profiles are widely recommended and used for visualizing continuous scalar data, we conclude that using a human face as a test pattern provides a quick, simple method for evaluating such color scales in Internet environments.
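
    The luminance-monotonicity property discussed above can be checked directly; the sketch below computes an approximate relative luminance (sRGB linearization followed by Rec. 709 weights, an assumed model since the abstract does not specify one) and tests whether it increases along the colour scale.

      import numpy as np

      def relative_luminance(rgb):
          """Approximate relative luminance of sRGB colours given in [0, 1]."""
          rgb = np.asarray(rgb, dtype=float)
          linear = np.where(rgb <= 0.04045, rgb / 12.92,
                            ((rgb + 0.055) / 1.055) ** 2.4)
          return linear @ np.array([0.2126, 0.7152, 0.0722])

      def luminance_is_monotonic(color_scale):
          """True if luminance never decreases along the colour scale."""
          return bool(np.all(np.diff(relative_luminance(color_scale)) >= 0))

      # A grey ramp has monotonic luminance; many rainbow-like scales do not.
      gray_ramp = np.repeat(np.linspace(0, 1, 16)[:, None], 3, axis=1)
      print(luminance_is_monotonic(gray_ramp))   # True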

  20. Evaluating variable selection methods for diagnosis of myocardial infarction.

    PubMed Central

    Dreiseitl, S.; Ohno-Machado, L.; Vinterbo, S.

    1999-01-01

    This paper evaluates the variable selection performed by several machine-learning techniques on a myocardial infarction data set. The focus of this work is to determine which of 43 input variables are considered relevant for prediction of myocardial infarction. The algorithms investigated were logistic regression (with stepwise, forward, and backward selection), backpropagation for multilayer perceptrons (input relevance determination), Bayesian neural networks (automatic relevance determination), and rough sets. An independent method (self-organizing maps) was then used to evaluate and visualize the different subsets of predictor variables. Results show good agreement on some predictors, but also variability among different methods; only one variable was selected by all models. PMID:10566358

  1. Evaluation of Polyesterimide Nanocomposites Using Methods of Thermal Analysis

    NASA Astrophysics Data System (ADS)

    Gornicka, B.; Gorecki, L.; Gryzlo, K.; Kaczmarek, D.; Wojcieszak, D.

    2016-02-01

    Polyesterimide resin applied for winding impregnation has been modified by incorporating hydrophilic and hydrophobic nanosilica, montmorillonite and aluminium oxide. For assessment of the resins in the liquid and cured states, thermoanalytical TG/DSC methods were used. For pure and nanofilled resins, the results of investigations of AFM topography, bond strength, dielectric strength and partial discharge resistance are also presented. It was found that the dielectric and mechanical properties of the polyesterimide resin containing hydrophilic silica as well as aluminium oxide were much improved as compared to the pure resin. Based on our investigations we have found that the methods of thermal analysis may be very useful for evaluation of nanocomposites: DSC/TGA study of resins in the liquid state under dynamic conditions can be applied to pre-select nanocomposites; isothermal TG curves of cured resins can be utilized for thermal stability evaluation; in turn, TG study after thermal ageing of cured resins could confirm the barrier properties of nanocomposites.

  2. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  3. A Method for Evaluating Volt-VAR Optimization Field Demonstrations

    SciTech Connect

    Schneider, Kevin P.; Weaver, T. F.

    2014-08-31

    In a regulated business environment a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. For traditional technologies there are well established procedures for determining what benefits will be derived from the deployment. But for many emerging technologies, procedures for determining benefits are less clear and in some cases completely absent. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about potential benefits and how they are achieved. This paper presents a method for the evaluation and quantification of benefits for field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.

  4. Performance evaluation of electrorheological fluid using acoustic method

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyu; Gong, Xun; Qin, Chuanxi; Zhang, De; Zhang, Dong; Wu, Haodong

    2016-12-01

    This paper presents a simple method to evaluate the performance of electrorheological fluid (ERF) with acoustic parameters. The key parameters, such as the storage modulus and the loss modulus, are obtained by ultrasonic shear wave reflectometry. By comparing and analyzing the ratio of these two shear wave parameters, the performance of an ERF can be evaluated easily and quickly. Four kinds of electrode surface morphology are chosen to vary the performance of the ERF in this paper. The experimental results show that the optimum ERF system, for example the electrode surface morphology and DC electric field, is easily found with this method. Furthermore, the influence of leakage current is also discussed.

  5. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization problem into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  6. Laboratory-scale evaluations of alternative plutonium precipitation methods

    SciTech Connect

    Martella, L.L.; Saba, M.T.; Campbell, G.K.

    1984-02-08

    Plutonium(III), (IV), and (VI) carbonate; plutonium(III) fluoride; plutonium(III) and (IV) oxalate; and plutonium(IV) and (VI) hydroxide precipitation methods were evaluated for conversion of plutonium nitrate anion-exchange eluate to a solid, and compared with the current plutonium peroxide precipitation method used at Rocky Flats. Plutonium(III) and (IV) oxalate, plutonium(III) fluoride, and plutonium(IV) hydroxide precipitations were the most effective of the alternative conversion methods tested because of the larger particle-size formation, faster filtration rates, and the low plutonium loss to the filtrate. These were found to be as efficient as, and in some cases more efficient than, the peroxide method. 18 references, 14 figures, 3 tables.

  7. Comparing three methods for evaluating impact wrench vibration emissions.

    PubMed

    McDowell, Thomas W; Marcotte, Pierre; Warren, Cristopher; Welcome, Daniel E; Dong, Ren G

    2009-08-01

    To provide a means for comparing impact wrenches and similar tools, the international standard ISO 8662-7 prescribes a method for measuring the vibrations at the handles of tools during their operations against a cotton-phenolic braking device. To improve the standard, alternative loading mechanisms have been proposed; one device comprises aluminum blocks with friction brake linings, while another features plate-mounted bolts to provide the tool load. The objective of this study was to evaluate these three loading methods so that tool evaluators can select appropriate loading devices in order to obtain results that can be applied to their specific workplace operations. Six experienced tool operators used five tool models to evaluate the loading mechanisms. The results of this study indicate that different loads can yield different tool comparison results. However, any of the three devices appears to be adequate for initial tool screenings. On the other hand, vibration emissions measured in the laboratory are unlikely to be fully representative of those in the workplace. Therefore, for final tool selections and for reliably assessing workplace vibration exposures, vibration measurements should be collected under actual working conditions. Evaluators need to use appropriate numbers of tools and tool operators in their assessments; recommendations are provided.

  8. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  9. Semi-automatic method for routine evaluation of fibrinolytic components

    PubMed Central

    Collen, D.; Tytgat, G.; Verstraete, M.

    1968-01-01

    A semi-automatic method for the routine evaluation of fibrinolytic activity is described. The principle is based upon graphic recording by a multichannel voltmeter of tension drops over a potentiometer, caused by variations in the influence of light upon a light-dependent resistance, resulting from modifications in the composition of the fibrin fibres by lysis. The method is applied to the assessment of certain fibrinolytic factors with widespread fibrinolytic endpoints, and the results are compared with simultaneously obtained visual data on the plasmin assay, the plasminogen assay, and on the euglobulin clot lysis time. PMID:4237924

  10. Evaluation of Cleanliness Test Methods for Spacecraft PCB Assemblies

    NASA Astrophysics Data System (ADS)

    Tegehall, P.-E.; Dunn, B. D.

    2006-10-01

    Ionic contamination on printed-circuit-board assemblies may cause current leakage and short-circuits. The present cleanliness requirement in ECSS-Q-70-08, "The manual soldering of high-reliability electrical connections", is that the ionic contamination shall be less than 1.56 µg/cm2 NaCl equivalents. The relevance of the method used for measurement of the ionic contamination level, resistivity of solvent extract, has been questioned. Alternative methods are ion chromatography and measurement of surface insulation resistance, but these methods also have their drawbacks. These methods are first described and their advantages and drawbacks are discussed. This is followed by an experimental evaluation of the three methods. This was done by soldering test vehicles at four manufacturers of space electronics using their ordinary processes for soldering and cleaning printed board assemblies. The experimental evaluation showed that the ionic contamination added by the four assemblers was very small and well below the acceptance criterion in ECSS-Q-70-08. Ion-chromatography analysis showed that most of the ionic contamination on the cleaned assembled boards originated from the hot-oil fusing of the printed circuit boards. Also, the surface insulation resistance was higher on the assembled boards compared to the bare printed circuit boards. Since strongly activated fluxes are normally used when printed circuit boards are hot-oil fused, it is essential that they are thoroughly cleaned in order to achieve low contamination levels on the final printed-board assemblies.

  11. Analysis and Validation of Grid dem Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered one of the most relevant geospatial data sets for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m2) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data set was decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Regarding accuracy, both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.
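
    As a rough illustration of the TLI baseline used for comparison, the sketch below grids scattered elevations with SciPy's Delaunay-based linear interpolator and computes a vertical RMSE against independent checkpoints; the function and variable names are illustrative, and the GMRF model itself is not reproduced here.

      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      def tli_grid(points_xy, elevations, grid_x, grid_y):
          """Triangulation with Linear Interpolation (TLI) onto a regular grid."""
          interp = LinearNDInterpolator(points_xy, elevations)
          gx, gy = np.meshgrid(grid_x, grid_y)
          return interp(gx, gy)                  # NaN outside the convex hull

      def vertical_rmse(dem, dem_x, dem_y, check_xyz):
          """RMSE between a gridded DEM and checkpoints, using the nearest grid node."""
          cx = np.clip(np.searchsorted(dem_x, check_xyz[:, 0]), 0, len(dem_x) - 1)
          cy = np.clip(np.searchsorted(dem_y, check_xyz[:, 1]), 0, len(dem_y) - 1)
          diff = dem[cy, cx] - check_xyz[:, 2]
          return float(np.sqrt(np.nanmean(diff ** 2)))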

  12. Best estimate method versus evaluation method: a comparison of two techniques in evaluating seismic analysis and design. Technical report

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-07-01

    The concept of how two techniques, the Best Estimate Method and the Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC)--seismic input, soil-structure interaction, major structural response, and subsystem response--are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations to the model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  13. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton acidity and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R2), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that E1, E2, E3, and E4 were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L, with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate, with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration.
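
    The error statistics named above are standard; a minimal sketch of the Nash-Sutcliffe efficiency and the RMSE-to-standard-deviation ratio follows, with hypothetical acidity values used only to show the call.

      import numpy as np

      def nse(observed, predicted):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
          observed = np.asarray(observed, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          return 1.0 - np.sum((observed - predicted) ** 2) \
                 / np.sum((observed - observed.mean()) ** 2)

      def rsr(observed, predicted):
          """Ratio of the RMSE to the standard deviation of the observations."""
          observed = np.asarray(observed, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          rmse = np.sqrt(np.mean((observed - predicted) ** 2))
          return rmse / observed.std()

      measured  = np.array([120.0, 85.0, 40.0, 210.0])   # hypothetical acidities, mg/L CaCO3
      predicted = np.array([110.0, 90.0, 55.0, 190.0])
      print(round(nse(measured, predicted), 3), round(rsr(measured, predicted), 3))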

  14. Evaluation of five decontamination methods for filtering facepiece respirators.

    PubMed

    Viscusi, Dennis J; Bergman, Michael S; Eimer, Benjamin C; Shaffer, Ronald E

    2009-11-01

    Concerns have been raised regarding the availability of National Institute for Occupational Safety and Health (NIOSH)-certified N95 filtering facepiece respirators (FFRs) during an influenza pandemic. One possible strategy to mitigate a respirator shortage is to reuse FFRs following a biological decontamination process to render infectious material on the FFR inactive. However, little data exist on the effects of decontamination methods on respirator integrity and performance. This study evaluated five decontamination methods [ultraviolet germicidal irradiation (UVGI), ethylene oxide, vaporized hydrogen peroxide (VHP), microwave oven irradiation, and bleach] using nine models of NIOSH-certified respirators (three models each of N95 FFRs, surgical N95 respirators, and P100 FFRs) to determine which methods should be considered for future research studies. Following treatment by each decontamination method, the FFRs were evaluated for changes in physical appearance, odor, and laboratory performance (filter aerosol penetration and filter airflow resistance). Additional experiments (dry heat laboratory oven exposures, off-gassing, and FFR hydrophobicity) were subsequently conducted to better understand material properties and possible health risks to the respirator user following decontamination. However, this study did not assess the efficiency of the decontamination methods to inactivate viable microorganisms. Microwave oven irradiation melted samples from two FFR models. The remainder of the FFR samples that had been decontaminated had expected levels of filter aerosol penetration and filter airflow resistance. The scent of bleach remained noticeable following overnight drying and low levels of chlorine gas were found to off-gas from bleach-decontaminated FFRs when rehydrated with deionized water. UVGI, ethylene oxide (EtO), and VHP were found to be the most promising decontamination methods; however, concerns remain about the throughput capabilities for EtO and VHP

  15. Tectonic development of the Northwest Bonaparte Basin, Australia by using Digital Elevation Model (DEM)

    NASA Astrophysics Data System (ADS)

    Wahid, Ali; Salim, Ahmed Mohamed Ahmed; Ragab Gaafar, Gamal; Yusoff, AP Wan Ismail Wan

    2016-02-01

    The Bonaparte Basin, most of which lies offshore, is situated on Australia's NW continental margin and covers an area of approximately 270,000 km2. The basin, comprising a number of Paleozoic and Mesozoic sub-basins and platform areas, is structurally complex. This research undertook geologic and geomorphologic studies using a Digital Elevation Model (DEM) as an alternative approach in morphostructural analysis to unravel the geological complexities. Although DEMs have been in use since the 1990s, they have still not become a common tool in mapping studies. The work comprised a regional structural analysis integrating elevation data, satellite imagery, available open topographic images, and internal geological maps with interpreted seismic data. The structural maps of the study area were geo-referenced and then overlaid onto SRTM data and satellite images for combined interpretation, which facilitated building a digital elevation model of the study area. The methodology adopted is to evaluate and redefine the development of the geodynamic processes involved in the formation of the Bonaparte Basin. The main objective is to establish the geological history by using the digital elevation model. The work will be useful for incorporating the different tectonic events that occurred at different geological times into a digital elevation model. The integrated tectonic analysis of the different digital data sets benefitted substantially from combining them into a common digital database, while the visualization software facilitates the overlay and combined interpretation of different data sets, helping to reveal hidden information not obvious or accessible otherwise for regional analysis.

  16. Improvement of dem Generation from Aster Images Using Satellite Jitter Estimation and Open Source Implementation

    NASA Astrophysics Data System (ADS)

    Girod, L.; Nuth, C.; Kääb, A.

    2015-12-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system embarked on the Terra (EOS AM-1) satellite has been a source of stereoscopic images covering the whole globe at a 15 m resolution and at a consistent quality for over 15 years. The potential of these data in terms of geomorphological analysis and change detection in three dimensions is unrivaled and needs to be exploited. However, the DEMs and ortho-images currently delivered by NASA (ASTER DMO products) are often of insufficient quality for a number of applications, such as mountain glacier mass balance. For this study, the use of Ground Control Points (GCPs) or of other ground truth was rejected due to the global "big data" type of processing that we hope to perform on the ASTER archive. We have therefore developed a tool to compute Rational Polynomial Coefficient (RPC) models from the ASTER metadata and a method improving the quality of the matching by identifying and correcting jitter-induced cross-track parallax errors. Our method outputs more accurate DEMs with fewer unmatched areas and reduced overall noise. The algorithms were implemented in the open source photogrammetric library and software suite MicMac.

  17. Potentials of TanDEM-X Interferometric Data for Global Forest/Non-Forest Classification

    NASA Astrophysics Data System (ADS)

    Martone, Michele; Rizzoli, Paola; Brautigam, Benjamin; Krieger, Gerhard

    2016-08-01

    This paper presents a method to generate forest/non-forest maps from TanDEM-X interferometric SAR data. Among the several contributions which may affect the quality of interferometric products, the coherence loss caused by volume scattering is the one predominantly affected by the presence of vegetation, and it is therefore exploited here as the main indicator for forest classification. Due to the strong dependency of the considered InSAR quantity on the acquisition geometry, namely the incidence angle and the interferometric baseline, a multi-fuzzy clustering classification approach is used. Some examples are provided which show the potential of the proposed method. Further, additional features such as urban settlements, water, and critical areas affected by geometrical distortions (e.g. shadow and layover) need to be extracted, and possible approaches are presented as well. Very promising results are shown, which demonstrate the potential of TanDEM-X bistatic data not only for forest identification but, more generally, for the generation of a global land classification map as a next step.

  18. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  19. [Imputation methods for missing data in educational diagnostic evaluation].

    PubMed

    Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José

    2012-02-01

    In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
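
    Two of the simpler recovery rules compared in the study, item-mean and subject-mean imputation, can be written in a few lines; the sketch below uses NumPy on an illustrative response matrix and is not the authors' implementation.

      import numpy as np

      def impute_item_mean(scores):
          """Replace each missing entry (NaN) with the mean of its item (column)."""
          scores = np.array(scores, dtype=float)
          rows, cols = np.where(np.isnan(scores))
          scores[rows, cols] = np.nanmean(scores, axis=0)[cols]
          return scores

      def impute_subject_mean(scores):
          """Replace each missing entry with the mean of that subject's answered items."""
          scores = np.array(scores, dtype=float)
          rows, cols = np.where(np.isnan(scores))
          scores[rows, cols] = np.nanmean(scores, axis=1)[rows]
          return scores

      answers = [[3, 4, np.nan], [2, np.nan, 5], [4, 4, 4]]
      print(impute_item_mean(answers))
      print(impute_subject_mean(answers))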

  20. Evaluation of Methods to Estimate Understory Fruit Biomass

    PubMed Central

    Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.

    2014-01-01

    Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253

  1. Glacial Surface Topography and its Changes in the Western Qilian Mountains Derived from TanDEM-X Bi-Static InSAR

    NASA Astrophysics Data System (ADS)

    Sun, Yafei; Jiang, Liming; Liu, Lin; Wang, Hansheng; Hsu, Houtse; Shen, Qiang

    2016-08-01

    The high-resolution and high-precision glacier surface topography is one of the most important fundamental data sets for research on the glacial dynamic processes of mountain glaciers. It is noteworthy that the TanDEM-X mission, launched in 2010 by the German Aerospace Center (DLR), opens a new era in single-pass satellite SAR remote sensing [1]. The TanDEM-X (TDX) mission employs a bi-static interferometric configuration of the two identical satellites TerraSAR-X (TSX) and TDX flying in a closely controlled formation, the primary objective of which is to generate a global, highly accurate, and homogeneous DEM meeting the HRTI-3 accuracy standard [1]. In this study, we aim to quantitatively evaluate the potential of the TDX bi-static SAR data for measuring glacier surface topography and elevation changes over mountain regions.

  2. Evaluation of Alternate Stainless Steel Surface Passivation Methods

    SciTech Connect

    Clark, Elliot A.

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of an initially 50% H2-50% D2 gas over time using mass spectrometry. Commercial passivation by electropolishing appears to result in surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service, and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may not result in surfaces that do not catalyze the isotope exchange reaction H2 + D2 → 2HD. These methods should not be considered to replace the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  3. Evaluation of Two Fractal Methods for Magnetogram Image Analysis

    NASA Technical Reports Server (NTRS)

    Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.

    1997-01-01

    Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply this method to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
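
    The differential box-counting variant evaluated in the paper is more involved, but the plain box-counting estimate that this family of methods builds on can be sketched briefly; the example below estimates the dimension of a binary image and is only an illustration, not either of the schemes compared in the study.

      import numpy as np

      def box_counting_dimension(binary_image, box_sizes=(1, 2, 4, 8, 16, 32)):
          """Estimate the box-counting dimension of a 2-D binary array.

          Counts the boxes of each size containing at least one foreground pixel,
          then fits log(count) against log(1/size).
          """
          img = np.asarray(binary_image, dtype=bool)
          counts = []
          for s in box_sizes:
              h = (img.shape[0] // s) * s        # trim so the image tiles evenly
              w = (img.shape[1] // s) * s
              tiled = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(tiled.any(axis=(1, 3)).sum())
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
          return slope

      # A filled square should give a dimension near 2 (edge effects bias it slightly low).
      square = np.zeros((64, 64), dtype=bool)
      square[8:56, 8:56] = True
      print(round(box_counting_dimension(square), 2))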

  4. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.

  5. A practical method to evaluate radiofrequency exposure of mast workers.

    PubMed

    Alanko, Tommi; Hietanen, Maila

    2008-01-01

    Assessment of occupational exposure to radiofrequency (RF) fields in telecommunication transmitter masts is a challenging task. For conventional field strength measurements using manually operated instruments, it is difficult to document the locations of measurements while climbing up a mast. Logging RF dosemeters worn by the workers, on the other hand, do not give any information about the location of the exposure. In this study, a practical method was developed and applied to assess mast workers' exposure to RF fields and the corresponding location. This method uses a logging dosemeter for personal RF exposure evaluation and two logging barometers to determine the corresponding height of the worker's position on the mast. The procedure is not intended to be used for compliance assessments, but to indicate locations where stricter assessments are needed. The applicability of the method is demonstrated by making measurements in a TV and radio transmitting mast.
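
    The height reconstruction from the two logging barometers can be illustrated with the standard international barometric formula; the constants below are the ISA sea-level defaults, which is an assumption since the abstract does not state how the pressure logs were converted to height.

      def barometric_altitude(pressure_hpa, sea_level_hpa=1013.25):
          """Altitude (m) from pressure using the international barometric formula."""
          return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** 0.1903)

      def worker_height_on_mast(worker_pressure_hpa, base_pressure_hpa):
          """Worker height above the mast base from two simultaneous pressure readings."""
          return barometric_altitude(worker_pressure_hpa) - barometric_altitude(base_pressure_hpa)

      print(round(worker_height_on_mast(1000.4, 1012.3), 1))   # roughly 100 m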

  6. Evaluation of Methods to Predict Reactivity of Gold Nanoparticles

    SciTech Connect

    Allison, Thomas C.; Tong, Yu ye J.

    2011-06-20

    Several methods have appeared in the literature for predicting reactivity on metallic surfaces and on the surface of metallic nanoparticles. All of these methods have some relationship to the concept of frontier molecular orbital theory. The d-band theory of Hammer and Nørskov is perhaps the most widely used predictor of reactivity on metallic surfaces, and it has been successfully applied in many cases. Use of the Fukui function and the condensed Fukui function is well established in organic chemistry, but has not been so widely applied in predicting the reactivity of metallic nanoclusters. In this article, we will evaluate the usefulness of the condensed Fukui function in predicting the reactivity of a family of cubo-octahedral gold nanoparticles and make comparison with the d-band method.
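
    For reference, the textbook finite-difference definitions of the condensed Fukui functions mentioned above, written in terms of the electron population p_k on atom k for the N-, (N+1)- and (N-1)-electron systems at fixed geometry (the standard formulation, not the paper's specific computational recipe):

      % Condensed Fukui functions for atom k
      \begin{align}
        f_k^{+} &= p_k(N+1) - p_k(N)   && \text{(nucleophilic attack)} \\
        f_k^{-} &= p_k(N)   - p_k(N-1) && \text{(electrophilic attack)} \\
        f_k^{0} &= \tfrac{1}{2}\,\bigl[p_k(N+1) - p_k(N-1)\bigr] && \text{(radical attack)}
      \end{align}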

  7. An in vitro method for evaluating vascular endothelial ADPase activity.

    PubMed

    Caprino, L; Togna, A R; Stella, C; Togna, G

    1996-06-01

    Some xenobiotics, known to promote the development of thrombotic phenomena, affect vascular endothelial ADPase, a regulatory enzyme that inactivates vaso- and platelet-active adenine nucleotides. The proposed new experimental approach represents an improved method for evaluating vascular endothelial ADPase activity, which is assessed by measuring, at pre-established times, the degradation rate of exogenous ADP incubated with bovine aortic patches. ADP was assayed using a spectrophotometric enzymatic method. Statistical analyses showed that the method is capable of highlighting the linearity of the ADPase activity time-course, thus indicating that the slopes of the ADP time-degradation curves are a valid index of this endothelial ectoenzyme activity. Results obtained with ADPase-inhibiting or -stimulating agents confirm that this in vitro method is an efficient tool for estimating the ability of xenobiotics or drugs to modify the nonthrombogenic properties of vascular endothelium.

  8. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels were defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation is to develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060

  9. Systematic review of methods for evaluating healthcare research economic impact

    PubMed Central

    2010-01-01

    Background The economic benefits of healthcare research require study so that appropriate resources can be allocated to this research, particularly in developing countries. As a first step, we performed a systematic review to identify the methods used to assess the economic impact of healthcare research, and the outcomes. Method An electronic search was conducted in relevant databases using a combination of specific keywords. In addition, 21 relevant Web sites were identified. Results The initial search yielded 8,416 articles. After studying titles, abstracts, and full texts, 18 articles were included in the analysis. Eleven other reports were found on Web sites. We found that the outcomes assessed as healthcare research payback included direct cost-savings, cost reductions in healthcare delivery systems, benefits from commercial advancement, and outcomes associated with improved health status. Two methods were used to study healthcare research payback: macro-economic studies, which examine the relationship between research studies and economic outcome at the aggregated level, and case studies, which examine specific research projects to assess economic impact. Conclusions Our study shows that different methods and outcomes can be used to assess the economic impacts of healthcare research. There is no unique methodological approach for the economic evaluation of such research. In our systematic search we found no research that had evaluated the economic return of research in low and middle income countries. We therefore recommend a consensus on practical guidelines at international level on the basis of more comprehensive methodologies (such as Canadian Academic of Health Science and payback frameworks) in order to build capacity, arrange for necessary informative infrastructures and promote necessary skills for economic evaluation studies. PMID:20196839

  10. Methods to monitor and evaluate household waste prevention.

    PubMed

    Sharp, Veronica; Giorgi, Sara; Wilson, David C

    2010-03-01

    This paper presents one strand of the findings from a comprehensive synthesis review of the policy-relevant evidence on household waste prevention. The focus herein is on how to measure waste prevention: it is always difficult to measure what is not there. Yet reliable and robust monitoring and evaluation of household waste prevention interventions is essential, to enable policy makers, local authorities and practitioners to: (a) collect robust and high quality data; (b) ensure robust decisions are made about where to prioritize resources; and (c) ensure that waste prevention initiatives are being effective and delivering behaviour change. The evidence reveals a range of methods for monitoring and evaluation, including self-weighing; pre- and post-intervention surveys, focusing on attitudes and behaviours and/or on participation rates; tracking waste arisings via collection data and/or compositional analysis; and estimation/modelling. There appears to be an emerging consensus that no single approach is sufficient on its own, rather a 'hybrid' method using a suite of monitoring approaches - usually including surveys, waste tonnage data and monitoring of campaigns - is recommended. The evidence concurs that there is no benefit in trying to further collate evidence from past waste prevention projects, other than to establish, in a few selected cases, if waste prevention behaviour has been sustained beyond cessation of the active intervention campaign. A more promising way forward is to ensure that new intervention campaigns are properly evaluated and that the evidence is captured and collated into a common resource.

  11. A method to evaluate hydraulic fracture using proppant detection.

    PubMed

    Liu, Juntao; Zhang, Feng; Gardner, Robin P; Hou, Guojing; Zhang, Quanying; Li, Hu

    2015-11-01

    Accurate determination of the proppant placement and propped fracture height is important for evaluating and optimizing stimulation strategies. A technology using non-radioactive proppant and a pulsed neutron gamma energy spectra logging tool to determine the placement and height of propped fractures is proposed. Gd2O3 was incorporated into ceramic proppant and a Monte Carlo method was utilized to build the logging tools and formation models. Characteristic responses of the recorded information of different logging tools to fracture widths, proppant concentrations and influencing factors were studied. The results show that Gd capture gamma rays can be used to evaluate propped fractures, with higher sensitivity to changes in fracture width and traceable proppant content than the existing non-radioactive proppant evaluation techniques, and only an after-fracture measurement is needed for the new method. The changes in gas saturation and borehole size have a great impact on determining propped fractures when compensated neutron and pulsed neutron capture tools are used. A field example is presented to validate the application of the new technique.

  12. Performance evaluation of fault detection methods for wastewater treatment processes.

    PubMed

    Corominas, Lluís; Villez, Kris; Aguado, Daniel; Rieger, Leiv; Rosén, Christian; Vanrolleghem, Peter A

    2011-02-01

    Several methods to detect faults have been developed in various fields, mainly in chemical and process engineering. However, minimal practical guidelines exist for their selection and application. This work presents an index that allows for evaluating monitoring and diagnosis performance of fault detection methods, which takes into account several characteristics, such as false alarms, false acceptance, and undesirable switching from correct detection to non-detection during a fault event. The usefulness of the index to process engineering is demonstrated first by application to a simple example. Then, it is used to compare five univariate fault detection methods (Shewhart, EWMA, and residuals of EWMA) applied to the simulated results of the Benchmark Simulation Model No. 1 long-term (BSM1_LT). The BSM1_LT, provided by the IWA Task Group on Benchmarking of Control Strategies, is a simulation platform that allows for creating sensor and actuator faults and process disturbances in a wastewater treatment plant. The results from the method comparison using BSM1_LT show better performance to detect a sensor measurement shift for adaptive methods (residuals of EWMA) and when monitoring the actuator signals in a control loop (e.g., airflow). Overall, the proposed index is able to screen fault detection methods.
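
    One of the univariate schemes compared above, the EWMA chart, can be sketched in a few lines; the smoothing constant and limit width below are common textbook choices, not the settings used in the benchmark study.

      import numpy as np

      def ewma_alarms(x, mean, sigma, lam=0.2, limit=3.0):
          """Flag samples where the EWMA statistic leaves its control limits.

          x     : 1-D array of sensor readings
          mean  : in-control mean of the monitored signal
          sigma : in-control standard deviation
          lam   : EWMA smoothing constant (0 < lam <= 1)
          limit : width of the control limits in sigma units
          """
          x = np.asarray(x, dtype=float)
          z = np.empty_like(x)
          alarms = np.zeros(x.shape, dtype=bool)
          prev = mean
          for t, value in enumerate(x):
              prev = lam * value + (1.0 - lam) * prev          # EWMA recursion
              z[t] = prev
              half_width = limit * sigma * np.sqrt(
                  lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (t + 1))))
              alarms[t] = abs(prev - mean) > half_width
          return z, alarms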

  13. Evaluation of Perrhenate Spectrophotometric Methods in Bicarbonate and Nitrate Media.

    PubMed

    Lenell, Brian A; Arai, Yuji

    2016-04-01

    2-Pyridyl thiourea- and methyl-2-pyridyl ketoxime-based perrhenate, Re(VII), UV-vis spectrophotometric methods were evaluated in nitrate and bicarbonate solutions ranging from 0.001 M to 0.5 M. Standard curves at [Re]=2.5-50 mg L(-1) for the Re(IV)-thiourea and the Re ketoxime complexes were constructed at 405 nm and 490 nm, respectively. Detection limits for the N-(2-pyridyl) thiourea and methyl-2-pyridyl ketoxime methods in ultrapure water are 3.06 mg/L and 4.03 mg/L, respectively. Influences of NaHCO3 and NaNO3 concentration on absorbance spectra, absorptivity, and linearity were documented. For both methods, samples in ultrapure water and NaHCO3 have an R(2) value > 0.99, indicating strong linear relationships. Statistical analysis supports that NaHCO3 does not affect linearity between standards for either method. NaNO3 causes major interference with the ketoxime method above 0.001 M NaNO3. The data provide information for practical use of Re spectrophotometric methods in environmental media that are high in bicarbonate and nitrate.

  14. A simple method for evaluating data from an interlaboratory study.

    PubMed

    Horwitz, W; Britton, P; Chirtel, S J

    1998-01-01

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviation calculated from the Horwitz formula]). The chemical analytical method is deemed acceptable per se if HORRAT is approximately 1.0 (+/- 0.5). If HORRAT is approximately 2.0 or greater, the most extreme values are removed successively until an "acceptable" ratio is obtained. The laboratories responsible for the extreme values that are removed should examine their technique and procedures. If approximately 15% or more of the values have to be removed, the instructions and the methods should be examined. This suggested computation procedure is simple and does not require statistical outlier tables. Proposed action limits may be adjusted according to experience. Data supporting U.S. Environmental Protection Agency method 245.1 for mercury in waters (manual cold-vapor atomic absorption spectrometry), supplemented by subsequent laboratory-performance data, were reexamined in this manner. Method-performance parameters (means and among-laboratories relative standard deviations) were comparable with results from the original statistical analysis that used a robust biweight procedure for outlier removal. The precision of the current controlled performance is better by a factor of 4 than that of estimates resulting from the original method-performance study, at the expense of rejecting more experimental values as outliers.
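
    The computation described above follows directly from the two quoted formulas; in the sketch below the concentration must be entered as a dimensionless mass fraction, and the example numbers are hypothetical.

      def horwitz_prsd(mass_fraction):
          """Predicted among-laboratories RSD (%) from the Horwitz formula.

          mass_fraction: analyte concentration as a dimensionless mass fraction,
          e.g. 1 mg/kg -> 1e-6.
          """
          return 2.0 * mass_fraction ** -0.1505

      def horrat(found_rsd_percent, mass_fraction):
          """HORRAT = experimentally found among-laboratories RSD / predicted RSD."""
          return found_rsd_percent / horwitz_prsd(mass_fraction)

      # Hypothetical example: mercury at ~5e-10 mass fraction with a found RSD of 35%.
      print(round(horwitz_prsd(5e-10), 1), round(horrat(35.0, 5e-10), 2))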

  15. Assay of cerebrospinal fluid protein: a rate biuret method evaluated.

    PubMed

    Finley, P R; Williams, R J

    1983-01-01

    We evaluated a rate colorimetric method (Beckman) for measuring total protein in cerebrospinal fluid. The automated instrument we used was Beckman's ASTRA. A 100-microL sample of spinal fluid is introduced into the biuret reagent in the reaction cell and the increase in absorbance at 545 nm is monitored for 20.5 s. Solid-state circuits determine the rate of alkaline biuret-protein chelate formation, which is directly proportional to the total protein concentration in the sample. The linear range of measurement is 120 to 7500 mg/L. Day-to-day precision (CV) over the range of 150 to 1200 mg/L ranged from 15.2 to 2.3%. The method was unaffected by radical alteration of the albumin/globulin ratio, but there is a positive interference in the presence of hemoglobin, a suppression in the presence of bilirubin, and no effect from xanthochromia. The method was compared with the trichloroacetic acid method as performed on the Du Pont aca III, giving a correlation coefficient (r2) of 0.9693. The method is precise, accurate, rapid, and convenient.

  16. Review and evaluation of metallic TRU nuclear waste consolidation methods

    SciTech Connect

    Montgomery, D.R.; Nesbitt, J.F.

    1983-08-01

    The US Department of Energy established the Commercial Waste Treatment Program to develop, demonstrate, and deploy waste treatment technology. In this report, viable methods are identified that could consolidate the volume of metallic wastes generated in a fuel reprocessing facility. The purpose of this study is to identify, evaluate, and rate processes that have been or could be used to reduce the volume of contaminated/irradiated metallic waste streams and to produce an acceptable waste form in a safe and cost-effective process. A technical comparative evaluation of various consolidation processes was conducted, and these processes were rated as to the feasibility and cost of producing a viable product from a remotely operated radioactive process facility. Out of the wide variety of melting concepts and consolidation systems that might be applicable for consolidating metallic nuclear wastes, the following processes were selected for evaluation: inductoslag melting, rotating nonconsumable electrode melting, plasma arc melting, electroslag melting with two nonconsumable electrodes, vacuum coreless induction melting, and cold compaction. Each process was evaluated and rated on the criteria of complexity of process, state and type of development required, safety, process requirements, and facility requirements. It was concluded that the vacuum coreless induction melting process is the most viable process to consolidate nuclear metallic wastes. 11 references.

  17. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  18. Research on the Comparability of Multi-attribute Evaluation Methods for Academic Journals

    NASA Astrophysics Data System (ADS)

    Liping, Yu

    This paper first constructs a classification framework for multi-attribute evaluation methods for academic journals, and then discusses theoretically the comparability of the vast majority of non-linear evaluation methods and the majority of linear evaluation methods, taking the TOPSIS method as an example and using evaluation data on agricultural journals as a validation exercise. The analysis shows that the comparability of evaluation methods for academic journals deserves serious attention; that the evaluation objectives are closely related both to the choice of evaluation methods and to their comparability; that specialized journal-evaluation organizations should release their evaluation data, methods and results as fully as possible; and that only purely subjective evaluation methods are broadly comparable.
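
    Since the record takes TOPSIS as its worked example, a compact sketch of the standard TOPSIS steps is given below (Python). The journal indicator matrix, weights, and criterion directions are invented placeholders, not the paper's agricultural-journal data.

      import numpy as np

      def topsis(matrix, weights, benefit_mask):
          """Rank alternatives (rows) on criteria (columns) with the standard TOPSIS steps."""
          X = np.asarray(matrix, dtype=float)
          w = np.asarray(weights, dtype=float) / np.sum(weights)
          # 1. Vector-normalize each criterion, then apply the weights.
          V = w * X / np.sqrt((X ** 2).sum(axis=0))
          # 2. Ideal and anti-ideal solutions (max or min depending on criterion direction).
          ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
          anti  = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
          # 3. Distances to ideal/anti-ideal and relative closeness.
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          return d_neg / (d_pos + d_neg)

      # Hypothetical journal indicators: impact factor, citations, rejection rate (cost criterion).
      scores = topsis([[1.2, 500, 0.6], [0.8, 900, 0.4], [2.1, 300, 0.8]],
                      weights=[0.4, 0.4, 0.2],
                      benefit_mask=[True, True, False])
      print(np.argsort(-scores))  # journal ranking, best first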

  19. Quantitative evaluation of material degradation by Barkhausen noise method

    SciTech Connect

    Yamaguchi, Atsunori; Maeda, Noriyoshi; Sugibayashi, Takuya

    1995-12-01

    Evaluating the life of nuclear power plants becomes inevitable when extending the plant operating period. This paper applied a magnetic method using Barkhausen noise (BHN) to detect degradation by fatigue and thermal aging. Low alloy steel (SA 508 cl.2) was fatigued at strain amplitudes of ±1% and ±0.4%, and duplex stainless steel (SCS14A) was heated at 400°C for a long period (thermal aging). For the material degraded by thermal aging, BHN was measured and a good correlation between the magnetic properties and the absorption energy of the material was obtained. For the fatigued material, BHN was measured at each predetermined cycle, the effect of the stress or strain state of the material at the time of measurement was evaluated, and a good correlation between BHN and the fatigue damage ratio was obtained.

  20. Helicobacter pylori identification: a diagnostic/confirmatory method for evaluation.

    PubMed

    Mesquita, B; Gonçalves, M J; Pacheco, P; Lopes, J; Salazar, F; Relvas, M; Coelho, C; Pacheco, J J; Velazco, C

    2014-09-01

    The extragastric reservoir of Helicobacter pylori is probably the oral cavity. In order to evaluate the presence of this bacterium in patients with periodontitis and suspicious microbial cultures, saliva was collected from these subjects and from non-periodontitis subjects. PCRs targeting the 16S rRNA gene and an 860 bp specific region were performed, and the products were digested with the restriction enzyme DdeI. We observed that the PCR-RFLP approach increases the accuracy from 26.2% (16/61), found in the PCR-based results, to 42.6% (26/61), which is an excellent indicator for the establishment of this low-cost procedure as a diagnostic/confirmatory method for H. pylori evaluation.

  1. [Evaluation of using statistical methods in selected national medical journals].

    PubMed

    Sych, Z

    1996-01-01

    The paper evaluates how frequently statistical methods were applied in works published in six selected national medical journals in the years 1988-1992. The following journals were chosen for analysis: Klinika Oczna, Medycyna Pracy, Pediatria Polska, Polski Tygodnik Lekarski, Roczniki Państwowego Zakładu Higieny, and Zdrowie Publiczne. A number of works comparable to the average for the remaining journals was randomly selected from the respective volumes of Pol. Tyg. Lek. Works in which no statistical analysis was implemented were excluded, for both national and international publications, as were review papers, case reports, reviews of books, handbooks and monographs, reports from scientific congresses, and papers on historical topics. The number of works was determined for each volume. The analysis then established how a suitable sample was obtained in each study, distinguishing two categories: random and targeted selection. Attention was also paid to the presence of a control sample in individual works, and to the completeness of the sample characteristics, classified into three categories: complete, partial and lacking. In evaluating the analyzed works an effort was made to present the results in tables and figures (Tab. 1, 3). The rate of use of statistical methods was analyzed across the relevant volumes of the six selected national medical journals for 1988-1992, together with the number of works in which no statistical methods were used, and the frequency of use of the individual statistical methods was analyzed in the scrutinized works. Prominence was given to fundamental methods of descriptive statistics (measures of position, measures of dispersion) as well as

  2. Evaluation of pediatric manual wheelchair mobility using advanced biomechanical methods.

    PubMed

    Slavens, Brooke A; Schnorenberg, Alyssa J; Aurit, Christine M; Graf, Adam; Krzak, Joseph J; Reiners, Kathryn; Vogel, Lawrence C; Harris, Gerald F

    2015-01-01

    There is minimal research on upper extremity joint dynamics during pediatric wheelchair mobility despite the large number of children using manual wheelchairs. Special concern arises with the pediatric population, particularly in regard to the longer duration of wheelchair use, joint integrity, participation and community integration, and transitional care into adulthood. This study seeks to provide evaluation methods for characterizing the biomechanics of wheelchair use by children with spinal cord injury (SCI). Twelve subjects with SCI underwent motion analysis while they propelled their wheelchair at a self-selected speed and propulsion pattern. Upper extremity joint kinematics, forces, and moments were computed using inverse dynamics methods with our custom model. The glenohumeral joint displayed the largest average range of motion (ROM) at 47.1° in the sagittal plane and the largest average superiorly and anteriorly directed joint forces of 6.1% BW and 6.5% BW, respectively. The largest joint moments were 1.4% body weight times height (BW × H) of elbow flexion and 1.2% BW × H of glenohumeral joint extension. Pediatric manual wheelchair users demonstrating these high joint demands may be at risk for pain and upper limb injuries. These evaluation methods may be a useful tool for clinicians and therapists in pediatric wheelchair prescription and training.

  3. Perinatal program evaluations: methods, impacts, and future goals.

    PubMed

    Thomas, Suzanne D; Hudgins, Jodi L; Sutherland, Donald E; Ange, Brittany L; Mobley, Sandra C

    2015-07-01

    The objective of this methodology note is to examine perinatal program evaluation methods as they relate to the life course health development model (LCHD) and risk reduction for poor birth outcomes. We searched PubMed, CDC, ERIC, and a list from the Association of Maternal and Child Health Programs (AMCHP) to identify sources. We included reports from theory, methodology, program reports, and instruments, as well as reviews of Healthy Start Programs and home visiting. Because our review focused upon evaluation methods we did not include reports that described the Healthy Start Program. The LCHD model demonstrates the non-linear relationships among epigenetic factors and environmental interactions, intentionality or worldview within a values framework, health practices, and observed outcomes in a lifelong developmental health trajectory. The maternal epigenetic and social environment during fetal development sets the stage for the infant's lifelong developmental arc. The LCHD model provides a framework to study challenging maternal child health problems. Research that tracks the long term maternal-infant health developmental trajectory is facilitated by multiple, linked public record systems. Two instruments, the life skills progression instrument and the prenatal risk overview are theoretically consistent with the LCHD and can be adapted for local or population-based use. A figure is included to demonstrate a method of reducing interaction among variables by sample definition. Both in-place local programs and tests of best practices in community-based research are needed to reduce unacceptably high infant mortality. Studies that follow published reporting standards strengthen evidence.

  4. Systematic Comparative Evaluation of Methods for Investigating the TCRβ Repertoire

    PubMed Central

    Zhang, Ruifang; Du, Yuanping; Hong, Xueyu; Cao, Hongzhi; Su, Zheng; Wang, Changxi; Wu, Jinghua; Nie, Chao; Xu, Xun; Kristiansen, Karsten

    2016-01-01

    High-throughput sequencing has recently been applied to profile the high diversity of antibodyome/B cell receptors (BCRs) and T cell receptors (TCRs) among immune cells. To date, Multiplex PCR (MPCR) and 5’RACE are predominantly used to enrich rearranged BCRs and TCRs. Both approaches have advantages and disadvantages; however, a systematic evaluation and direct comparison of them would benefit researchers in the selection of the most suitable method. In this study, we used both pooled control plasmids and spiked-in cells to benchmark the MPCR bias. RNA from three healthy donors was subsequently processed with the two methods to perform a comparative evaluation of the TCR β chain sequences. Both approaches demonstrated high reproducibility (R2 = 0.9958 and 0.9878, respectively). No differences in gene usage were identified for most V/J genes (>60%), and an average of 52.03% of the CDR3 amino acid sequences overlapped. MPCR exhibited a certain degree of bias, in which the usage of several genes deviated from 5’RACE, and some V-J pairings were lost. In contrast, the yield of effective data was smaller for 5’RACE (11.25% less than with MPCR). Nevertheless, the methodological variability was smaller compared with the biological variability. Through direct comparison, these findings provide novel insights into the two experimental methods, which will prove to be valuable in immune repertoire research and its interpretation. PMID:27019362

  5. Evaluation of two sterility testing methods for intravenous admixtures.

    PubMed

    Condella, F; Eichelberger, K; Foote, L C; Griffin, R E

    1980-06-01

    The Addi-Chek Quality Control System (Millipore Corporation) and Ivex-2 Filterset (Abbott Laboratories) were evaluated to determine their effectiveness, applicability, and cost as part of a pharmacy quality-control program. Each method was tested using 50 solutions, 25 of which had been contaminated by inoculation with one of five micro-organisms; the other 25 solutions were used as controls. Aseptic technique was used, and procedures were carried out in a laminar air flow hood. Contaminated solutions were blinded from the person performing the tests. Addi-Chek detected contamination in all the inoculated solutions and in three of the uninoculated solutions. The latter may have been a result of adventitious contamination during the testing procedure. Ivex-2 detected contamination in 24 of the 25 inoculated solutions; no other contamination was found. The effectiveness of the methods in detecting low-level microbial contamination appears comparable. Both methods have been shown to be useful in the pharmacy setting, but Ivex-2 could be used to test for contamination when used as an in-line filter at the patient level. Ivex-2 is less expensive and warrants further evaluation in monitoring for microbial contamination during preparation and administration of intravenous solutions.

  6. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.
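
    As a loose illustration of the analogue idea mentioned above (not the specific SERVIR/NMME implementation), the sketch below matches a coarse monthly forecast anomaly to the most similar month in a historical archive and reuses that month's archived daily, fine-scale fields. The array shapes, the archive, and the RMSE matching criterion are synthetic placeholder assumptions.

      import numpy as np

      def analogue_downscale(forecast_anomaly, archive_monthly, archive_daily):
          """Pick the historical month whose coarse anomaly best matches the forecast,
          then return that month's archived daily high-resolution fields."""
          # forecast_anomaly: (ny, nx) coarse grid; archive_monthly: (n_months, ny, nx);
          # archive_daily: list of (n_days, NY, NX) high-resolution daily fields per month.
          errors = np.sqrt(((archive_monthly - forecast_anomaly) ** 2).mean(axis=(1, 2)))
          best = int(np.argmin(errors))          # index of the closest historical analogue
          return best, archive_daily[best]

      # Synthetic example: 20 archived months on a 4x5 coarse grid, 30 daily 16x20 fields each.
      rng = np.random.default_rng(0)
      monthly = rng.normal(size=(20, 4, 5))
      daily = [rng.normal(size=(30, 16, 20)) for _ in range(20)]
      idx, downscaled = analogue_downscale(rng.normal(size=(4, 5)), monthly, daily)
      print("analogue month:", idx, "daily field shape:", downscaled.shape)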

  7. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Roberts, J. Brent; Bosilovich, Michael; Lyon, Bradfield

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.

  8. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) becomes an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in the chemical engineering and mining industries. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver SediFoam is detailed. This solver is built on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in the simulations using up to O(10^7) particles.
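
    A highly simplified sketch of the kind of fluid-particle coupling loop that a CFD-DEM solver such as SediFoam orchestrates is shown below. This is schematic Python, not the actual OpenFOAM/LAMMPS interface; the linear drag law, time stepping, and data layout are placeholder assumptions made only to show the exchange of forces between the two sides.

      import numpy as np

      def drag_force(u_fluid, v_particle, beta=5.0):
          """Placeholder linear drag model; production solvers use e.g. Ergun- or Di Felice-type laws."""
          return beta * (u_fluid - v_particle)

      def cfd_dem_step(u_fluid, x, v, dt, mass=1e-3, gravity=np.array([0.0, 0.0, -9.81])):
          """One coupled step: fluid force on particles (DEM side), particle reaction on the fluid (CFD side)."""
          f = drag_force(u_fluid, v) + mass * gravity      # per-particle force
          v_new = v + dt * f / mass                        # DEM: integrate particle motion
          x_new = x + dt * v_new
          momentum_exchange = -f.sum(axis=0)               # CFD: source term fed back to the fluid cell
          return x_new, v_new, momentum_exchange

      # Toy loop: 100 particles in a single fluid "cell" with a uniform velocity.
      rng = np.random.default_rng(1)
      x = rng.uniform(size=(100, 3)); v = np.zeros((100, 3))
      u = np.array([0.1, 0.0, 0.0])
      for _ in range(1000):
          x, v, src = cfd_dem_step(u, x, v, dt=1e-4)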

  9. SveDem, the Swedish Dementia Registry – A Tool for Improving the Quality of Diagnostics, Treatment and Care of Dementia Patients in Clinical Practice

    PubMed Central

    Religa, Dorota; Fereshtehnejad, Seyed-Mohammad; Cermakova, Pavla; Edlund, Ann-Katrin; Garcia-Ptacek, Sara; Granqvist, Nicklas; Hallbäck, Anne; Kåwe, Kerstin; Farahmand, Bahman; Kilander, Lena; Mattsson, Ulla-Britt; Nägga, Katarina; Nordström, Peter; Wijk, Helle; Wimo, Anders; Winblad, Bengt; Eriksdotter, Maria

    2015-01-01

    Background The Swedish Dementia Registry (SveDem) was developed with the aim of improving the quality of diagnostic work-up, treatment and care of patients with dementia disorders in Sweden. Methods SveDem is an internet-based quality registry where several indicators can be followed over time. It includes information about the diagnostic work-up, medical treatment and community support (www.svedem.se). The patients are diagnosed and followed up yearly in specialist units, primary care centres or in nursing homes. Results The database was initiated in May 2007 and covers almost all of Sweden. There were 28 722 patients registered with a mean age of 79.3 years during 2007–2012. Each participating unit obtains continuous online statistics from its own registrations and they can be compared with regional and national data. A report from SveDem is published yearly to inform medical and care professionals as well as political and administrative decision-makers about the current quality of diagnostics, treatment and care of patients with dementia disorders in Sweden. Conclusion SveDem provides knowledge about current dementia care in Sweden and serves as a framework for ensuring the quality of diagnostics, treatment and care across the country. It also reflects changes in the quality of dementia care over time. Data from SveDem can be used to further develop the national guidelines for dementia and to generate new research hypotheses. PMID:25695768

  10. A method for evaluating horizontal well pumping tests.

    PubMed

    Langseth, David E; Smyth, Andrew H; May, James

    2004-01-01

    Predicting the future performance of horizontal wells under varying pumping conditions requires estimates of basic aquifer parameters, notably transmissivity and storativity. For vertical wells, there are well-established methods for estimating these parameters, typically based on either the recovery from induced head changes in a well or from the head response in observation wells to pumping in a test well. Comparable aquifer parameter estimation methods for horizontal wells have not been presented in the ground water literature. Formation parameter estimation methods based on measurements of pressure in horizontal wells have been presented in the petroleum industry literature, but these methods have limited applicability for ground water evaluation and are based on pressure measurements in only the horizontal well borehole, rather than in observation wells. This paper presents a simple and versatile method by which pumping test procedures developed for vertical wells can be applied to horizontal well pumping tests. The method presented here uses the principle of superposition to represent the horizontal well as a series of partially penetrating vertical wells. This concept is used to estimate a distance from an observation well at which a vertical well that has the same total pumping rate as the horizontal well will produce the same drawdown as the horizontal well. This equivalent distance may then be associated with an observation well for use in pumping test algorithms and type curves developed for vertical wells. The method is shown to produce good results for confined aquifers and unconfined aquifers in the absence of delayed yield response. For unconfined aquifers, the presence of delayed yield response increases the method error.
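
    A rough numerical sketch of the superposition idea described above: the horizontal well is represented by N vertical wells, the Theis drawdown at an observation point is summed over them, and an equivalent distance is found at which a single vertical well pumping the full rate gives the same drawdown. Parameter values are arbitrary and partial-penetration effects are omitted for brevity; this is an illustration of the concept, not the paper's full procedure.

      import numpy as np
      from scipy.special import exp1          # Theis well function W(u) = E1(u)
      from scipy.optimize import brentq

      T, S, Q, t = 500.0, 1e-4, 1000.0, 1.0   # transmissivity (m2/d), storativity, rate (m3/d), time (d)

      def theis_drawdown(r, q):
          u = r ** 2 * S / (4.0 * T * t)
          return q / (4.0 * np.pi * T) * exp1(u)

      # Horizontal well of length 200 m represented by 50 vertical wells pumping Q/50 each.
      well_xy = np.column_stack([np.linspace(-100.0, 100.0, 50), np.zeros(50)])
      obs = np.array([50.0, 80.0])
      r_i = np.linalg.norm(well_xy - obs, axis=1)
      s_horizontal = sum(theis_drawdown(r, Q / len(r_i)) for r in r_i)

      # Equivalent distance: a single well at rate Q producing the same drawdown at the observation well.
      r_eq = brentq(lambda r: theis_drawdown(r, Q) - s_horizontal, 1e-3, 1e5)
      print(f"superposed drawdown {s_horizontal:.3f} m, equivalent distance {r_eq:.1f} m")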

  11. Formaldehyde: a comparative evaluation of four monitoring methods

    SciTech Connect

    Coyne, L.B.; Cook, R.E.; Mann, J.R.; Bouyoucos, S.; McDonald, O.F.; Baldwin, C.L.

    1985-10-01

    The performances of four formaldehyde monitoring devices were compared in a series of laboratory and field experiments. The devices evaluated included the DuPont C-60 formaldehyde badge, the SKC impregnated charcoal tube, an impinger/polarographic method and the MDA Lion formaldemeter. The major evaluation parameters included: concentration range, effects of humidity, sample storage, air velocity, accuracy, precision, interferences from methanol, styrene, 1,3-butadiene, sulfur dioxide and dimethylamine. Based on favorable performances in the laboratory and field, each device was useful for monitoring formaldehyde in the industrial work environment; however, these devices were not evaluated for residential exposure assessment. The impinger/polarographic method had a sensitivity of 0.06 ppm, based on a 20-liter air sample volume, and accurately determined the short-term excursion limit (STEL). It was useful for area monitoring but was not very practical for time-weighted average (TWA) personal monitoring measurements. The DuPont badge had a sensitivity of 2.8 ppm-hr and accurately and simply determined TWA exposures. It was not sensitive enough to measure STEL exposures, however, and positive interferences resulted if 1,3-butadiene was present. The SKC impregnated charcoal tube measured both TWA and STEL concentrations and had a sensitivity of 0.06 ppm based on a 25-liter air sample volume. Lightweight and simple to use, the MDA Lion formaldemeter had a sensitivity of 0.2 ppm. It had the advantage of giving an instantaneous reading in the field; however, it must be used with caution because it responded to many interferences. The method of choice depended on the type of sampling required, field conditions encountered during sampling and an understanding of the limitations of each monitoring device.

  12. An Overview of the CapDEM Integrated Engineering Environment

    DTIC Science & Technology

    2005-07-01

    The same set of tools will be used during a DIGCap concept development and experimentation (CD&E) exercise to utilize the existing IEE. First, the IEE will be used to provide engineering data management for the CapDEM Concept Development and Experimentation (CD&E) exercise. This CD&E exercise will employ the same set of tools to demonstrate the application of capability engineering concepts to support CD&E.

  13. Evaluation of pozzolanic activity by the electric resistance measurement method

    SciTech Connect

    Tashiro, Chuichi; Ikeda, Ko (Dept. of Advanced Materials Science and Engineering); Inoue, Yoshihiro

    1994-01-01

    Measurements of electric resistance and of the amount of portlandite consumed were carried out under accelerated curing conditions on pastes of Fine Ceraments, fly ash, silica fume, kaolin, acid clay, zeolite and quartz activated with portlandite. The electric resistances of the reactive pozzolans showed sharp rises, except that of kaolin, whereas that of the inactive material, quartz, showed no sharp rise. The electric resistances are proportional to the consumption of portlandite, except for the fly ashes. The electric resistance measurement method combined with portlandite consumption measurement is useful for the rapid evaluation of pozzolanic activity.

  14. Evaluation of resin composite translucency by two different methods.

    PubMed

    Kim, D-H; Park, S-H

    2013-01-01

    The purpose of this study was 1) to compare the translucency of seven different types of composite materials and three different shade categories (dentin, enamel, and translucent) by determining the translucency parameter (TP) and light transmittance (%T) and 2) to evaluate the correlation between the results of the two evaluation methods. Three shades (dentin A3, enamel A3, and clear translucent) of seven composite materials (Beautifil II [BF], Denfil [DF], Empress Direct [ED], Estelite Sigma Quick [ES], Gradia Direct [GD], Premise [PR], and Tetric N-Ceram [TC]) from different manufacturers were screened in this study. Ten disk-shaped specimens (10 mm in diameter and 1 mm in thickness) were prepared for each material. For the TP measurements, the colors of each specimen were recorded according to the CIELAB color scale against white and black backgrounds with a colorimeter and used to calculate the TP value. For the %T measurements, the mean direct transmittance through the specimen in the range between 380 and 780 nm was recorded using a spectrometer and computer software. Two-way analysis of variance (ANOVA) tests were performed to compare the TP and %T for the composite materials and shade categories. One-way ANOVA and Tukey tests were used for the seven composite materials per shade category and the three shade categories per composite material. The correlation between the two evaluation methods was determined using the Pearson correlation coefficient. All statistical procedures were performed within a 95% confidence level. TP differed significantly by composite material within each shade category (p<0.05) and by shade category within each composite material (p<0.05). %T differed significantly by composite material within each shade category (p<0.05) and by shade categories within each composite material (p<0.05), except for BF and ES. For the two evaluation methods, TP and %T, were positively correlated (r=0.626, p<0.05). These methods showed strong
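
    For reference, the translucency parameter referred to above is conventionally computed as the CIELAB color difference between readings of the same specimen over black and white backgrounds; a minimal sketch with made-up readings follows.

      import math

      def translucency_parameter(lab_black, lab_white):
          """TP = CIELAB color difference of one specimen measured over black vs. white backings."""
          dL, da, db = (b - w for b, w in zip(lab_black, lab_white))
          return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

      # Hypothetical (L*, a*, b*) readings of a 1 mm composite disk.
      tp = translucency_parameter(lab_black=(62.1, 1.8, 14.2), lab_white=(68.4, 1.2, 17.9))
      print(f"TP = {tp:.2f}")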

  15. Further evaluation of the constrained least squares electromagnetic compensation method

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1991-01-01

    Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.

  16. Evaluation of Uranium Measurements in Water by Various Methods - 13571

    SciTech Connect

    Tucker, Brian J.; Workman, Stephen M.

    2013-07-01

    In December 2000, EPA amended its drinking water regulations for radionuclides by adding a Maximum Contaminant Level (MCL) for uranium (the so-called MCL Rule)[1] of 30 micrograms per liter (μg/L). The MCL Rule also included MCL goals of zero for uranium and other radionuclides. Many radioactively contaminated sites must test uranium in wastewater and groundwater to comply with the MCL rule as well as local publicly owned treatment works discharge limitations. This paper addresses the relative sensitivity, accuracy, precision, cost and comparability of two EPA-approved methods for detection of total uranium: inductively coupled plasma/mass spectrometry (ICP-MS) and alpha spectrometry. Both methods are capable of measuring the individual uranium isotopes U-234, U-235, and U-238 and both methods have been deemed acceptable by EPA. However, U-238 is by far the primary contributor to the mass-based ICP-MS measurement, especially for naturally-occurring uranium, which contains 99.2745% U-238. An evaluation shall be performed relative to the regulatory requirement promulgated by EPA in December 2000. Data will be garnered from various client sample results measured by ALS Laboratory in Fort Collins, CO. Data shall include method detection limits (MDL), minimum detectable activities (MDA), means and trends in laboratory control sample results, performance evaluation data for all methods, and replicate results. In addition, a comparison will be made of sample analyses results obtained from both alpha spectrometry and the screening method Kinetic Phosphorescence Analysis (KPA) performed at the U.S. Army Corps of Engineers (USACE) FUSRAP Maywood Laboratory (UFML). Many uranium measurements occur in laboratories that only perform radiological analysis. This work is important because it shows that uranium can be measured in radiological as well as stable chemistry laboratories and it provides several criteria as a basis for comparison of two uranium test methods. This data will

  17. Open-source MFIX-DEM software for gas-solids flows: Part II Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  18. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas–solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  19. Region-growing segmentation to automatically delimit synthetic drumlins in 'real' DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Smith, Mike; Hillier, John

    2013-04-01

    Mapping or 'delimiting' landforms is one of geomorphology's primary tools. Computer-based techniques, such as terrain segmentation, may potentially provide terrain units that are close to the size and shape of landforms. Whether terrain units represent landforms heavily depends on the segmentation algorithm, its settings and the type of underlying land-surface parameters (LSPs). We assess a widely used region-growing technique, i.e. the multiresolution segmentation (MRS) algorithm as implemented in object-based image analysis software, for delimiting drumlins. Supervised testing was based on five synthetic DEMs that included the same set of perfectly known drumlins at different locations. This, for the first time, removes subjectivity from the reference data. Five LSPs were tested, and four variants were computed for each using two pre- and post-processing options. The automated method (1) employs MRS to partition the input LSP into 200 ever coarser terrain unit patterns, (2) identifies the spatially best matching terrain unit for each reference drumlin, and (3) computes four accuracy metrics for quantifying the areal match between delimited and reference drumlins. MRS performed best on LSPs that are regional, derived from a decluttered DEM and then normalized. Median scale parameters (SPs) for segments best delineating drumlins were relatively stable for the same LSP, but varied significantly between LSPs. Larger drumlins were generally delimited at higher SPs. MRS indicated high robustness against variations in the location and distribution of drumlins.
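
    The record above uses a commercial multiresolution segmentation; purely as a loose illustration of the general region-growing principle (not the MRS algorithm itself), the sketch below grows a segment outward from a seed cell on a land-surface-parameter raster while neighbouring cells stay within a homogeneity threshold. The synthetic raster and threshold are invented for the example.

      import numpy as np
      from collections import deque

      def grow_region(lsp, seed, tol=0.5):
          """Simple seeded region growing on a 2-D land-surface-parameter raster."""
          visited = np.zeros(lsp.shape, dtype=bool)
          region = [seed]
          visited[seed] = True
          queue = deque([seed])
          while queue:
              r, c = queue.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if (0 <= nr < lsp.shape[0] and 0 <= nc < lsp.shape[1]
                          and not visited[nr, nc]
                          and abs(lsp[nr, nc] - lsp[seed]) <= tol):
                      visited[nr, nc] = True
                      region.append((nr, nc))
                      queue.append((nr, nc))
          return region

      # Synthetic raster with one smooth "drumlin-like" bump on a noisy background.
      yy, xx = np.mgrid[0:50, 0:80]
      lsp = 3.0 * np.exp(-(((yy - 25) / 8.0) ** 2 + ((xx - 40) / 15.0) ** 2)) + np.random.normal(0, 0.05, (50, 80))
      print(len(grow_region(lsp, seed=(25, 40), tol=1.0)), "cells in the grown segment")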

  20. Evolution of methods for evaluating the occurrence of floods

    USGS Publications Warehouse

    Benson, M.A.

    1962-01-01

    A brief summary is given of the history of methods of expressing flood potentialities, proceeding from simple flood formulas to statistical methods of flood-frequency analysis on a regional basis. Current techniques are described and evaluated. Long-term flood records in the United States show no justification for the adoption of a single type of theoretical distribution of floods. The significance and predictive values of flood-frequency relations are considered. Because of the length of flood records available and the interdependence of flood events within a region, the probable long-term average magnitudes of floods of a given recurrence interval are uncertain. However, if the magnitudes defined by the records available are accepted, the relative effects of drainage-basin characteristics and climatic variables can be determined with a reasonable degree of assurance.

  1. Non-destructive evaluation method employing dielectric electrostatic ultrasonic transducers

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Cantrell, Jr., John H. (Inventor)

    2003-01-01

    An acoustic nonlinearity parameter (β) measurement method and system for Non-Destructive Evaluation (NDE) of materials and structural members novelly employs a loosely mounted dielectric electrostatic ultrasonic transducer (DEUT) to receive and convert ultrasonic energy into an electrical signal which can be analyzed to determine the β of the test material. The dielectric material is ferroelectric with a high dielectric constant ε. A computer-controlled measurement system coupled to the DEUT contains an excitation signal generator section and a measurement and analysis section. As a result, the DEUT measures the absolute particle displacement amplitudes in the test material, leading to derivation of the nonlinearity parameter (β) without the costly, low field reliability methods of the prior art.
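
    For context, the acoustic nonlinearity parameter named in this record is commonly extracted from the absolute fundamental and second-harmonic displacement amplitudes of a longitudinal wave; one frequently quoted form, stated here from general knowledge rather than from the patent (conventions vary with the factor definitions and geometry), is

      \beta \;=\; \frac{8\,A_{2}}{k^{2}\,x\,A_{1}^{2}}

    where A1 and A2 are the fundamental and second-harmonic displacement amplitudes, k the wavenumber, and x the propagation distance. This is consistent with the abstract's emphasis on measuring absolute particle displacement amplitudes.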

  2. Infrared non-destructive evaluation method and apparatus

    DOEpatents

    Baleine, Erwan; Erwan, James F; Lee, Ching-Pang; Stinelli, Stephanie

    2014-10-21

    A method of nondestructive evaluation and related system. The method includes arranging a test piece (14) having an internal passage (18) and an external surface (15) and a thermal calibrator (12) within a field of view (42) of an infrared sensor (44); generating a flow (16) of fluid characterized by a fluid temperature; exposing the test piece internal passage (18) and the thermal calibrator (12) to fluid from the flow (16); capturing infrared emission information of the test piece external surface (15) and of the thermal calibrator (12) simultaneously using the infrared sensor (44), wherein the test piece infrared emission information includes emission intensity information, and wherein the thermal calibrator infrared emission information includes a reference emission intensity associated with the fluid temperature; and normalizing the test piece emission intensity information against the reference emission intensity.

  3. Method for evaluating kinetic stability of petroleum disperse systems

    SciTech Connect

    Marushkin, A.B.; Kurochkin, A.K.; Gimaev, R.N.

    1988-01-01

    A method was developed for varying the ratio of paraffinic/naphthenic and aromatic hydrocarbons in petroleum disperse systems by introducing a model dispersion medium and for noting the dynamics of change of petroleum disperse system composition in a triangular diagram. Data for the construction of the system phase equilibrium curves were derived and the phase equilibrium of atmospheric resids, cracked tars, and pyrolysis tars was evaluated. Disruption of phase equilibrium upon introducing a precipitant led to coagulation of the asphaltenes and their separation from the deasphalted product phase in precipitate form. The method can be used in determining the quantity of precipitant required to achieve the necessary depth of deasphalting or to determine deasphalting depth with a given precipitant consumption.

  4. Effects of lidar point density on bare earth extraction and DEM creation

    NASA Astrophysics Data System (ADS)

    Puetz, Angela M.; Olsen, R. Chris; Anderson, Brian

    2009-05-01

    Data density has a crucial impact on the accuracy of Digital Elevation Models (DEMs). In this study, DEMs were created from a high point-density LIDAR dataset using the bare earth extraction module in Quick Terrain Modeler. Lower point-density LIDAR collects were simulated by randomly selecting points from the original dataset at a series of decreasing percentages. The DEMs created from the lower resolution datasets are compared to the original DEM. Results show a decrease in DEM accuracy as the resolution of the LIDAR dataset is reduced. Some analysis is made of the types of errors encountered in the lower resolution DEMs. It is also noted that the percentage of points classified as bare earth decreases as the resolution of the LIDAR dataset is reduced.
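
    A rough sketch of the resolution experiment described above: randomly retain a decreasing percentage of points, grid both point sets to a DEM by cell-mean binning, and report the RMSE against the full-density DEM. The gridding scheme and synthetic point cloud are placeholder choices, not the Quick Terrain Modeler bare-earth workflow.

      import numpy as np

      def grid_dem(points, cell=1.0, shape=(100, 100)):
          """Bin lidar points (x, y, z) to a DEM of mean elevations per cell (NaN where empty)."""
          dem_sum = np.zeros(shape); dem_n = np.zeros(shape)
          ix = np.clip((points[:, 0] / cell).astype(int), 0, shape[1] - 1)
          iy = np.clip((points[:, 1] / cell).astype(int), 0, shape[0] - 1)
          np.add.at(dem_sum, (iy, ix), points[:, 2])
          np.add.at(dem_n, (iy, ix), 1)
          with np.errstate(invalid="ignore"):
              return dem_sum / dem_n

      rng = np.random.default_rng(2)
      pts = np.column_stack([rng.uniform(0, 100, 200000),
                             rng.uniform(0, 100, 200000),
                             rng.normal(10, 1, 200000)])
      dem_full = grid_dem(pts)
      for pct in (50, 25, 10, 5):
          sub = pts[rng.random(len(pts)) < pct / 100.0]   # simulate a lower-density collect
          rmse = np.sqrt(np.nanmean((grid_dem(sub) - dem_full) ** 2))
          print(f"{pct:>3d}% of points -> RMSE vs full-density DEM: {rmse:.3f} m")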

  5. Malaria and Economic Evaluation Methods: Challenges and Opportunities.

    PubMed

    Drake, Tom L; Lubell, Yoel

    2017-01-19

    There is a growing evidence base on the cost effectiveness of malaria interventions. However, certain characteristics of malaria decision problems present a challenge to the application of healthcare economic evaluation methods. This paper identifies five such challenges. The complexities of (i) declining incidence and cost effectiveness in the context of an elimination campaign; (ii) international aid and its effect on resource constraints; and (iii) supranational priority setting, all affect how health economists might use a cost-effectiveness threshold. Consensus and guidance on how to determine and interpret cost-effectiveness thresholds in the context of internationally financed elimination campaigns is greatly needed. (iv) Malaria interventions are often complementary and evaluations may need to construct intervention bundles to represent relevant policy positions as sets of mutually exclusive alternatives. (v) Geographic targeting is a key aspect of malaria policy making that is only beginning to be addressed in economic evaluations. An approach to budget-based geographic resource allocation is described in an accompanying paper in this issue and addresses some of these methodological challenges.

  6. Full-waveform and discrete-return lidar in salt marsh environments: An assessment of biophysical parameters, vertical uncertainty, and nonparametric DEM correction

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.

    High-resolution and high-accuracy elevation data sets of coastal salt marsh environments are necessary to support restoration and other management initiatives, such as adaptation to sea level rise. Lidar (light detection and ranging) data may serve this need by enabling efficient acquisition of detailed elevation data from an airborne platform. However, previous research has revealed that lidar data tend to have lower vertical accuracy (i.e., greater uncertainty) in salt marshes than in other environments. The increase in vertical uncertainty in lidar data of salt marshes can be attributed primarily to low, dense-growing salt marsh vegetation. Unfortunately, this increased vertical uncertainty often renders lidar-derived digital elevation models (DEM) ineffective for analysis of topographic features controlling tidal inundation frequency and ecology. This study aims to address these challenges by providing a detailed assessment of the factors influencing lidar-derived elevation uncertainty in marshes. The information gained from this assessment is then used to: 1) test the ability to predict marsh vegetation biophysical parameters from lidar-derived metrics, and 2) develop a method for improving salt marsh DEM accuracy. Discrete-return and full-waveform lidar, along with RTK GNSS (Real-time Kinematic Global Navigation Satellite System) reference data, were acquired for four salt marsh systems characterized by four major taxa (Spartina alterniflora, Spartina patens, Distichlis spicata, and Salicornia spp.) on Cape Cod, Massachusetts. These data were used to: 1) develop an innovative combination of full-waveform lidar and field methods to assess the vertical distribution of aboveground biomass as well as its light blocking properties; 2) investigate lidar elevation bias and standard deviation using varying interpolation and filtering methods; 3) evaluate the effects of seasonality (temporal differences between peak growth and senescent conditions) using lidar data

  7. Total energy evaluation in the Strutinsky shell correction method.

    PubMed

    Zhou, Baojing; Wang, Yan Alexander

    2007-08-14

    We analyze the total energy evaluation in the Strutinsky shell correction method (SCM) of Ullmo et al. [Phys. Rev. B 63, 125339 (2001)], where a series expansion of the total energy is developed based on perturbation theory. In agreement with Yannouleas and Landman [Phys. Rev. B 48, 8376 (1993)], we also identify the first-order SCM result to be the Harris functional [Phys. Rev. B 31, 1770 (1985)]. Further, we find that the second-order correction of the SCM turns out to be the second-order error of the Harris functional, which involves the a priori unknown exact Kohn-Sham (KS) density, rho(KS)(r). Interestingly, the approximation of rho(KS)(r) by rho(out)(r), the output density of the SCM calculation, in the evaluation of the second-order correction leads to the Hohenberg-Kohn-Sham functional. By invoking an auxiliary system in the framework of orbital-free density functional theory, Ullmo et al. designed a scheme to approximate rho(KS)(r), but with several drawbacks. An alternative is designed to utilize the optimal density from a high-quality density mixing method to approximate rho(KS)(r). Our new scheme allows more accurate and complex kinetic energy density functionals and nonlocal pseudopotentials to be employed in the SCM. The efficiency of our new scheme is demonstrated in atomistic calculations on the cubic diamond Si and face-centered-cubic Ag systems.
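
    For readers less familiar with the Harris functional identified above as the first-order SCM result, its commonly quoted form (written here from general knowledge, with the ion-ion term omitted; not transcribed from the paper) is

      E_{\mathrm{Harris}}[\rho_{\mathrm{in}}]
        \;=\; \sum_{i}^{\mathrm{occ}} \varepsilon_i
        \;-\; \tfrac{1}{2}\iint \frac{\rho_{\mathrm{in}}(\mathbf{r})\,\rho_{\mathrm{in}}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}\,d\mathbf{r}'
        \;+\; E_{\mathrm{xc}}[\rho_{\mathrm{in}}]
        \;-\; \int v_{\mathrm{xc}}([\rho_{\mathrm{in}}];\mathbf{r})\,\rho_{\mathrm{in}}(\mathbf{r})\,d\mathbf{r}

    where the ε_i are eigenvalues of the Kohn-Sham Hamiltonian constructed from the input density ρ_in; because no self-consistency is required, it is a natural first-order term in the SCM expansion discussed above.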

  8. Single well tracer method to evaluate enhanced recovery

    DOEpatents

    Sheely, Jr., Clyde Q.; Baldwin, Jr., David E.

    1978-01-01

    Data useful to evaluate the effectiveness of or to design an enhanced recovery process (the recovery process involving mobilizing and moving hydrocarbons through a hydrocarbon-bearing subterranean formation from an injection well to a production well by injecting a mobilizing fluid into the injection well) are obtained by a process which comprises sequentially: determining hydrocarbon saturation in the formation in a volume in the formation near a well bore penetrating the formation, injecting sufficient of the mobilizing fluid to mobilize and move hydrocarbons from a volume in the formation near the well bore penetrating the formation, and determining by the single well tracer method a hydrocarbon saturation profile in a volume from which hydrocarbons are moved. The single well tracer method employed is disclosed by U.S. Pat. No. 3,623,842. The process is useful to evaluate surfactant floods, water floods, polymer floods, CO2 floods, caustic floods, micellar floods, and the like in the reservoir in much less time at greatly reduced costs, compared to conventional multi-well pilot tests.

  9. Evaluation method for determining management priorities for special case waste

    SciTech Connect

    Kudera, D.E.; Wickland, C.E.

    1990-08-01

    The U.S. Department of Energy (DOE) Radioactive Waste Technical Support Program (TSP) began the Special Case Waste (SCW) Inventory and Characterization Project in April 1989. The collection of data has been completed and a final draft report, Department of Energy Special Case Radioactive Waste Inventory and Characterization Data Report (DOE/LLW-96), was submitted in May 1990. A second final draft report, Supplemental Data Report to the Department of Energy Special Case Radioactive Waste Inventory and Characterization Data Report (DOE/LLW-95), containing additional and more detailed data and graphical presentations, was completed in July 1990. These two reports contain details on the special case waste categories and summaries of the total volumes and curies associated with each category of waste. It is anticipated that some version or combination of these two reports will be included in the final version of this report, which will describe an evaluation method for determining management priorities for special case waste. Preliminary analysis of the inventory data indicates that approximately 1,000,000 m³ of special case waste exist in the DOE system with possible insufficient treatment/storage/disposal capability or capacity. To help DOE prioritize the actions required to manage this large volume of special case waste, an evaluation method is required.

  10. Evaluation of Anomaly Detection Method Based on Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke

    The number of threats on the Internet is rapidly increasing, and anomaly detection has become of increasing importance. High-speed backbone traffic is particularly affected, but its analysis is a complicated task due to the amount of data, the lack of payload data, the asymmetric routing and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through their singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.
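
    A toy sketch of the "traffic as pictures" idea: packets are binned into a time-versus-destination-port histogram, and cells whose counts deviate strongly from the picture's typical intensity are flagged. The z-score threshold and synthetic traffic are illustrative assumptions, not the pattern-recognition technique of the paper.

      import numpy as np

      def traffic_picture_anomalies(timestamps, ports, t_bins=60, p_bins=256, z_thresh=6.0):
          """Bin (time, port) pairs into a 2-D picture and flag unusually dense cells."""
          hist, t_edges, p_edges = np.histogram2d(timestamps, ports, bins=(t_bins, p_bins))
          mu, sigma = hist.mean(), hist.std() + 1e-9
          flagged = np.argwhere((hist - mu) / sigma > z_thresh)
          return [(t_edges[i], p_edges[j], int(hist[i, j])) for i, j in flagged]

      # Synthetic background traffic plus a short burst toward one port (a crude "anomaly").
      rng = np.random.default_rng(3)
      t = rng.uniform(0, 3600, 50000); p = rng.integers(0, 65536, 50000)
      t = np.concatenate([t, rng.uniform(1800, 1830, 3000)])
      p = np.concatenate([p, np.full(3000, 445)])
      for t0, port0, count in traffic_picture_anomalies(t, p):
          print(f"suspicious cell from t={t0:.0f}s, port bin starting {port0:.0f}, {count} packets")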

  11. Quality Evaluation of Pork with Various Freezing and Thawing Methods.

    PubMed

    Ku, Su Kyung; Jeong, Ji Yun; Park, Jong Dae; Jeon, Ki Hong; Kim, Eun Mi; Kim, Young Boong

    2014-01-01

    In this study, the influence of various thawing methods on the physicochemical and sensory quality characteristics of electro-magnetically and air-blast frozen pork was examined. The packaged pork samples, which were frozen by air blast freezing at -45℃ or electro-magnetic freezing at -55℃, were thawed using 4 different methods: refrigeration (4±1℃), room temperature (RT, 25℃), cold water (15℃), and microwave (2450 MHz). Analyses were carried out to determine the drip and cooking loss, water holding capacity (WHC), moisture content and sensory evaluation. Frozen pork thawed in a microwave showed relatively less thawing loss (0.63-1.24%) than the other thawing methods (0.68-1.38%). The cooking loss of ham after electro-magnetic freezing was 37.4% with microwave thawing, compared with 32.9% with refrigeration, 36.5% at RT, and 37.2% with cold water. The thawing of samples frozen by electro-magnetic freezing showed no significant differences between the methods used, while the moisture content was higher in belly thawed by microwave (62.0%) after electro-magnetic freezing than refrigeration (54.8%), RT (61.3%), and cold water (61.1%). The highest overall acceptability was shown for microwave thawing after electro-magnetic freezing but there were no significant differences compared to that of the other samples.

  12. Evaluation of mercury speciation by EPA (Draft) Method 29

    SciTech Connect

    Laudal, D.L.; Heidt, M.K.; Nott, B.

    1995-11-01

    The 1990 Clean Air Act Amendments require that the U.S. Environmental Protection Agency (EPA) assess the health risks associated with mercury emissions. Also, the law requires a separate assessment of health risks posed by the emission of 189 trace chemicals (including mercury) for electric utility steam-generating units. In order to conduct a meaningful assessment of health and environmental effects, we must have, among other things, a reliable and accurate method to measure mercury emissions. In addition, the rate of mercury deposition and the type of control strategies used may depend upon the type of mercury emitted (i.e., whether it is in the oxidized or elemental form). It has been speculated that EPA (Draft) Method 29 can speciate mercury by selective absorption; however, this claim has yet to be proven. The Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) have contracted with the Energy & Environmental Research Center (EERC) at the University of North Dakota to evaluate EPA (Draft) Method 29 at the pilot-scale level. The objective of the work is to determine whether EPA (Draft) Method 29 can reliably quantify and speciate mercury in the flue gas from coal-fired boilers.

  13. Evaluation of a photographic method to measure dental angulation

    PubMed Central

    Amorim, Jordana Rodrigues; Macedo, Diogo de Vasconcelos; Normando, David

    2014-01-01

    Objective To analyze the reliability and reproducibility of a simplified method for analysis of dental angulation using digital photos of plaster dental casts. Methods Standardized digital photographs of plaster casts were taken and subsequently imported into an angle-reading graphics program in order to obtain the measurements. The procedures were repeated to evaluate the random error and to analyze reproducibility through intraclass correlation. The sample consisted of 12 individuals (six male and six female) with full permanent dentition and no previous orthodontic treatment. The analyses were carried out bilaterally, generating 24 measurements. Results The random error showed a variation of 0.77 to 2.55 degrees for tooth angulation. The statistical analysis revealed that the method presents excellent reproducibility (p < 0.0001) for all teeth, except for the upper premolars, for which it was nonetheless statistically significant (p < 0.001). Conclusion The proposed method is sufficiently reliable to justify its use in scientific research as well as in clinical practice. PMID:24945518

  14. Advances in nondestructive evaluation methods for inspection of refractory concretes

    SciTech Connect

    Ellingson, W. A.

    1980-01-01

    Refractory concrete linings are essential to protect steel pressure boundaries from high-temperature aggressive erosive/corrosive environments. Castable refractory concretes have been gaining more acceptance as information about their performance increases. Economic factors, however, have begun to impose high demands on the reliability of refractory materials. Advanced nondestructive evaluation methods are being developed to assist the refractory user. Radiographic techniques, thermography, acoustic-emission detection, and interferometry have been shown to yield information on the structural status of refractory concrete. Methods using 60Co radiation sources are capable of yielding measurements of refractory wear rate as well as images of cracks and/or voids in pre- and post-fired refractory linings up to 60 cm thick. Thermographic (infrared) images serve as a qualitative indicator of refractory spalling, but quantitative measurements are difficult to obtain from surface-temperature mapping. Acoustic emission has been shown to be a qualitative indicator of thermomechanical degradation of thick panels of 50 and 95% Al2O3 during initial heating and cooling at rates of 100 to 220°C/h. Laser interferometry methods have been shown to be capable of complete mappings of refractory lining thicknesses. This paper will present results obtained from laboratory and field applications of these methods in petrochemical, steel, and coal-conversion plants.

  15. Evaluating survey instruments and methods in a steep channel

    NASA Astrophysics Data System (ADS)

    Scott, Daniel N.; Brogan, Daniel J.; Lininger, Katherine B.; Schook, Derek M.; Daugherty, Ellen E.; Sparacino, Matthew S.; Patton, Annette I.

    2016-11-01

    Methods for surveying and analyzing channel bed topography commonly lack a rigorous characterization of their appropriateness for project objectives. We compare four survey methods: a hand level, two different methods of surveying with a laser rangefinder, and a real-time kinematic GNSS (RTK-GNSS) to explore their accuracy in determining channel bed slope and roughness for a study reach in a small, dry, steep channel. Additionally, we evaluate the variability among four operators for each survey technique. Two methods of calculating reach slope were computed: a regression on the channel profile and a calculation using only survey endpoints. Using data from the RTK-GNSS as our accuracy reference, the hand level and two-person laser rangefinder surveying systems performed with high accuracy (< 5% error in estimating slope, < 10% error in estimating roughness), while the one-person laser rangefinder survey system performed with considerably lower accuracy (up to 54% error in roughness and slope). Variability between operators was found to be very low (coefficients of variation ranged from 0.001 to 0.046) for all survey systems except the one-person laser rangefinder system, suggesting that survey data collected by different operators can be validly compared. Due to reach-scale concavity, calculating slope using a regression produced significantly different values than those obtained by using only survey endpoints, suggesting that caution must be taken in choosing the most appropriate method of calculating slope for a given project objective. We present recommendations for choosing appropriate survey and analysis methods to accomplish various surveying objectives.
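
    The two slope definitions compared above are easy to make concrete; the sketch below computes both from a surveyed longitudinal profile (synthetic station and elevation data) so the difference introduced by reach-scale concavity is visible.

      import numpy as np

      def reach_slopes(distance_m, elevation_m):
          """Return (regression slope, endpoint slope) for a surveyed longitudinal profile."""
          slope_regression = -np.polyfit(distance_m, elevation_m, 1)[0]   # sign flipped: downstream drop
          slope_endpoints = (elevation_m[0] - elevation_m[-1]) / (distance_m[-1] - distance_m[0])
          return slope_regression, slope_endpoints

      # Synthetic concave-up profile: steeper at the upstream end than downstream.
      x = np.linspace(0.0, 100.0, 21)
      z = 10.0 * np.exp(-x / 60.0)
      s_reg, s_end = reach_slopes(x, z)
      print(f"regression slope {s_reg:.3f}, endpoint slope {s_end:.3f}")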

  16. Precise Determination of the Baseline Between the TerraSAR-X and TanDEM-X Satellites

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Rothacher, Markus; Michalak, Grzegorz; Moon, Yongjin

    TerraSAR-X, launched on June 15, 2007, and TanDEM-X, to be launched in September 2009, both carry the Tracking, Occultation and Ranging (TOR) category A payload instrument package. The TOR consists of a high-precision dual-frequency GPS receiver, called Integrated GPS Occultation Receiver (IGOR), for precise orbit determination and atmospheric sounding and a Laser retro-reflector (LRR) serving as target for the global Satellite Laser Ranging (SLR) ground station network. The TOR is supplied by the GeoForschungsZentrum Potsdam (GFZ) Germany, and the Center for Space Research (CSR), Austin, Texas. The objective of the German/US collaboration is twofold: provision of atmospheric profiles for use in numerical weather predictions and climate studies from the occultation data and precision SAR data processing based on precise orbits and atmospheric products. For the scientific objectives of the TanDEM-X mission, i.e., bi-static SAR together with TerraSAR-X, the dual-frequency GPS receiver is of vital importance for the millimeter-level determination of the baseline or distance between the two spacecraft. The paper discusses the feasibility of generating millimeter baselines by the example of GRACE, where for validation the distance between the two GRACE satellites is directly available from the micrometer-level intersatellite link measurements. The distance of the GRACE satellites is some 200 km, whereas the distance of the TerraSAR-X/TanDEM-X formation will be some 200 meters. The proposed approach is therefore also applied to a simulation of the foreseen TerraSAR-X/TanDEM-X formation. The effects of varying space environmental conditions, possible phase center variations, multipath, and the varying center of mass of the spacecraft are evaluated and discussed.

  17. Development of nondestructive evaluation methods for structural ceramics

    SciTech Connect

    Ellingson, W.A.; Koehl, R.D.; Stuckey, J.B.; Sun, J.G.; Engel, H.P.; Smith, R.G.

    1997-06-01

    Development of nondestructive evaluation (NDE) methods for application to fossil energy systems continues in three areas: (a) mapping axial and radial density gradients in hot gas filters, (b) characterization of the quality of continuous fiber ceramic matrix composite (CFCC) joints and (c) characterization and detection of defects in thermal barrier coatings. In this work, X-ray computed tomographic imaging was further developed and used to map variations in the axial and radial density of two full length (2.3-m) hot gas filters. The two filters differed in through wall density because of the thickness of the coating on the continuous fibers. Differences in axial and through wall density were clearly detected. Through transmission infrared imaging with a highly sensitive focal plane array camera was used to assess joint quality in two sets of SiC/SiC CFCC joints. High frame rate data capture suggests that the infrared imaging method holds potential for the characterization of CFCC joints. Work to develop NDE methods that can be used to evaluate electron beam physical vapor deposited coatings with platinum-aluminide (Pt-Al) bonds was undertaken. Coatings of zirconia with thicknesses of 125 µm (0.005 in.), 190 µm (0.0075 in.), and 254 µm (0.010 in.) with a Pt-Al bond coat on Rene N5 Ni-based superalloy were studied by infrared imaging. Currently, it appears that thickness variation, as well as thermal properties, can be assessed by infrared technology.

  18. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    In recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate these high-resolution simulations, and new spatial verification methods have been proposed to address those limitations. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using such feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - the Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of these simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
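
    For readers unfamiliar with spatial verification, the sketch below computes one widely used neighbourhood score, the fractions skill score (FSS) of Roberts and Lean (2008); it illustrates the general class of spatial methods rather than the specific feature-based technique applied in the study, and the fields and threshold are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill_score(forecast, observed, threshold, window):
        """Fractions skill score for one exceedance threshold and neighbourhood size."""
        f_frac = uniform_filter((forecast >= threshold).astype(float), size=window, mode="constant")
        o_frac = uniform_filter((observed >= threshold).astype(float), size=window, mode="constant")
        mse = np.mean((f_frac - o_frac) ** 2)
        mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, size=(200, 200))   # synthetic radar rain-rate field
    fcst = np.roll(obs, shift=5, axis=1)         # "forecast" displaced by 5 grid cells
    print(round(fractions_skill_score(fcst, obs, threshold=5.0, window=11), 3))
    ```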

  19. Methods for evaluating cervical range of motion in trauma settings

    PubMed Central

    2012-01-01

    Immobilisation of the cervical spine is a common procedure following traumatic injury. This is often precautionary as the actual incidence of spinal injury is low. Nonetheless, stabilisation of the head and neck is an important part of pre-hospital care due to the catastrophic damage that may follow if further unrestricted movement occurs in the presence of an unstable spinal injury. Currently available collars are limited by the potential for inadequate immobilisation and complications caused by pressure on the patient’s skin, restricted airway access and compression of the jugular vein. Alternative approaches to cervical spine immobilisation are being considered, and the investigation of these new methods requires a standardised approach to the evaluation of neck movement. This review summarises the research methods and scientific technology that have been used to assess and measure cervical range of motion, and which are likely to underpin future research in this field. A systematic search of international literature was conducted to evaluate the methodologies used to assess the extremes of movement that can be achieved in six domains. 34 papers were included in the review. These studies used a range of methodologies, but study quality was generally low. Laboratory investigations and biomechanical studies have gradually given way to methods that more accurately reflect the real-life situations in which cervical spine immobilisation occurs. Latterly, new approaches using virtual reality and simulation have been developed. Coupled with modern electromagnetic tracking technology this has considerable potential for effective application in future research. However, use of these technologies in real life settings can be problematic and more research is needed. PMID:22856507

  20. An evaluation of teaching methods in the introductory physics classroom

    NASA Astrophysics Data System (ADS)

    Savage, Lauren Michelle Williams

    The introductory physics mechanics course at the University of North Carolina at Charlotte has a history of relatively high DFW rates. In 2011, the course was redesigned from the traditional lecture format to the inverted classroom format (flipped). This format inverts the classroom by introducing material in a video assigned as homework while the instructor conducts problem solving activities and guides discussions during the regular meetings. This format focuses on student-centered learning and is more interactive and engaging. To evaluate the effectiveness of the new method, final exam data over the past 10 years was mined and the pass rates examined. A normalization condition was developed to evaluate semesters equally. The two teaching methods were compared using a grade distribution across multiple semesters. Students in the inverted class outperformed those in the traditional class: "A"s increased by 22% and "B"s increased by 38%. The final exam pass rate increased by 12% under the inverted classroom approach. The same analysis was used to compare the written and online final exam formats. Surprisingly, no students scored "A"s on the online final. However, the percent of "B"s increased by 136%. Combining documented best practices from a literature review with personal observations of student performance and attitudes from first hand classroom experience as a teaching assistant in both teaching methods, reasons are given to support the continued use of the inverted classroom approach as well as the online final. Finally, specific recommendations are given to improve the course structure where weaknesses have been identified.

  1. Methods for evaluating cervical range of motion in trauma settings.

    PubMed

    Voss, Sarah; Page, Michael; Benger, Jonathan

    2012-08-02

    Immobilisation of the cervical spine is a common procedure following traumatic injury. This is often precautionary as the actual incidence of spinal injury is low. Nonetheless, stabilisation of the head and neck is an important part of pre-hospital care due to the catastrophic damage that may follow if further unrestricted movement occurs in the presence of an unstable spinal injury. Currently available collars are limited by the potential for inadequate immobilisation and complications caused by pressure on the patient's skin, restricted airway access and compression of the jugular vein. Alternative approaches to cervical spine immobilisation are being considered, and the investigation of these new methods requires a standardised approach to the evaluation of neck movement. This review summarises the research methods and scientific technology that have been used to assess and measure cervical range of motion, and which are likely to underpin future research in this field. A systematic search of international literature was conducted to evaluate the methodologies used to assess the extremes of movement that can be achieved in six domains. 34 papers were included in the review. These studies used a range of methodologies, but study quality was generally low. Laboratory investigations and biomechanical studies have gradually given way to methods that more accurately reflect the real-life situations in which cervical spine immobilisation occurs. Latterly, new approaches using virtual reality and simulation have been developed. Coupled with modern electromagnetic tracking technology this has considerable potential for effective application in future research. However, use of these technologies in real life settings can be problematic and more research is needed.

  2. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals; log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
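
    The reported regression can be applied directly; in the sketch below the retardation-factor expression R = 1 + rho_b*Kd/theta (with Kd = Koc*f_oc) is the standard linear-sorption form and, together with the soil properties and example compounds, is an assumption of this illustration rather than a quotation from the paper.

    ```python
    def log_koc_from_kow(log_kow):
        """Koc regression reported in the abstract (applicable for log Kow ~1.7-7.0)."""
        return 0.903 * log_kow + 0.094

    def retardation_factor(log_kow, f_oc, bulk_density_kg_per_l, porosity):
        """Standard linear-sorption retardation factor R = 1 + (rho_b * Kd) / theta."""
        koc = 10.0 ** log_koc_from_kow(log_kow)       # L/kg organic carbon
        kd = koc * f_oc                               # L/kg soil
        return 1.0 + bulk_density_kg_per_l * kd / porosity

    # Hypothetical aquifer: 0.1% organic carbon, rho_b = 1.6 kg/L, porosity = 0.35
    for label, log_kow in [("moderately hydrophobic (log Kow 2.1)", 2.1),
                           ("strongly hydrophobic (log Kow 5.2)", 5.2)]:
        print(label, round(retardation_factor(log_kow, 0.001, 1.6, 0.35), 1))
    ```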

  3. A seamless, high-resolution digital elevation model (DEM) of the north-central California coast

    USGS Publications Warehouse

    Foxgrover, Amy C.; Barnard, Patrick L.

    2012-01-01

    A seamless, 2-meter resolution digital elevation model (DEM) of the north-central California coast has been created from the most recent high-resolution bathymetric and topographic datasets available. The DEM extends approximately 150 kilometers along the California coastline, from Half Moon Bay north to Bodega Head. Coverage extends inland to an elevation of +20 meters and offshore to at least the 3 nautical mile limit of state waters. This report describes the procedures of DEM construction, details the input data sources, and provides the DEM for download in both ESRI Arc ASCII and GeoTIFF file formats with accompanying metadata.

  4. DEM-based Approaches for the Identification of Flood Prone Areas

    NASA Astrophysics Data System (ADS)

    Samela, Caterina; Manfreda, Salvatore; Nardi, Fernando; Grimaldi, Salvatore; Roth, Giorgio; Sole, Aurelia

    2013-04-01

    The remarkable number of inundations that have caused, in recent decades, thousands of deaths and huge economic losses testifies to the extreme vulnerability of many countries to flood hazard. As a matter of fact, human activities are often developed in floodplains, creating conditions of extremely high risk. Terrain morphology plays an important role in understanding, modelling and analyzing the hydraulic behaviour of flood waves. Research during the last 10 years has shown that the delineation of flood prone areas can be carried out using fast methods that rely on basin geomorphologic features. In fact, the availability of new technologies to measure surface elevation (e.g., GPS, SAR, SAR interferometry, RADAR and LASER altimetry) has given a strong impulse to the development of Digital Elevation Model (DEM)-based approaches. The identification of the dominant topographic controls on the flood inundation process is a critical research question that we try to tackle with a comparative analysis of several techniques. We reviewed four different approaches for the morphological characterization of a river basin with the aim of describing their performance and identifying their range of applicability. In particular, we explored the potential of the following tools. 1) The hydrogeomorphic method proposed by Nardi et al. (2006), which defines the flood prone areas according to the water level in the river network through the hydrogeomorphic theory. 2) The linear binary classifier proposed by Degiorgis et al. (2012), which allows distinguishing flood-prone areas using two features related to the location of the site under exam with respect to the nearest hazard source. The two features, proposed in the study, are the length of the path that hydrologically connects the location under exam to the nearest element of the drainage network and the difference in elevation between the cell under exam and the final point of the same path. 3) The method by
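
    A minimal sketch of the second approach (a linear binary classifier on the two DEM-derived features) is given below; the feature values, weights and threshold are hypothetical, and a real application would calibrate them against mapped flood extents as in Degiorgis et al. (2012).

    ```python
    import numpy as np

    # Per-cell features derived from a DEM (synthetic values):
    # D: length of the hydrological path to the nearest drainage-network cell (m)
    # H: elevation of the cell above the end point of that path (m)
    D = np.array([120.0, 40.0, 800.0, 15.0, 300.0])
    H = np.array([2.5, 0.8, 22.0, 0.3, 6.0])

    # Linear binary classifier: flag a cell as flood prone when a linear combination
    # of the (log-transformed) features falls below a calibrated threshold.
    w_d, w_h, threshold = 0.4, 1.0, 3.0      # hypothetical calibration
    score = w_d * np.log1p(D) + w_h * np.log1p(H)
    flood_prone = score < threshold

    print(list(zip(score.round(2), flood_prone)))
    ```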

  5. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
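
    A minimal sketch of the fixed depth-mapping idea described above (scene depth linearly remapped into a predefined display range) is shown below; the comfort-range numbers are illustrative assumptions, and the paper's dynamic method additionally adjusts this mapping from shot to shot.

    ```python
    def map_scene_depth_to_display(z_scene, scene_near, scene_far, display_near, display_far):
        """Linearly remap scene depth into a predefined perceived-depth range on the display."""
        t = (z_scene - scene_near) / (scene_far - scene_near)
        t = min(max(t, 0.0), 1.0)            # clamp depths outside the shot's range
        return display_near + t * (display_far - display_near)

    # Hypothetical shot: scene depths of 2-50 m mapped into a -0.05 m ... +0.10 m
    # perceived-depth budget around the screen plane (negative = in front of the screen).
    for z in (2.0, 10.0, 50.0):
        print(z, round(map_scene_depth_to_display(z, 2.0, 50.0, -0.05, 0.10), 3))
    ```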

  6. Dredged Material Evaluations: Review of Zooplankton Toxicity Test Methods for Marine Water Quality Evaluations

    DTIC Science & Technology

    2016-09-01

    Toxicity Test Methods for Marine Water Quality Evaluations, by Alan J. Kennedy, Guilherme Lotufo, Jennifer G. Laird, and J. Daniel Farrar. PURPOSE: The first objective of this Dredging Operations and Engineering (DOER) technical note is to summarize currently available estuarine and marine water ... suitable for unrestricted open water placement, beneficial use, or if management strategies are necessary. Open water placement of DM into inland

  7. Using TanDEM data for forest height estimation and change detection

    NASA Astrophysics Data System (ADS)

    Thiele, Antje; Dubois, Clémence; Boldt, Markus; Hinz, Stefan

    2016-10-01

    Mapping of forest coverage and forest changes has become an increasingly important issue due to deforestation and forest degradation. Moreover, the estimation of related indicators such as carbon reduction, biomass and wood capacity is of great interest for industry and policy makers. As forest height is an important contributing parameter for these indicators, the region-wide estimation of forest heights is an essential step. This article investigates the accuracy potential of forest height estimation that can be reached by the current configuration of the two SAR satellites TerraSAR-X and TanDEM-X. Depending on the chosen acquisition mode and flight geometry, products of different quality can be achieved. Eight InSAR data sets, showing different characteristics in geometric resolution, baseline length, and acquisition time, are processed and analyzed. To enable a thorough evaluation of the estimated heights, first-pulse LIDAR point clouds and aerial ortho-images are used as reference data.

  8. Evaluation of fuzzy relation method for medical decision support.

    PubMed

    Wagholikar, Kavishwar; Mangrulkar, Sanjeev; Deshpande, Ashok; Sundararajan, Vijayraghavan

    2012-02-01

    The potential of computer-based tools to assist physicians in medical decision making was envisaged five decades ago. Apart from factors like usability, integration with workflow and natural language processing, the lack of decision accuracy of these tools has hindered their utility. Hence, research to develop accurate algorithms for medical decision support tools is required. Pioneering research in the last two decades has demonstrated the utility of fuzzy set theory in the medical domain. Recently, Wagholikar and Deshpande proposed a fuzzy relation based (FR) method for medical diagnosis. In their case studies for heart and infectious diseases, the FR method was found to be better than naive Bayes (NB). However, the datasets in their studies were small and included only categorical symptoms. Hence, more evaluative studies are required for drawing general conclusions. In the present paper, we compare the classification performance of FR with NB for a variety of medical datasets. Our results indicate that the FR method is useful for classification problems in the medical domain, and that FR is marginally better than NB. However, the performance of FR is significantly better for datasets having a high proportion of unknown attribute values. Such datasets occur in problems involving linguistic information, where FR can be particularly useful. Our empirical study will benefit medical researchers in the choice of algorithms for decision support tools.
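
    As background, the sketch below shows a generic fuzzy-relation classifier based on max-min composition; it illustrates the family of methods the FR approach belongs to, not the exact algorithm of Wagholikar and Deshpande, and the symptom-disease memberships are invented.

    ```python
    import numpy as np

    # Rows: symptoms, columns: diseases; R[i, j] = expert-assigned association strength.
    symptoms = ["fever", "cough", "rash"]
    diseases = ["disease_A", "disease_B"]
    R = np.array([[0.9, 0.6],
                  [0.8, 0.3],
                  [0.1, 0.9]])

    # Patient's fuzzy symptom memberships (unknown symptoms can simply be set to 0).
    patient = np.array([0.7, 0.9, 0.0])

    # Max-min composition: mu_disease(j) = max_i min(patient[i], R[i, j])
    membership = np.max(np.minimum(patient[:, None], R), axis=0)
    print(dict(zip(diseases, membership.round(2))))  # highest membership suggests the diagnosis
    ```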

  9. Contamination of nanoparticles by endotoxin: evaluation of different test methods

    PubMed Central

    2012-01-01

    Background Nanomaterials can be contaminated with endotoxin (lipopolysaccharides, LPS) during production or handling. In this study, we searched for a convenient in vitro method to evaluate endotoxin contamination in nanoparticle samples. We assessed the reliability of the commonly used limulus amebocyte lysate (LAL) assay and an alternative method based on toll-like receptor (TLR) 4 reporter cells when applied with particles (TiO2, Ag, CaCO3 and SiO2), or after extraction of the endotoxin as described in the ISO norm 29701. Results Our results indicate that the gel clot LAL assay is easily disturbed in the presence of nanoparticles; and that the endotoxin extraction protocol is not suitable at high particle concentrations. The chromogenic-based LAL endotoxin detection systems (chromogenic LAL assay and Endosafe-PTS), and the TLR4 reporter cells were not significantly perturbed. Conclusion We demonstrated that nanoparticles can interfere with endotoxin detection systems indicating that a convenient test method must be chosen before assessing endotoxin contamination in nanoparticle samples. PMID:23140310

  10. Mechanistic Based DEM Simulation of Particle Attrition in a Jet Cup

    SciTech Connect

    Xu, Wei; DeCroix, David; Sun, Xin

    2014-02-01

    The attrition of particles is a major industrial concern in many fluidization systems as it can have undesired effects on the product quality and on the reliable operation of process equipment. Therefore, to accommodate the screening and selection of catalysts for a specific process in fluidized beds, risers, or cyclone applications, their attrition propensity is usually estimated through jet cup attrition testing, where the test material is subjected to high gas velocities in a jet cup. However, this method is far from perfect despite its popularity, largely due to its inconsistency in different testing set-ups. In order to better understand the jet cup testing results as well as their sensitivity to different operating conditions, a coupled computational fluid dynamics (CFD) - discrete element method (DEM) model has been developed in the current study to investigate the particle attrition in a jet cup and its dependence on various factors, e.g., jet velocity, initial particle size, particle density, and apparatus geometry.

  11. DEM generation from digital photographs using computer vision: Accuracy and application

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.

    2012-12-01

    Data for detailed digital elevation models (DEMs) are usually collected by expensive laser-based techniques, or by photogrammetric methods that require expertise and specialist software. However, recent advances in computer vision research now permit 3D models to be automatically derived from unordered collections of photographs, and offer the potential for significantly cheaper and quicker DEM production. Here, we review the advantages and limitations of this approach and, using imagery of the summit craters of Piton de la Fournaise, compare the precisions obtained with those from formal close range photogrammetry. The surface reconstruction process is based on a combination of structure-from-motion and multi-view stereo algorithms (SfM-MVS). Using multiple photographs of a scene taken from different positions with a consumer-grade camera, dense point clouds (millions of points) can be derived. Processing is carried out by automated 'reconstruction pipeline' software downloadable from the internet. Unlike traditional photogrammetric approaches, the initial reconstruction process does not require the identification of any control points or initial camera calibration and is carried out with little or no operator intervention. However, such reconstructions are initially un-scaled and un-oriented so additional software has been developed to permit georeferencing. Although this step requires the presence of some control points or features within the scene, it does not have the relatively strict image acquisition and control requirements of traditional photogrammetry. For accuracy, and to allow error analysis, georeferencing observations are made within the image set, rather than requiring feature matching within the point cloud. Application of SfM-MVS is demonstrated using images taken from a microlight aircraft over the summit of Piton de la Fournaise volcano (courtesy of B. van Wyk de Vries). 133 images, collected with a Canon EOS D60 and 20 mm fixed focus lens, were

  12. Turbidity Current Transport using DEM and FEM: a Hybrid Lagrangian-Eulerian Approach

    NASA Astrophysics Data System (ADS)

    Alves, J. L.; Guevara, N. O., Jr.; Silva, C. E.; Alves, F. T.; Gazoni, L. C.; Coutinho, A.; Camata, J.; Elias, R. N.; Paraizo, P.

    2013-05-01

    In this work we describe a contribution to the study of turbidity transport at scales smaller than those of two-fluid models (TFM). The intent of the work, part of a large-scale simulation project, is to assess local, small-scale parameters and their upscaling. The hybrid model is based on a Lagrangian-Eulerian approach belonging to the class of so-called Unresolved Discrete Particle Methods (UDPM). In this approach, a Lagrangian description is used for the particle system, employing the Discrete Element Method (DEM), while a fixed Eulerian mesh is used for the fluid phase, modeled by the finite element method (FEM). Fluid motion is governed by the Navier-Stokes equations, which are solved by an appropriate FEM implementation. Closure equations are used to compute drag and lift forces over the particles in the DEM framework, and volume-averaged momentum sink terms are included in the fluid equations. The resulting coupled DEM-FEM model is integrated in time with a subcycling scheme. This scheme was applied in the simulation of a sedimentation basin to investigate flow and deposition features of the suspension at a finer scale. For this purpose a submodel of the basin was generated, and variables were mapped back and forth between the Eulerian (finite element) model and the Lagrangian (discrete element) model during the subcycled integration of the hybrid model. References: [1] Hoomans, B.P.B., Kuipers, J.A.M., van Swaaij, W.P.M., "Granular dynamics simulation of segregation phenomena in bubbling gas-fluidised beds", Powder Technology, V 109, Issues 1-3, 3 April 2000, pp 41-48; [2] Cho, S.H., Choi, H.G., Yoo, J.Y., "Direct numerical simulation of fluid flow laden with many particles", International Journal of Multiphase Flow, V 31, Issue 4, April 2005, pp 435-451. Figure captions: sedimentation basin, sectioning the turbidity plume in the Eulerian FE model for setting up the discrete particle model; sedimentation basin, section of the turbidity plume.
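
    The subcycled coupling can be outlined schematically; the runnable sketch below keeps only the pattern (one frozen fluid step spanning several particle substeps with a Stokes-drag closure, accumulating the reaction that would feed the FEM momentum sink) and is not the authors' implementation.

    ```python
    import numpy as np

    def subcycled_particle_update(u_fluid_at_p, v_p, dt_fluid, n_sub, tau_p):
        """Subcycle particle velocities within one (frozen) fluid step.
        Returns updated velocities and the time-integrated drag, which a coupled solver
        would volume-average into the fluid momentum equations as a sink term."""
        dt_dem = dt_fluid / n_sub
        sink = np.zeros_like(v_p)
        for _ in range(n_sub):
            drag = (u_fluid_at_p - v_p) / tau_p   # drag acceleration from a closure relation
            v_p = v_p + dt_dem * drag             # DEM substep (contact forces omitted here)
            sink += drag * dt_dem                 # accumulate reaction on the fluid
        return v_p, sink

    u_at_particles = np.full(5, 0.02)             # fluid velocity sampled at particle positions (m/s)
    v_particles = np.zeros(5)                     # suspended particles initially at rest
    v_new, momentum_sink = subcycled_particle_update(u_at_particles, v_particles,
                                                     dt_fluid=1e-3, n_sub=10, tau_p=5e-3)
    print(v_new.round(4), momentum_sink.round(5))
    ```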

  13. Open-source MFIX-DEM software for gas-solids flows: Part I verification studies

    SciTech Connect

    Garg, Rahul; Galvin, Janine; Li, Tingwen; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware, it is now possible to perform large simulations of granular flows using the Discrete Element Method (DEM). As a result, solids are increasingly treated in a discrete Lagrangian fashion in the gas solids flow community. In this paper, the open-source MFIX-DEM software is described that can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles. This method is referred to as the continuum discrete method (CDM) to clearly make a distinction between the ambiguity of using a Lagrangian or Eulerian reference for either continuum or discrete formulations. This freely available CDM code for gas solids flows can accelerate the research in computational gas solids flows and establish a baseline that can lead to better closures for the continuum modeling (or traditionally referred to as two fluid model) of gas solids flows. In this paper, a series of verification cases is employed which tests the different aspects of the code in a systematic fashion by exploring specific physics in gas solids flows before exercising the fully coupled solution on simple canonical problems. It is critical to have an extensively verified code as the physics is complex with highly-nonlinear coupling, and it is difficult to ascertain the accuracy of the results without rigorous verification. These series of verification tests set the stage not only for rigorous validation studies (performed in part II of this paper) but also serve as a procedure for testing any new developments that couple continuum and discrete formulations for gas solids flows.
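
    As a concrete illustration of the kind of single-physics verification case such codes are checked against (not necessarily one of the MFIX-DEM cases), a particle settling under gravity with Stokes drag can be integrated in time and compared with the analytical terminal velocity; the particle and fluid properties below are illustrative.

    ```python
    # Particle and fluid properties (illustrative values, SI units)
    d_p, rho_p, rho_f, mu_f, g = 50e-6, 2500.0, 1.2, 1.8e-5, 9.81

    tau_p = rho_p * d_p ** 2 / (18.0 * mu_f)          # Stokes response time (s)
    v_terminal = tau_p * g * (1.0 - rho_f / rho_p)    # analytical terminal velocity (m/s)

    # Explicit integration of dv/dt = g*(1 - rho_f/rho_p) - v/tau_p
    dt, v = 1e-4, 0.0
    for _ in range(int(1.0 / dt)):                    # integrate for 1 s (>> tau_p)
        v += dt * (g * (1.0 - rho_f / rho_p) - v / tau_p)

    print(f"numerical: {v:.4f} m/s   analytical: {v_terminal:.4f} m/s")
    ```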

  14. Open-source MFIX-DEM software for gas-solids flows: Part 1 - Verification studies

    SciTech Connect

    Garg, Rahul; Galvin, Janine; Li, Tingwen; Pannala, Sreekanth

    2012-04-01

    With rapid advancements in computer hardware, it is now possible to perform large simulations of granular flows using the Discrete Element Method (DEM). As a result, solids are increasingly treated in a discrete Lagrangian fashion in the gas–solids flow community. In this paper, the open-source MFIX-DEM software is described that can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles. This method is referred to as the continuum discrete method (CDM) to clearly make a distinction between the ambiguity of using a Lagrangian or Eulerian reference for either continuum or discrete formulations. This freely available CDM code for gas–solids flows can accelerate the research in computational gas–solids flows and establish a baseline that can lead to better closures for the continuum modeling (or traditionally referred to as two fluid model) of gas–solids flows. In this paper, a series of verification cases is employed which tests the different aspects of the code in a systematic fashion by exploring specific physics in gas–solids flows before exercising the fully coupled solution on simple canonical problems. It is critical to have an extensively verified code as the physics is complex with highly-nonlinear coupling, and it is difficult to ascertain the accuracy of the results without rigorous verification. These series of verification tests set the stage not only for rigorous validation studies (performed in part II of this paper) but also serve as a procedure for testing any new developments that couple continuum and discrete formulations for gas–solids flows.

  16. Evaluation of heart rate changes: electrocardiographic versus photoplethysmographic methods

    NASA Technical Reports Server (NTRS)

    Low, P. A.; Opfer-Gehrking, T. L.; Zimmerman, I. R.; O'Brien, P. C.

    1997-01-01

    The heart rate (HR) variation to forced deep breathing (HRDB) and to the Valsalva maneuver (Valsalva ratio; VR) are the two most widely used tests of cardiovagal function in human subjects. The HR is derived from a continuously running electrocardiographic (ECG) recording. Recently, HR derived from the arterial waveform became available on the Finapres device (FinapHR), but its ability to detect rapid changes in HR remains uncertain. We therefore evaluated HRDB and VR derived from FinapHR using ECG-derived HR (ECGHR) recordings as the standard. We also compared the averaged HR on Finapres (Finapav) with beat-to-beat Finapres (FinapBB) values. Studies were undertaken in 12 subjects with large HR variations: age, 34.5 +/- 9.3 (SD) years; six males and six females. FinapBB values were superimposable upon ECGHR for both HRDB and VR. In contrast, Finapav failed to follow ECGHR for HRDB and followed ECGHR with a lag for the VR. To evaluate statistically how closely FinapHR approximated ECGHR, we undertook regression analysis, using mean values for each subject. To compare the two methods, we evaluated the significance of the difference between test and standard values. For HRDB, FinapBB reproducibly recorded HR (R2 = 0.998), and was significantly (p = 0.001) better than Finapav (R2 = 0.616; p < 0.001). For VR, FinapBB generated a VR that was not significantly different from the correct values, while Finapav generated a value that was slightly but consistently lower than the correct values (p < 0.001). We conclude that FinapHR reliably records HR variations in the beat-to-beat mode for cardiovascular HR tests.
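
    Both indices can be computed directly from beat-to-beat R-R intervals. The sketch below uses the standard definitions (HRDB as the mean max-min heart-rate difference per breathing cycle, VR as the maximum HR during the manoeuvre divided by the minimum HR in the recovery period) with synthetic data; it is an illustration, not the study's analysis code.

    ```python
    import numpy as np

    def heart_rate(rr_intervals_s):
        """Instantaneous heart rate (beats/min) from beat-to-beat R-R intervals (s)."""
        return 60.0 / np.asarray(rr_intervals_s)

    def hr_deep_breathing(hr_per_breath):
        """HRDB: mean of (max HR - min HR) over the breathing cycles supplied."""
        return float(np.mean([np.max(b) - np.min(b) for b in hr_per_breath]))

    def valsalva_ratio(hr_during_strain, hr_recovery):
        """VR: maximum HR during the manoeuvre / minimum HR in the recovery period."""
        return float(np.max(hr_during_strain) / np.min(hr_recovery))

    # Synthetic example: three deep-breathing cycles and one Valsalva manoeuvre
    breaths = [heart_rate([0.85, 0.75, 0.68, 0.80, 0.95]) for _ in range(3)]
    strain = heart_rate([0.60, 0.55, 0.52])       # tachycardia during strain
    recovery = heart_rate([0.95, 1.05, 1.10])     # bradycardia after release
    print(round(hr_deep_breathing(breaths), 1), round(valsalva_ratio(strain, recovery), 2))
    ```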

  17. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    PubMed Central

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain, using a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis. PMID:27379267
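
    For orientation, the sketch below shows the crisp TOPSIS ranking step that underlies such methods (the fuzzy/Neofuzzy extension in the paper additionally handles linguistic ratings); the decision matrix, weights and criterion directions are invented.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Classic TOPSIS: closeness of each alternative to the ideal solution.
        matrix  -- alternatives x criteria scores
        weights -- criterion weights (summing to 1)
        benefit -- True for benefit criteria (higher is better), False for cost criteria
        """
        norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector-normalise each criterion
        v = norm * weights                                   # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.linalg.norm(v - ideal, axis=1)
        d_minus = np.linalg.norm(v - anti, axis=1)
        return d_minus / (d_plus + d_minus)                  # higher closeness is better

    # Three suppliers scored on price (cost), quality and an environmental criterion (benefit)
    scores = np.array([[620.0, 7.5, 6.0],
                       [540.0, 6.8, 8.5],
                       [700.0, 9.0, 7.0]])
    closeness = topsis(scores, weights=np.array([0.4, 0.35, 0.25]),
                       benefit=np.array([False, True, True]))
    print(closeness.round(3), "-> best supplier index:", int(np.argmax(closeness)))
    ```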

  18. Method for non-destructive evaluation of ceramic coatings

    SciTech Connect

    Peterson, Kristen A.; Rosen, Elias P.; Jordan, Eric H.; Shahbazmohamadi, Sina; Vakhtin, Andrei B.

    2016-11-08

    A method for evaluating the condition of a ceramic coating deposited on a substrate comprising illuminating the ceramic coating with light, measuring the intensity of light returned from the ceramic coating as function of depth in the coating and transverse position on the coating, and analyzing the measured light intensities to obtain one or more of intensity of the light returned from the exposed coating surface relative to the intensity of light returned from the coating/substrate interface, intensity of the light returned from the coating/substrate interface relative to the intensity of light returned from the bulk of the ceramic coating, determination of roughness at the exposed surface of the ceramic coating, and determination of roughness of the interface between the ceramic coating and underlying bond coat or substrate.

  19. Cost Evaluation Method of Wind Turbine Generation System

    NASA Astrophysics Data System (ADS)

    Ichita, Hajime; Takahashi, Rion; Tamura, Junji; Kimura, Mamoru; Ichinose, Masaya; Futami, Moto-O.; Ide, Kazumasa

    In recent years, many wind turbine generation systems (WTGSs) have been installed in many countries out of concern for the global environment and CO2 emissions. However, wind turbine generator output and annual energy production depend on the wind characteristics of each area and on the kind of WTGS. The authors' previous paper presented analyses of the annual electrical energy production and capacity factor of WTGS for each area with different wind data. This paper presents a method to calculate the cost of each WTGS component, such as the drive train system, generator and other equipment, and to evaluate the generation cost obtained from the WTGS cost and the annual electrical energy production. Based on these results, the optimal kind of WTGS can be determined for each installation area from an economical point of view.
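
    In its simplest form, the generation-cost evaluation reduces to annualised cost divided by annual energy production; the sketch below uses a standard capital-recovery factor and invented component costs, not the paper's cost model.

    ```python
    def annualised_cost(capital_cost, discount_rate, lifetime_years, annual_om):
        """Capital recovery factor times capital cost, plus annual O&M."""
        r, n = discount_rate, lifetime_years
        crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
        return capital_cost * crf + annual_om

    def generation_cost(capital_cost, annual_om, aep_kwh, discount_rate=0.05, lifetime_years=20):
        """Generation cost per kWh = annualised cost / annual energy production."""
        return annualised_cost(capital_cost, discount_rate, lifetime_years, annual_om) / aep_kwh

    # Hypothetical 2 MW turbine: summed component costs, 30% capacity factor
    capital = 2_400_000.0            # drive train + generator + other equipment (currency units)
    om = 60_000.0                    # annual operation and maintenance
    aep = 2_000.0 * 8760.0 * 0.30    # annual electrical energy production (kWh)
    print(round(generation_cost(capital, om, aep), 4), "per kWh")
    ```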

  20. Quantitative evaluation of solar wind time-shifting methods

    NASA Astrophysics Data System (ADS)

    Cameron, Taylor; Jackel, Brian

    2016-11-01

    Nine years of solar wind dynamic pressure and geosynchronous magnetic field data are used for a large-scale statistical comparison of uncertainties associated with several different algorithms for propagating solar wind measurements. The MVAB-0 scheme is best overall, performing on average a minute more accurately than a flat time-shift. We also evaluate the accuracy of these time-shifting methods as a function of solar wind magnetic field orientation. We find that all time-shifting algorithms perform significantly worse (>5 min) due to geometric effects when the solar wind magnetic field is radial (parallel or antiparallel to the Earth-Sun line). Finally, we present an empirical scheme that performs almost as well as MVAB-0 on average and slightly better than MVAB-0 for intervals with nonradial B.
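
    For context, the flat time shift used as the baseline in this comparison is simply the convection time of the solar wind over the monitor-to-target separation along the Earth-Sun line; the positions and speed below are illustrative.

    ```python
    def flat_time_shift(x_monitor_km, x_target_km, v_sw_km_s):
        """Flat time shift: delay assuming the solar wind convects unchanged along the
        Earth-Sun (GSE X) direction at the measured bulk speed."""
        return (x_monitor_km - x_target_km) / v_sw_km_s   # seconds

    # Monitor near L1 (~1.5e6 km sunward of Earth), target near geosynchronous orbit
    delay = flat_time_shift(x_monitor_km=1.5e6, x_target_km=6.6 * 6371.0, v_sw_km_s=400.0)
    print(f"{delay / 60.0:.1f} minutes")
    ```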

  1. An evaluation of methods for scaling aircraft noise perception

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1971-01-01

    One hundred and twenty recorded sounds, including jets, turboprops, piston-engined aircraft and helicopters, were rated by a panel of subjects in a paired comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidth. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band level summation rule. A simple correction is proposed which will enable these scales to properly account for the experimental observations.

  2. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    PubMed

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain, using a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.

  3. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
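
    The dimensionless parameter referred to above is commonly written in the beam-shaping literature as beta = 2*sqrt(2)*pi*r0*y0/(f*lambda), with r0 the input beam radius, y0 the target half-width, f the focal length and lambda the wavelength; that form, and the numbers below, are assumptions of this sketch rather than text from the patent.

    ```python
    import math

    def beta(r0, y0, focal_length, wavelength):
        """Commonly quoted dimensionless parameter for Fourier-transform Gaussian-to-flattop
        shaping; larger beta generally indicates a better-behaved, steeper-edged flattop."""
        return 2.0 * math.sqrt(2.0) * math.pi * r0 * y0 / (focal_length * wavelength)

    # Hypothetical system: 4 mm beam radius, 100 um target half-width, 200 mm lens, 532 nm light
    print(round(beta(r0=4e-3, y0=100e-6, focal_length=0.2, wavelength=532e-9), 1))
    ```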

  4. Gaussian beam profile shaping apparatus, method therefore and evaluation thereof

    DOEpatents

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-01-26

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system. 27 figs.

  5. An effective method for incoherent scattering radar's detecting ability evaluation

    NASA Astrophysics Data System (ADS)

    Lu, Ziqing; Yao, Ming; Deng, Xiaohua

    2016-06-01

    Ionospheric incoherent scatter radar (ISR), which is used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and an antenna aperture on the order of one hundred meters. The purpose of this detection technique is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the ionospheric plasma echoes. Because the echoes are very weak owing to the small radar cross section of the target, estimating the detecting ability is instructive and meaningful for ISR system design. In this paper, we evaluate the detecting ability through the signal-to-noise ratio (SNR). The soft-target radar equation is adapted to ISR, and data from the International Reference Ionosphere model are used to simulate the SNR of the echoes; the simulated SNR is then compared with measurements from the European Incoherent Scatter Scientific Association and the Advanced Modular Incoherent Scatter Radar. The simulation results show good consistency with the measured SNR. For ISR, this paper presents the first comparison between calculated SNR and radar measurements; the detecting ability can be improved by increasing the SNR. This effective method for evaluating ISR detecting ability provides a basis for radar system design.

  6. Study Methods to Characterize and Implement Thermography Nondestructive Evaluation (NDE)

    NASA Technical Reports Server (NTRS)

    Walker, James L.

    1998-01-01

    The limits and conditions under which an infrared thermographic nondestructive evaluation can be utilized to assess the quality of aerospace hardware are demonstrated in this research effort. The primary focus of this work is on applying thermography to the inspection of advanced composite structures such as would be found in the International Space Station Instrumentation Racks, Space Shuttle Cargo Bay Doors, Bantam RP-1 tank or RSRM Nose Cone. Here, the detection of delamination, disbond, inclusion and porosity type defects is of primary interest. In addition to composites, an extensive research effort has been initiated to determine how well a thermographic evaluation can detect leaks and disbonds in pressurized metallic systems (i.e., the Space Shuttle Main Engine nozzles). In either case, research into developing practical inspection procedures was conducted and thermographic inspections were performed on a myriad of test samples, subscale demonstration articles and "simulated" flight hardware. All test samples were fabricated as close to their respective structural counterparts as possible except with intentional defects for NDE qualification. As an added benefit of this effort to create simulated defects, methods were devised for defect fabrication that may be useful in future NDE qualification ventures.

  7. Development of fatigue life evaluation method using small specimen

    NASA Astrophysics Data System (ADS)

    Nogami, Shuhei; Nishimura, Arata; Wakai, Eichi; Tanigawa, Hiroyasu; Itoh, Takamoto; Hasegawa, Akira

    2013-10-01

    For developing a fatigue life evaluation method using small specimens, the effect of specimen size and shape on the fatigue life of reduced activation ferritic/martensitic steels (F82H-IEA, F82H-BA07 and JLF-1) was investigated by fatigue tests at room temperature in air using round-bar and hourglass specimens with various specimen sizes (test section diameter: 0.85-10 mm). The round-bar specimen showed no specimen size and no specimen shape effects on the fatigue life, whereas the hourglass specimen showed no specimen size effect and an obvious specimen shape effect. The shorter fatigue life of the hourglass specimen observed under low strain ranges could be attributed to the shorter micro-crack initiation life induced by the stress concentration dependent on the specimen shape. On the basis of this study, the small round-bar specimen was an acceptable candidate for evaluating fatigue life using small specimens.

  8. A method for the evaluation of wide dynamic range cameras

    NASA Astrophysics Data System (ADS)

    Wong, Ping Wah; Lu, Yu Hua

    2012-01-01

    We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.
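
    The final aggregation step can be sketched as a weighted combination of the nine component scores, with dynamic range expressed through the usual 20*log10 contrast definition; the weights and example scores below are invented.

    ```python
    import math

    def dynamic_range_db(l_max, l_min):
        """Dynamic range of resolvable luminance expressed as a contrast ratio in dB."""
        return 20.0 * math.log10(l_max / l_min)

    def composite_score(component_scores, weights):
        """Weighted aggregate of per-component image-quality scores (all on a 0-100 scale)."""
        total_w = sum(weights.values())
        return sum(component_scores[k] * weights[k] for k in component_scores) / total_w

    scores = {"gray_steps": 78, "dynamic_range": 85, "tone_linearity": 70, "gray_neutrality": 90,
              "color_patches": 65, "smear_resistance": 80, "edge_contrast": 72, "grid_clarity": 88,
              "weighted_snr": 75}
    weights = {k: 1.0 for k in scores}    # equal weights as a placeholder
    print(round(dynamic_range_db(100_000.0, 0.3), 1), "dB,", round(composite_score(scores, weights), 1))
    ```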

  9. The evaluation method for antiplatelet effect of acetylsalicylic acid.

    PubMed

    Yokoyama, Haruko; Mastumura, Takashi; Soeda, Shinji; Suzuki, Yuji; Watanabe, Masayuki; Kashiwakura, Emiko; Saso, Takayuki; Ikeda, Noriyuki; Tokuoka, Kentaro; Kitagawa, Yasuhisa; Yamada, Yasuhiko

    2014-12-01

    Reduced platelet aggregation by acetylsalicylic acid administration has been associated with adverse outcomes in patients with thrombotic diseases, thus it is important to determine aspirin resistance in those cases. The antiplatelet effect of acetylsalicylic acid is rarely measured, and its measurement presents many problems. The aim of this study was to establish an evaluation method for the antiplatelet effect after administration of acetylsalicylic acid. We developed a particle counting method based upon laser light scattering, and utilized the platelet aggregation agonists collagen (0.25, 0.5 and 1.0 μg/mL) and adenosine diphosphate (ADP; 0.5, 1.0 and 2.0 μM) to determine their effective concentrations. Seventeen healthy volunteers were administered acetylsalicylic acid at 162 mg/day, with platelet aggregation determined before and 20 min after administration. In all subjects, the rate of platelet aggregation induced by 1.0 μg/mL of collagen before taking acetylsalicylic acid was the highest value obtained, while 20 min after acetylsalicylic acid administration, aggregation induced by collagen at 1.0 μg/mL was significantly decreased as compared to before administration. As for the other concentrations of collagen and all those of ADP tested, platelet aggregation was either not significantly induced before taking acetylsalicylic acid or the rate of aggregation was not significantly decreased after taking acetylsalicylic acid. Our results indicate that collagen at 1.0 μg/mL is appropriate as a platelet aggregation agonist for evaluating the antiplatelet effect of acetylsalicylic acid. Thus, it is useful that the measurement is performed only once.

  10. Detecting Blind Fault with Fractal and Roughness Factors from High Resolution LiDAR DEM at Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Y. S.; Yu, T. T.

    2014-12-01

    There is no obvious fault scarp associated with a blind fault. The traditional way of mapping such an unrevealed geological structure is through clusters of seismicity, but the seismic network cannot capture enough events, or a sufficiently complete cluster, to chart the locations of all potentially active blind faults within a short period of time. High-resolution DEMs gathered by LiDAR describe the actual terrain even where vegetation is present. A 1-meter DEM of the mountainous region of Taiwan is analyzed with fractal, entropy and roughness measures computed in MATLAB. By combining these measures, regions without sediment deposits are charted automatically. A possible blind fault associated with the Chia-Sen earthquake in southern Taiwan serves as the testing ground. GIS layers help remove the differences arising from the various geological formations, and a multi-resolution fractal index is then computed around the target region. The type of fault movement controls the distribution of the fractal index, and the scale of the blind fault governs the degree of change in the index. Landslides induced by rainfall and/or earthquakes alter the geomorphology more strongly than a blind fault does, so special treatment is required to remove these effects. The highly weathered conditions in Taiwan should erase any trace of a blind-fault rupture remaining in the DEM when the recurrence interval exceeds several hundred years; this is one of the obstacles to finding possible blind faults in Taiwan.
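
    A hedged sketch of the kind of windowed terrain measures described above (not the authors' MATLAB code): roughness as the local standard deviation of elevation and entropy from the local elevation histogram, both on a moving window over the DEM.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter, uniform_filter

    def roughness_map(dem, window=15):
        """Surface roughness as the local standard deviation of elevation."""
        mean = uniform_filter(dem, size=window)
        mean_sq = uniform_filter(dem ** 2, size=window)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    def entropy_map(dem, window=9, bins=16):
        """Shannon entropy of the elevation histogram in each moving window."""
        def local_entropy(values):
            hist, _ = np.histogram(values, bins=bins)
            p = hist[hist > 0] / hist.sum()
            return -np.sum(p * np.log2(p))
        return generic_filter(dem, local_entropy, size=window)

    dem = np.cumsum(np.random.default_rng(1).normal(size=(200, 200)), axis=0)  # synthetic 1 m DEM
    rough = roughness_map(dem)
    ent = entropy_map(dem)
    print(rough.mean().round(2), ent.mean().round(2))
    ```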

  11. An Evaluation of Installation Methods for STS-1 Seismometers

    USGS Publications Warehouse

    Holcomb, L. Gary; Hutt, Charles R.

    1992-01-01

    INTRODUCTION This report documents the results of a series of experiments conducted by the authors at the Albuquerque Seismological Laboratory (ASL) during the spring and summer of 1991; the object of these experiments was to obtain and document quantitative performance comparisons of three methods of installing STS-1 seismometers. Historically, ASL has installed STS-1 sensors by cementing their thick glass base plates to the concrete floor of the vault (see Peterson and Tilgner, 1985, p 44 and Figure 31, p 51 for the details of this installation technique). This installation technique proved to be fairly satisfactory for the China Digital Seismic Network and for several sets of STS-1 sensors installed in other locations since that time. However, the cementing operation is rather labor intensive and the concrete requires a lengthy (about 1 week) curing time during which the sensor installed on it is noisy. In addition, it is difficult to ensure that all air bubbles have been removed from the interface between the cement and the glass base plate. If air bubbles are present beneath the plate, horizontal sensors can be unacceptably noisy. Moving a sensor installed in this manner requires the purchase of a new glass base plate because the old plate normally cannot be removed without breakage. Therefore, this study was undertaken with the aim of developing an improved method of installing STS-1s. The goals were to develop a method which requires less field site labor during the installation and assures a higher-quality installation when finished. In addition, the improved installation technique should promote portability. Two alternate installation techniques were evaluated in this study. One method replaces the cement between the base plate and the vault floor with sand. This method has been used in the French Geoscope program and in several IRIS/IDA installations made by the University of California at San Diego (UCSD) and possibly others. It is easily implemented in

  12. Evaluation of Microsporum canis in different methods of storage.

    PubMed

    Brilhante, R S N; Cavalcante, C S P; Soares-Júnior, F A; Monteiro, A J; Brito, E H S; Cordeiro, R A; Sidrim, J J C; Rocha, M F G

    2004-12-01

    The main objective of this investigation was to evaluate different methods of storage for Microsporum canis based on materials and equipment that are readily available in developing countries. We tested 32 strains of M. canis at -20 degrees C in potato dextrose agar (PDA) in its plain condition, or amended with 10% dimethyl sulfoxide or with 10% glycerol. In addition, we tested 25 degrees C storage of isolates in plain saline (0.9% NaCl) and in saline covered with a mineral-oil layer. After 9 months of storage, none of the M. canis strains frozen in PDA supplemented with glycerol survived, while only 16 and 6%, respectively, of the isolates in plain and DMSO medium lost viability. Nine months' storage in saline with or without mineral oil increased the amount of pleomorphic development of sterile hyphae; this phenomenon occurred at a significantly higher level than was seen in isolates stored at -20 degrees C. The physiological characteristics of M. canis were not affected by the different storage tests. The results suggest that, in order to ensure optimal viability, purity and pristine isolate condition, each M. canis isolate maintained should be held in at least two methods of storage, namely, PDA at -20 degrees C and saline with a mineral-oil layer at 25 degrees C.

  13. Nondestructive Evaluation Methods for the Ares I Common Bulkhead

    NASA Technical Reports Server (NTRS)

    Walker, James

    2010-01-01

    A large scale bonding demonstration test article was fabricated to prove out manufacturing techniques for the current design of the NASA Ares I Upper Stage common bulkhead. The common bulkhead serves as the single interface between the liquid hydrogen and liquid oxygen portions of the Upper Stage propellant tank. The bulkhead consists of spin-formed aluminum domes friction stir welded to Y-rings and bonded to a perforated phenolic honeycomb core. Nondestructive evaluation methods are being developed for assessing core integrity and the core-to-dome bond line of the common bulkhead. Detection of manufacturing defects such as delaminations between the core and face sheets as well as service life defects such as crushed or sheared core resulting from impact loading are all of interest. The focus of this work will be on the application of thermographic, shearographic, and phased array ultrasonic methods to the bonding demonstration article as well as various smaller test panels featuring design specific defect types and geometric features.

  14. A simple capacitive method to evaluate ethanol fuel samples

    NASA Astrophysics Data System (ADS)

    Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.

    2017-02-01

    Ethanol is a biofuel used worldwide. However, the presence of excessive water either during the distillation process or by fraudulent adulteration is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformational aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full range water concentration, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few.
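
    The working principle described above (capacitance rising monotonically with the water content of the ethanol sample) can be pictured with a small calibration sketch. The calibration points and the 20 pF reading below are invented for the example; they are not values reported by the authors.

      import numpy as np

      # hypothetical calibration: capacitance (pF) measured for known water volume fractions
      water_frac_cal = np.array([0.0, 0.005, 0.05, 0.25, 0.50, 0.75, 1.00])
      capacitance_cal = np.array([12.0, 12.6, 15.0, 24.0, 38.0, 55.0, 80.0])  # pF, illustrative

      def water_fraction(c_measured_pf):
          # invert the (assumed monotonic) calibration curve by linear interpolation
          return np.interp(c_measured_pf, capacitance_cal, water_frac_cal)

      print(f"estimated water fraction: {water_fraction(20.0):.3f}")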

  15. Comparison of methods for evaluation of experimentally induced emphysema

    SciTech Connect

    Busch, R.H.; Buschbom, R.L.; Smith, L.G.

    1984-04-01

    Four methods to quantify induced emphysema, in a manner economically applicable to large numbers of animals, are compared by correlation analyses. Lung tissue used was from rats pretreated intratracheally with elastase or saline prior to exposure to air or (NH₄)₂SO₄ or NH₄NO₃ aerosols. The most sensitive quantitative evaluation was from mean chord length (MCL) measurements on scanning electron micrographs (SEM). Four-corner and parallel-line grids provided similar results, and reducing sample size to one selected field per lobe yielded a high degree of reliability for MCL measurements. Alveolar-pore perimeter and area (also measured on SEM photographs) were increased by induced emphysema, but were not reliable indicators for degree of pulmonary involvement. Both subjective score (grading the degree of emphysema) and percentage-area-affected determinations indicated the presence of emphysema, but with less sensitivity than MCL measurements. However, these two subgross methods (performed with a dissecting microscope) provided valuable information on the distribution of pulmonary lesions; emphysema was induced in a nonuniform but consistent and progressive pattern in the two lobes of the lung studied.

  16. A simple capacitive method to evaluate ethanol fuel samples

    PubMed Central

    Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.

    2017-01-01

    Ethanol is a biofuel used worldwide. However, the presence of excessive water either during the distillation process or by fraudulent adulteration is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformational aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full range water concentration, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few. PMID:28240312

  17. Physical methods for evaluating the nutrition status of hemodialysis patients.

    PubMed

    Marcelli, Daniele; Wabel, Peter; Wieskotten, Sebastian; Ciotola, Annalisa; Grassmann, Aileen; Di Benedetto, Attilio; Canaud, Bernard

    2015-10-01

    This article aims to provide an overview of the different nutritional markers and the available methodologies for the physical assessment of nutrition status in hemodialysis patients, with special emphasis on early detection of protein energy wasting (PEW). Nutrition status assessment is made on the basis of anamnesis, physical examination, evaluation of nutrient intake, and on a selection of various screening/diagnostic methodologies. These methodologies can be subjective, e.g. the Subjective Global Assessment score (SGA), or objective in nature (e.g. bioimpedance analysis). In addition, certain biochemical tests may be employed (e.g. albumin, pre-albumin). The various subjective-based and objective methodologies provide different insights for the assessment of PEW, particularly regarding their propensity to differentiate between the important body composition compartments-fluid overload, fat mass and muscle mass. This review of currently available methods showed that no single approach and no single marker is able to detect alterations in nutrition status in a timely fashion and to follow such changes over time. The most clinically relevant approach presently appears to be the combination of the SGA method with the bioimpedance spectroscopy technique with physiological model and, additionally, laboratory tests for the detection of micro-nutrient deficiency.

  18. Fish Passage through Hydropower Turbines: Simulating Blade Strike using the Discrete Element Method

    SciTech Connect

    Richmond, Marshall C.; Romero Gomez, Pedro DJ

    2014-12-08

    Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through turbine flows, two are believed to cause considerable injury and mortality: collision on moving blades and decompression. Several methods are currently available to evaluate these stressors in installed turbines, i.e. using live fish or autonomous sensor devices, and in reduced-scale physical models, i.e. registering collisions from plastic beads. However, a priori estimates with computational modeling approaches applied early in the process of turbine design can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and nadir pressure environment by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions—representing fish collisions with turbine blades—are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for better turbulence resolution, a modeling improvement over the conventional practice of simulating the system in steady state which was also done here. While both schemes yielded comparable bulk hydraulic performance, transient conditions exhibited a visual improvement in describing flow variability. We released streamtraces (steady flow solution) and DEM particles (transient solution) at the same location from where sensor fish (SF) have been released in field studies of the modeled turbine unit. The streamtrace-based results showed a better agreement with SF data than the DEM-based nadir pressures did because the former accounted for the turbulent dispersion at the intake but the latter did not. However, the DEM-based strike frequency is more

  19. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of the cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees flow direction in each cell, D8 search for flow directions, computation of discharges and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be applied not only to synthetic topographies but also to real Digital Elevation Models (DEMs). In the present work we report our experience with applying the model to the 30-meter resolution ASTER GDEM of the Pamir orogen, in particular, to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/y. Once the model results are consistent with the measured data, the calibration of hillslope processes follows. For a given critical slope, the diffusivity could be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems which do not appear when working with synthetic landscapes. One of them is the noise of the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
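
    For readers unfamiliar with the two process terms mentioned above, the following 1-D sketch applies stream-power bedrock incision (E = K A^m S^n) and hillslope diffusion to a river profile with a simple explicit scheme. The coefficients, grid, drainage-area array and the use of linear rather than non-linear diffusion are illustrative assumptions, not the calibrated Pamir values.

      import numpy as np

      dx, dt = 500.0, 100.0            # grid spacing (m) and time step (yr), assumed
      K, m, n = 2.0e-6, 0.5, 1.0       # stream-power coefficient and exponents, assumed
      D = 0.01                         # hillslope diffusivity (m^2/yr), assumed

      rng = np.random.default_rng(1)
      z = np.linspace(4000.0, 1000.0, 61) + rng.normal(0.0, 5.0, 61)   # synthetic profile (m)
      area = np.linspace(1.0e6, 5.0e8, 61)   # upstream drainage area (m^2), grows downstream

      for _ in range(1000):
          slope = np.abs(np.gradient(z, dx))
          incision = K * area**m * slope**n                       # E = K A^m S^n
          diffusion = D * np.gradient(np.gradient(z, dx), dx)     # linear diffusion term
          z[1:-1] += dt * (diffusion[1:-1] - incision[1:-1])      # boundary nodes held fixed

      print(f"relief after 100 kyr: {z.max() - z.min():.1f} m")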

  20. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing initial accuracies with those achieved for the following two tests: first, usage of a SPOT image from the same year, but for a different area within the Baichi catchment; second, usage of SPOT images from multiple years for the same region. The integration of the common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation, this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.

  1. Quality of DEMs derived from Kite Aerial Photogrammetry System: a case study of Dutch coastal environments.

    NASA Astrophysics Data System (ADS)

    Paron, Paolo; Smith, Mike J.; Anders, Niels; Meesuk, Vorawit

    2014-05-01

    Coastal protection is one of the main challenges for the Netherlands, where a large proportion of anthropogenic activity is located below sea level (both residential and economic). The Dutch government is implementing an innovative method of coastal replenishment using natural waves and winds to relocate sand from one side to the other of the country. This requires close monitoring of the spatio-temporal evolution of beaches in order to correctly model the future direction and amount of sand movement. To do so, we tested a Kite-Aerial Photography System on the onshore beach for monitoring the beach dynamics at the Zandmotor (http://www.dezandmotor.nl/en-GB/). The equipment used for data collection included a commercial DSLR camera (Nikon D7000 with a 20mm lens), gyro-levelled rig, Sutton Flowform 16 kite and Leica GNSS Viva GS10, with GSM connection to the Dutch geodetic network. We flew using a 115 m line with an average inclination of 40 to 45°; this gave a camera vertical distance of ~80 m and pixel size of ~20 mm. The methodology follows that of Smith et al. (2009), and of Paron & Smith (2013), applied to a highly dynamic environment with low texture and small relief conditions. Here we present a comparison of the quality of the digital elevation model (DEM) generated from the same dataset using two different systems: Structure from Motion (SfM) using Agisoft Photoscan Pro and traditional photogrammetry using Leica Photogrammetry Suite. In addition, the outputs from the two data processing methods are presented, including both an image mosaic and DEM, and highlighting pros and cons of both methods. References Smith, M. J. et al. 2009. High spatial resolution data acquisition for the geosciences: kite aerial photography. ESPL, 34(1), 155-161. Paron, P., Smith, M.J. 2013. Kite aerial photogrammetry system for monitoring coastal change in the Netherlands. 8th IAG International Conference on Geomorphology, Paris, August.

  2. What Do We Do Now That the Evaluation Is Over? Methods for Transitioning the Responsibility of Evaluation to Program Staff.

    ERIC Educational Resources Information Center

    Cassata, Jennifer Coyne; Siddens, Stephanie K.

    This paper describes the methods engaged in by an internal evaluation unit within a large school district to transition program staff from participating in a formal program evaluation to continuing the responsibility of program monitoring once an evaluation ends. Formal multiyear program evaluations can provide program managers and staff with…

  3. Evaluation of different field methods for measuring soil water infiltration

    NASA Astrophysics Data System (ADS)

    Pla-Sentís, Ildefonso; Fonseca, Francisco

    2010-05-01

    Soil infiltrability, together with rainfall characteristics, is the most important hydrological parameter for the evaluation and diagnosis of the soil water balance and soil moisture regime. Those balances and regimes are the main regulating factors of the on-site water supply to plants and other soil organisms and of other important processes like runoff, surface and mass erosion, drainage, etc, affecting sedimentation, flooding, soil and water pollution, water supply for different purposes (population, agriculture, industries, hydroelectricity), etc. Therefore the direct measurement of water infiltration rates or its indirect deduction from other soil characteristics or properties has become indispensable for the evaluation and modelling of the previously mentioned processes. Indirect deductions from other soil characteristics measured under laboratory conditions in the same soils, or in other soils, through the so called "pedo-transfer" functions, have proven to be of limited value in most cases. Direct "in situ" field evaluations have to be preferred in any case. In this contribution we present the results of past experiences in the measurement of soil water infiltration rates in many different soils and land conditions, and their use for deducing soil water balances under variable climates. We also present and discuss recent results from comparing different methods, using double and single ring infiltrometers, rainfall simulators, and disc permeameters, of different sizes, in soils with very contrasting surface and profile characteristics and conditions, including stony soils and very sloping lands. It is concluded that there are no methods universally applicable to every soil and land condition, and that in many cases the results are significantly influenced by the way we use a particular method or instrument, and by alterations in the soil conditions caused by land management, but also due to the manipulation of the surface

  4. Evaluation of internal noise methods for Hotelling observer models

    SciTech Connect

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-08-15

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality.
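
    A minimal sketch of the second strategy described above (zero-mean internal noise added to the decision variable, with a standard deviation proportional to the decision variable's standard deviation under external noise) is given below. The channel count, template and the proportionality constant alpha are illustrative assumptions, not the values used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials, n_channels = 2000, 10
      signal = rng.normal(0.0, 1.0, n_channels)      # assumed channel response to the signal
      template = signal.copy()                       # Hotelling template (identity covariance assumed)

      noise_only = rng.normal(0.0, 1.0, (n_trials, n_channels))   # external (image) noise
      signal_plus = noise_only + signal

      lam_n = noise_only @ template                  # decision variable, signal-absent trials
      lam_s = signal_plus @ template                 # decision variable, signal-present trials

      alpha = 0.5                                    # internal-to-external noise ratio, assumed
      sigma_int = alpha * lam_n.std()                # std proportional to the external-noise std
      lam_n = lam_n + rng.normal(0.0, sigma_int, n_trials)
      lam_s = lam_s + rng.normal(0.0, sigma_int, n_trials)

      d_prime = (lam_s.mean() - lam_n.mean()) / np.sqrt(0.5 * (lam_s.var() + lam_n.var()))
      print(f"detectability with internal noise: d' = {d_prime:.2f}")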

  5. Evaluation of methods for the assessment of attention while driving.

    PubMed

    Kircher, Katja; Ahlstrom, Christer

    2017-03-21

    The ability to assess the current attentional state of the driver is important for many aspects of driving, not least in the field of partial automation for transfer of control between vehicle and driver. Knowledge about the driver's attentional state is also necessary for the assessment of the effects of additional tasks on attention. The objective of this paper is to evaluate different methods that can be used to assess attention, first theoretically, and then empirically in a controlled field study and in the laboratory. Six driving instructors participated in all experimental conditions of the study, delivering within-subjects data for all tested methods. Additional participants were recruited for some of the conditions. The test route consisted of 14km of motorway with low to moderate traffic, which was driven three times per participant per condition. The on-road conditions were: baseline, driving with eye tracking and self-paced visual occlusion, and driving while thinking aloud. The laboratory conditions were: Describing how attention should be distributed on a motorway, and thinking aloud while watching a video from the baseline drive. The results show that visual occlusion, especially in combination with eye tracking, was appropriate for assessing spare capacity. The think aloud protocol was appropriate to gain insight about the driver's actual mental representation of the situation at hand. Expert judgement in the laboratory was not reliable for the assessment of drivers' attentional distribution in traffic. Across all assessment techniques, it is evident that meaningful assessment of attention in a dynamic traffic situation can only be achieved when the infrastructure layout, surrounding road users, and intended manoeuvres are taken into account. This requires advanced instrumentation of the vehicle, and subsequent data reduction, analysis and interpretation are demanding. In conclusion, driver attention assessment in real traffic is a complex task, but

  6. New method for evaluating high-quality fog protective coatings

    NASA Astrophysics Data System (ADS)

    Czeremuszkin, Grzegorz; Latreche, Mohamed; Mendoza-Suarez, Guillermo

    2011-05-01

    Fogging is commonly observed when humid-warm air contacts the cold surface of a transparent substrate, i.e. eyewear lenses, making the observed image blurred and hazy. To protect from fogging, the lens inner surfaces are protected with Anti-Fog coatings, which render them hydrophilic and induce water vapor condensation as a smooth, thin and invisible film, which uniformly flows down on the lens as the condensation progresses. Coatings differ in protection level, aging kinetics, and susceptibility to contamination. Some perform acceptably in limited conditions, beyond which the condensing water film becomes unstable, nonuniform, and scatters light or shows refractory distortions, both affecting the observed image. Quantifying the performance of Anti-Fog coated lenses is difficult: they may not show classical fogging and the existing testing methods, based on fog detection, are therefore inapplicable. The presented method for evaluating and quantifying AF properties is based on characterizing light scattering on lenses exposed to controlled humidity and temperature. Changes in intensity of laser light scattered at low angles (1, 2, 4 and 8 degrees), observed during condensation of water on lenses, provide information on the swelling of Anti-Fog coatings, formation of uniform water film, going from an unstable to a steady state, and on the coalescence of discontinuous films. Real time observations/measurements allow for better understanding of factors controlling fogging and fog-preventing phenomena. The method is especially useful in the development of new coatings for military-, sport-, and industrial protective eyewear as well as for medical and automotive applications. It allows for differentiating between coatings showing acceptable, good, and excellent performance.

  7. An evaluation of the whole effluent toxicity test method

    SciTech Connect

    Osteen, D.V.

    1999-12-17

    Whole effluent toxicity (WET) testing has become increasingly more important to the Environmental Protection Agency (EPA) and the States in the permitting of wastewater discharges from industry and municipalities. The primary purpose of the WET test is to protect aquatic life by predicting the effect of an effluent on the receiving stream. However, there are both scientific and regulatory concerns that using WET tests to regulate industrial effluents may result in either false positives and/or false negatives. In order to realistically predict the effect of an effluent on the receiving stream, the test should be as representative as possible of the conditions in the receiving stream. Studies (Rand and Petrocelli 1985) suggested several criteria for an ideal aquatic toxicity test organism, one of which is that the organism be indigenous to, or representative of, the ecosystem receiving the effluent. The other component needed in the development of a predictive test is the use of the receiving stream water or similar synthetic water as the control and dilution water in the test method. Use of an indigenous species and receiving water in the test should help reduce the variability in the method and allow the test to predict the effect of the effluent on the receiving stream. The experience with toxicity testing at the Savannah River Site (SRS) has yielded inconclusive data because of the inconsistency and unreliability of the results. The SRS contention is that the WET method in its present form does not adequately mimic actual biological/chemical conditions of the receiving streams and is neither reasonable nor accurate. This paper discusses the rationale for such a position by SRS on toxicity testing in terms of historical permitting requirements, outfall effluent test results, standard test method evaluation, scientific review of alternate test species, and concerns over the test method expressed by other organizations. This paper presents the Savannah River Site

  8. Two Preliminary SRTM DEMs Within the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Alsdorf, D.; Hess, L.; Melack, J.; Dunne, T.; Mertes, L.; Ballantine, A.; Biggs, T.; Holmes, K.; Sheng, Y.; Hendricks, G.

    2002-12-01

    Digital topography provides important measures, such as hillslope lengths and flow path networks, for understanding hydrologic and geomorphic processes (e.g., runoff response to land use change and floodplain inundation volume). Two preliminary Shuttle Radar Topography Mission digital elevation models of Manaus (1S to 5S and 59W to 63W) and Rondonia (9S to 12S and 61W to 64W) were received from NASA JPL in August 2002. The "PI Processor" produced these initial DEM segments and we are using them to assess the initial accuracy of the interferometrically derived heights and for hydrologic research. The preliminary SRTM derived absolute elevations across the Amazon floodplain in the Cabaliana region generally range from 5 to 15 m with reported errors of 1 to 3 m. This region also includes some preliminary elevations that are erroneously negative. However, topographic contours on 1:100,000 scale quadrangles of 1978 to 1980 vintage indicate elevations of 20 to 30 m. Because double-bounce travel paths are possible over the sparsely vegetated and very-flat 2400 sq-km water surface of the Balbina reservoir near Manaus, it serves to identify the relative accuracy of the SRTM heights. Here, cell-to-cell height changes are generally 0 to 1 m and changes across a ~100 km transect rarely exceed 3 m. Reported errors throughout the transect range from 1 to 2 m with some errors up to 5 m. Deforestation in Rondonia is remarkably clear in the C-band DEM where elevations are recorded from the canopy rather than bare earth. Here, elevation changes are ~30 m (with reported 1 to 2 m errors) across clear-cut areas. Field derived canopy heights are in agreement with this change. Presently, we are deriving stream networks in the Amazon floodplain for comparison with our previous network extraction from JERS-1 SAR mosaics and for hydrologic modeling.

  9. Influence of DEM in Watershed Management as Flood Zonation Mapping

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Khan, Mudasir; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    Despite valuable efforts by working groups and research organizations towards flood hazard reduction through their programs, only minimal reduction of these hazards has been realized. This is mainly because, with rapid increases in population and urbanization coupled with climate change, flood hazards are becoming increasingly catastrophic. There is therefore a need to understand and assess flood hazards and to develop means to deal with them through proper preparation and preventive measures. To achieve this aim, Geographical Information Systems (GIS) and geospatial and hydrological models were used as tools to address the influence of flash floods in the Kingdom of Saudi Arabia, where the existence of large valleys (wadis) is a matter of great concern. In this research paper, Digital Elevation Models (DEMs) of different resolutions (30 m, 20 m, 10 m and 5 m) have been used; these have proven to be a valuable tool for the topographic parameterization of hydrological models, which are the basis for any flood modelling process. The DEMs were used as input for performing spatial analysis, obtaining derivative products and delineating watershed characteristics of the study area using ArcGIS Desktop and its Arc Hydro extension tools, in order to check the comparability of the different elevation models for flood zonation mapping. The derived drainage patterns have been overlaid on aerial imagery of the study area to check the influence of larger amounts of precipitation, which can lead to massive destruction. The derived flow accumulation maps provide zones of highest accumulation and possible flow directions. This approach provides a simplified means of predicting the extent of inundation during flood events for emergency action, especially for large areas, because of the large coverage of the remotely sensed data.
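
    As a minimal sketch of the flow-direction and flow-accumulation step that such DEM-based tools perform (this is not the Arc Hydro implementation), the function below routes accumulated area to each cell's steepest-descent D8 neighbour on a tiny synthetic, depressionless DEM; sink filling is omitted.

      import numpy as np

      def d8_flow_accumulation(dem):
          # route each cell's accumulated area to its steepest-descent neighbour (D8),
          # visiting cells from highest to lowest so upstream cells are processed first
          rows, cols = dem.shape
          acc = np.ones_like(dem, dtype=float)        # each cell contributes one cell of area
          order = np.argsort(dem, axis=None)[::-1]    # flat indices, descending elevation
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
          for flat in order:
              i, j = divmod(int(flat), cols)
              best_drop, target = 0.0, None
              for di, dj in offsets:
                  ni, nj = i + di, j + dj
                  if 0 <= ni < rows and 0 <= nj < cols:
                      drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                      if drop > best_drop:
                          best_drop, target = drop, (ni, nj)
              if target is not None:                  # pass the accumulated area downslope
                  acc[target] += acc[i, j]
          return acc

      dem = np.array([[5.0, 4.0, 3.0],
                      [4.0, 3.0, 2.0],
                      [3.0, 2.0, 1.0]])
      print(d8_flow_accumulation(dem))                # accumulation peaks at the lowest corner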

  10. Modeling Particle Rolling Behavior by the Modified Eccentric Circle Model of DEM

    NASA Astrophysics Data System (ADS)

    Chang, Yi-Long; Chen, Tsung-Hsien; Weng, Meng-Chia

    2012-09-01

    This study proposes a modified eccentric circle model to simulate the rolling resistance of circular particles in distinct element method (DEM) simulations. The proposed model contains two major concepts: eccentric circle and local rotational damping. The mass center of a circular particle is first adjusted slightly for eccentricity to provide rotational stiffness. Local rotational damping is adopted to dissipate energy in the rotational direction. The associated material parameters can be obtained easily from the rolling behavior of one rod. This study verifies the proposed model with repose angle tests of chalk rod assemblies, and the simulated results were satisfactory. Simulations using other existing models were also conducted for comparison, showing that the proposed model achieved better results. A landslide model test was further simulated, and this simulation agreed with both the failure pattern and the sliding process. In conclusion, particle rolling simulation using the proposed model appears to approach the actual particle trajectory, making it useful for various applications.

  11. Simulation of Hydraulic and Natural Fracture Interaction Using a Coupled DFN-DEM Model

    SciTech Connect

    J. Zhou; H. Huang; M. Deo

    2016-03-01

    The presence of natural fractures will usually result in a complex fracture network due to the interactions between hydraulic and natural fractures. The reactivation of natural fractures can generally provide additional flow paths from formation to wellbore which play a crucial role in improving hydrocarbon recovery in these ultra-low permeability reservoirs. Thus, accurate description of the geometry of discrete fractures and bedding is highly desired for accurate flow and production predictions. Compared to conventional continuum models that implicitly represent discrete features, Discrete Fracture Network (DFN) models can realistically model the connectivity of discontinuities at both reservoir scale and well scale. In this work, a new hybrid numerical model that couples Discrete Fracture Network (DFN) and Dual-Lattice Discrete Element Method (DL-DEM) is proposed to investigate the interaction between hydraulic and natural fractures. Based on the proposed model, the effects of natural fracture orientation, density and injection properties on hydraulic-natural fracture interaction are investigated.

  12. Revealing topographic lineaments through IHS enhancement of DEM data. [Digital Elevation Model

    NASA Technical Reports Server (NTRS)

    Murdock, Gary

    1990-01-01

    Intensity-hue-saturation (IHS) processing of slope (dip), aspect (dip direction), and elevation is used to enhance digital elevation model (DEM) data from northwestern Nevada, revealing subtle topographic lineaments which may not be obvious in the unprocessed data. This IHS method of lineament identification was applied to a mosaic of 12 square degrees using a Cray Y-MP8/864. Square arrays from 3 x 3 to 31 x 31 points were tested as well as several different slope enhancements. When relatively few points are used to fit the plane, lineaments of various lengths are observed and a mechanism for lineament classification is described. An area encompassing the gold deposits of the Carlin trend and including the Rain in the southeast to Midas in the northwest is investigated in greater detail. The orientation and density of lineaments may be determined on the gently sloping pediment surface as well as in the more steeply sloping ranges.
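
    In outline, the enhancement amounts to deriving slope and aspect from the DEM and combining them with elevation into a single colour composite. The sketch below uses an HSV composite as a stand-in for the IHS transform, a synthetic DEM and an assumed 30 m grid spacing; it is an illustration, not the published processing chain.

      import numpy as np
      from matplotlib.colors import hsv_to_rgb

      rng = np.random.default_rng(2)
      dem = np.cumsum(np.cumsum(rng.normal(size=(200, 200)), axis=0), axis=1)  # synthetic DEM
      cell = 30.0                                       # grid spacing (m), assumed

      dz_dy, dz_dx = np.gradient(dem, cell)             # finite-difference local plane fit
      slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))              # dip
      aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0   # dip direction

      def stretch(a):
          return (a - a.min()) / (a.max() - a.min() + 1e-12)

      # hue from aspect, saturation from slope, value (intensity) from elevation
      hsv = np.dstack([aspect / 360.0, stretch(slope), stretch(dem)])
      rgb = hsv_to_rgb(hsv)                             # composite image that highlights lineaments
      print(rgb.shape, float(rgb.min()), float(rgb.max()))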

  13. The mass balance record and surge behavior of Drangajökull Ice Cap (Iceland) from 1946 to 2011 deduced from aerial photographs and LiDAR DEM

    NASA Astrophysics Data System (ADS)

    Muñoz-Cobo Belart, Joaquín; Magnússon, Eyjólfur; Pálsson, Finnur

    2014-05-01

    High-resolution, high-accuracy Digital Elevation Models (DEMs) of glaciers and their close vicinity (e.g. based on LiDAR surveys) have significantly improved the methods for calculation of geodetic mass balance and study of changes in glacier dynamics. However, additional data are needed to extend such studies back in time. Here we present a geodetically derived mass balance record for Drangajökull ice cap (NW-Iceland) from 1946 to the present. The mass balance is calculated from a series of DEMs derived by photogrammetric processing of aerial photographs (years: 1946, 1975, 1985, 1994) and a LiDAR DEM (2011). All Ground Control Points (GCPs) used to constrain the orientation of the aerial photographs, used in the photogrammetric processing, are picked from the LiDAR derived DEM, thus eliminating the time-consuming and expensive in situ survey of GCPs. The LiDAR DEM also helps to assess the accuracy of the photogrammetrically derived DEMs, by analyzing the residuals in elevation in ice-free areas. For the DEMs of 1975, 1985 and 1994 the Root Mean Square Error (RMSE) of the residuals is less than 2 m, whereas the accuracy of the DEM of 1946 is worse, with RMSE of 5.5 m, caused by the deteriorated images. The geodetic mass balance yields a negative specific mass balance of ~-0.5 m w.e. a⁻¹ for the period 1946-1975, followed by periods of positive mass balance: ~0.2 m w.e. a⁻¹ for the period 1975-1985 and ~0.3 m w.e. a⁻¹ for the period 1985-1994. Negative specific mass balance of ~-0.6 m w.e. a⁻¹ is derived for the period 1994-2011. High mass redistribution is observed during 1985-1994 and 1994-2011 on the three main outlets of the ice cap, related to surges. The derived orthophotographs allow tracking of stable features at individual locations on the northern part of Drangajökull, indicating an average velocity of 5-10 m a⁻¹ for the period 1946-1985 and speeding up in the last two periods due to a surge.
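
    The geodetic mass-balance numbers quoted above are essentially the mean elevation change over the ice cap between two DEM dates, converted to water equivalent and divided by the time interval. The sketch below shows that calculation; the cell size, density conversion factor, glacier mask and the synthetic DEM pair are assumptions for the example.

      import numpy as np

      cell = 5.0                  # DEM cell size (m), assumed
      t0, t1 = 1994.0, 2011.0     # survey dates
      rho_ratio = 0.85            # assumed ice/firn-to-water-equivalent conversion factor

      rng = np.random.default_rng(3)
      dem_old = 600.0 + rng.normal(0.0, 1.0, (400, 400))            # stand-in for the 1994 DEM
      dem_new = dem_old - 10.0 + rng.normal(0.0, 1.0, (400, 400))   # stand-in for the 2011 LiDAR DEM
      glacier = np.ones(dem_old.shape, dtype=bool)                  # assumed ice-cap outline mask

      dh = np.where(glacier, dem_new - dem_old, np.nan)
      volume_change = np.nansum(dh) * cell * cell                   # m^3
      area = glacier.sum() * cell * cell                            # m^2
      b_dot = rho_ratio * volume_change / area / (t1 - t0)          # specific balance, m w.e. a^-1
      print(f"geodetic specific mass balance: {b_dot:.2f} m w.e. a^-1")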

  14. Evaluation of nutria (Myocastor coypus) detection methods in Maryland, USA

    USGS Publications Warehouse

    Pepper, Margaret A; Herrmann, Valentine; Hines, James; Nichols, James; Kendrot, Stephen R

    2017-01-01

    Nutria (Myocastor coypus), invasive, semi-aquatic rodents native to South America, were introduced into Maryland near Blackwater National Wildlife Refuge (BNWR) in 1943. Irruptive population growth, expansion, and destructive feeding habits resulted in the destruction of thousands of acres of emergent marshes at and surrounding BNWR. In 2002, a partnership of federal, state and private entities initiated an eradication campaign to protect remaining wetlands from further damage and facilitate the restoration of coastal wetlands throughout the Chesapeake Bay region. Program staff removed nearly 14,000 nutria from five infested watersheds in a systematic trapping and hunting program between 2002 and 2014. As part of ongoing surveillance activities, the Chesapeake Bay Nutria Eradication Project uses a variety of tools to detect and remove nutria. Project staff developed a floating raft, or monitoring platform, to determine site occupancy. These platforms are placed along waterways and checked periodically for evidence of nutria visitation. We evaluated the effectiveness of monitoring platforms and three associated detection methods: hair snares, presence of scat, and trail cameras. Our objectives were to (1) determine if platform placement on land or water influenced nutria visitation rates, (2) determine if the presence of hair snares influenced visitation rates, and (3) determine method-specific detection probabilities. Our analyses indicated that platforms placed on land were 1.5–3.0 times more likely to be visited than those placed in water and that platforms without snares were an estimated 1.7–3.7 times more likely to be visited than those with snares. Although the presence of snares appears to have discouraged visitation, seasonal variation may confound interpretation of these results. Scat was the least effective method of determining nutria visitation, while hair snares were as effective as cameras. Estimated detection probabilities provided by occupancy

  15. An Adaptive Integration Model of Vector Polyline to DEM Data Based on Spherical Degeneration Quadtree Grids

    NASA Astrophysics Data System (ADS)

    Zhao, X. S.; Wang, J. J.; Yuan, Z. Y.; Gao, Y.

    2013-10-01

    The traditional geometry-based approach can maintain the characteristics of vector data. However, complex interpolation calculations limit its applications in high resolution and multi-source spatial data integration at spherical scale in digital earth systems. To overcome this deficiency, an adaptive integration model of vector polyline and spherical DEM is presented. Firstly, the Degenerate Quadtree Grid (DQG), which is one of the partition models for global discrete grids, is selected as a basic framework for the adaptive integration model. Secondly, a novel shift algorithm is put forward based on DQG proximity search. The main idea of the shift algorithm is that a vector node in a DQG cell moves to the cell corner point when the displayed area of the cell is smaller than or equal to one screen pixel, in order to find a new vector polyline approximate to the original one, which avoids many interpolation calculations and achieves seamless integration. Detailed operation steps are elaborated and the complexity of the algorithm is analyzed. Thirdly, a prototype system has been developed using the VC++ language and the OpenGL 3D API. ASTER GDEM data and DCW roads data sets of Jiangxi province in China are selected to evaluate the performance. The result shows that the time consumption of the shift algorithm decreased by about 76% compared with that of the geometry-based approach. Analysis of the mean shift error from different dimensions has been implemented. In the end, the conclusions and future work in the integration of vector data and DEM based on discrete global grids are also given.

  16. Biological Modeling As A Method for Data Evaluation and ...

    EPA Pesticide Factsheets

    Biological Models, evaluating consistency of data and integrating diverse data, examples of pharmacokinetics and response and pharmacodynamics

  17. Evaluation of hexavalent chromium extraction method EPA method 3060A for soils using XANES spectroscopy.

    PubMed

    Malherbe, Julien; Isaure, Marie-Pierre; Séby, Fabienne; Watson, Russell P; Rodriguez-Gonzalez, Pablo; Stutzman, Paul E; Davis, Clay W; Maurizio, Chiara; Unceta, Nora; Sieber, John R; Long, Stephen E; Donard, Olivier F X

    2011-12-15

    Hexavalent chromium (Cr(VI)) occurrence in soils is generally determined using an extraction step to transfer it to the liquid phase where it is more easily detected and quantified. In this work, the performance of the most common extraction procedure (EPA Method 3060A) using NaOH-Na(2)CO(3) solutions is evaluated using X-ray absorption near edge structure spectroscopy (XANES), which enables the quantification of Cr(VI) directly in the solid state. Results obtained with both methods were compared for three solid samples with different matrices: a soil containing chromite ore processing residue (COPR), a loamy soil, and a paint sludge. Results showed that Cr(VI) contents determined by the two methods differ significantly, and that the EPA Method 3060A procedure underestimated the Cr(VI) content in all studied samples. The underestimation is particularly pronounced for COPR. Low extraction yield for EPA Method 3060A was found to be the main reason. The Cr(VI) present in COPR was found to be more concentrated in magnetic phases. This work provides new XANES analyses of SRM 2701 and its extraction residues for the purpose of benchmarking EPA 3060A performance.

  18. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  19. Global Maps from Interferometeric TanDEM-X Data: Applications and Potentials

    NASA Astrophysics Data System (ADS)

    Rizzoli, Paola; Martone, Michele; Brautigam, Benjamin; Zink, Manfred

    2015-05-01

    TanDEM-X is a spaceborne Synthetic Aperture Radar (SAR) mission, whose goal is the generation of a global Digital Elevation Model (DEM) with unprecedented accuracy, by using interferometric SAR (InSAR) techniques. TanDEM-X offers a huge global data set of bistatic InSAR acquisitions, each of them supplemented by quick look images of different SAR quantities, such as amplitude, coherence, and DEM. Global quick look mosaics of the interferometric coherence and of the relative height error can be considered for mission performance monitoring and acquisition strategy optimization. The aim of this paper is to present the use of such mosaics within the TanDEM-X mission and to show their potential for future scientific applications, for example in the fields of glaciology and forestry.

  20. A Comparative Study of Radar Stereo and Interferometry for DEM Generation

    NASA Astrophysics Data System (ADS)

    Gelautz, M.; Paillou, P.; Chen, C. W.; Zebker, H. A.

    2004-06-01

    In this experiment, we derive and compare radar stereo and interferometric elevation models (DEMs) of a study site in Djibouti, East Africa. As test data, we use a Radarsat stereo pair and ERS-2 and Radarsat interferometric data. Comparison of the reconstructed DEMs with a SPOT reference DEM shows that in regions of high coherence the DEMs produced by interferometry are of much better quality than the stereo result. However, the interferometric error histograms also show some pronounced outliers due to decorrelation and phase unwrapping problems on forested mountain slopes. The more robust stereo result is able to capture the general terrain shape, but finer surface details are lost. A fusion experiment demonstrates that merging the stereoscopic and interferometric DEMs by utilizing coherence-derived weights can significantly improve the accuracy of the computed elevation maps.
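
    The fusion step mentioned in the last sentence can be pictured as a per-cell weighted average in which the InSAR height dominates where coherence is high and the stereo height takes over where coherence is low. The weight function, the coherence threshold and the synthetic arrays below are illustrative assumptions, not the scheme used in the experiment.

      import numpy as np

      def fuse_dems(dem_insar, dem_stereo, coherence, gamma_min=0.3):
          # weight the InSAR height by coherence; fall back to stereo where coherence is low
          w = np.clip((coherence - gamma_min) / (1.0 - gamma_min), 0.0, 1.0)
          return w * dem_insar + (1.0 - w) * dem_stereo

      rng = np.random.default_rng(4)
      truth = np.tile(np.linspace(300.0, 900.0, 256), (256, 1))            # synthetic terrain
      coh = rng.uniform(0.1, 0.95, truth.shape)                            # interferometric coherence
      dem_insar = truth + rng.normal(0.0, 3.0, truth.shape) / np.maximum(coh, 0.1)  # noisy where coherence is low
      dem_stereo = truth + rng.normal(0.0, 10.0, truth.shape)              # robust but coarser
      fused = fuse_dems(dem_insar, dem_stereo, coh)
      print(np.abs(fused - truth).mean(), np.abs(dem_insar - truth).mean(), np.abs(dem_stereo - truth).mean())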

  1. Further Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2012-01-01

    The paper will present experimental results from two recent icing tests in the NASA Glenn Icing Research Tunnel (IRT). The first test, conducted in February 2009, was to evaluate the current recommended scaling methods for fixed wing on representative rotor airfoils at fixed angle of attack. For this test, scaling was based on the modified Ruff method with scale velocity determined by constant Weber number and water film Weber number. Models were un-swept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocity of 100 kt (52 m/s), droplet median volume diameter (MVD) of 195 μm, and stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 5° and 7°. It was shown that good ice shape scaling was achieved with constant Weber number for NACA 0012 airfoils with angles of attack up to 7°. The second test, completed in May 2010, was primarily focused on obtaining transient and steady-state iced aerodynamics, ice accretion and shedding, and thermal icing validation data from an oscillating airfoil section over some selected ranges of icing conditions and blade assembly operational configurations. The model used was a 38.1-cm chord Sikorsky SC2110 airfoil section installed on an airfoil test apparatus with oscillating capability in the IRT. For two test conditions, size and condition scaling were performed. It was shown that good ice shape scaling was achieved for the SC2110 airfoil in dynamic pitching motion. The data obtained will be applicable for future main rotor blade and tail rotor blade applications.

  2. Evaluation of aerial survey methods for Dall's sheep

    USGS Publications Warehouse

    Udevitz, Mark S.; Shults, Brad S.; Adams, Layne G.; Kleckner, Christopher

    2006-01-01

    Most Dall's sheep (Ovis dalli dalli) population-monitoring efforts use intensive aerial surveys with no attempt to estimate variance or adjust for potential sightability bias. We used radiocollared sheep to assess factors that could affect sightability of Dall's sheep in standard fixed-wing and helicopter surveys and to evaluate feasibility of methods that might account for sightability bias. Work was conducted in conjunction with annual aerial surveys of Dall's sheep in the western Baird Mountains, Alaska, USA, in 2000–2003. Overall sightability was relatively high compared with other aerial wildlife surveys, with 88% of the available, marked sheep detected in our fixed-wing surveys. Total counts from helicopter surveys were not consistently larger than counts from fixed-wing surveys of the same units, and detection probabilities did not differ for the 2 aircraft types. Our results suggest that total counts from helicopter surveys cannot be used to obtain reliable estimates of detection probabilities for fixed-wing surveys. Groups containing radiocollared sheep often changed in size and composition before they could be observed by a second crew in units that were double-surveyed. Double-observer methods that require determination of which groups were detected by each observer will be infeasible unless survey procedures can be modified so that groups remain more stable between observations. Mean group sizes increased during our study period, and our logistic regression sightability model indicated that detection probabilities increased with group size. Mark–resight estimates of annual population sizes were similar to sightability-model estimates, and confidence intervals overlapped broadly. We recommend the sightability-model approach as the most effective and feasible of the alternatives we considered for monitoring Dall's sheep populations.

  3. Evaluating Diversity Metrics: A Critique of the Equity Index Method

    ERIC Educational Resources Information Center

    Royal, Kenneth D.; Flammer, Keven

    2015-01-01

    Background: Evaluating diversity, inclusivity, and equity remains both a prevalent topic in education and a difficult challenge for most evaluators. Traditional metrics used to evaluate these constructs include questionnaires, focus groups, and anonymous comment solicitations. While each of these approaches offer value, they also possess a number…

  4. Method for Evaluating Germicidal Ultraviolet Inactivation of Biocontaminated Surfaces.

    PubMed

    Gorsuch, Emily L; Grinshpun, Sergey A; Willeke, Klaus; Reponen, Tiina; Moss, Clyde E; Jensen, Paul A

    1998-01-01

    Safety issues related to work-site conditions often deal with potential worker exposure to infectious airborne microorganisms due to their dissemination in indoor air and contamination of surfaces. Germicidal ultraviolet (GUV) radiation is used in health-care settings and other occupational environments for microbial inactivation. In this study, a new methodology for determining the efficiency of GUV microbial inactivation of surfaces was developed and evaluated. The method utilizes identical chambers in which test microorganisms are irradiated on agar surfaces at different humidity and irradiation intensity levels. The effects of GUV intensity and exposure time on microbial inactivation were examined for Micrococcus luteus and Serratia marcescens. It was found that at low humidity levels (20-25%) both organisms can be inactivated with at least 95% efficiency if the GUV intensity exceeds 50 μW/cm² for at least 3-5 min (corresponding to a dose of ~ 10 mJ/cm²). The radiation dose needed for effective inactivation of S. marcescens, as measured by a UV meter near the microbial sample, was found not to be affected by the humidity level, whereas that of M. luteus increased at higher humidities. The findings of this study can be used to determine sufficient GUV inactivation doses for occupational environments with various microbial contaminations.
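
    The quoted threshold is consistent with the usual dose relation, dose = irradiance x exposure time. The short check below simply evaluates that product for the intensity and exposure times mentioned above.

      # UV dose = irradiance x time; 50 uW/cm^2 = 0.05 mW/cm^2
      for minutes in (3, 4, 5):
          dose = 0.05 * minutes * 60.0        # mW/cm^2 x s = mJ/cm^2
          print(f"{minutes} min at 50 uW/cm^2 -> {dose:.1f} mJ/cm^2")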

  5. An acoustic method of automatically evaluating patient inhaler technique.

    PubMed

    Holmes, Martin S; D'Arcy, Shona; Costello, Richard W; Reilly, Richard B

    2013-01-01

    Chronic respiratory diseases such as asthma and chronic obstructive pulmonary disease (COPD) affect millions of people worldwide. Inhalers are devices utilized to deliver medication in small doses directly to the airways in the treatment of asthma and COPD. Despite the proven effectiveness of inhaler medication in controlling symptoms, many patients suffer from technique errors leading to decreased levels of medication efficacy. This study employs a recording device attached to a commonly used dry powder inhaler (DPI) to obtain the acoustic signals of patients taking their inhaler medication. The audio files provide information on how a patient uses their inhaler over a period of one month. Manually listening to such a large quantity of audio files would be a time consuming and monotonous process and therefore an algorithm that could automatically carry out this task would be of great benefit. An algorithm was thus designed and developed to detect inhalation, exhalation and blister events in the audio signals, analyze the quantity of each event, the order in which the events took place and finally provide a score on the overall performance. The algorithm was tested on a dataset of 185 audio files obtained from five community dwelling asthmatic patients in real world environments. Evaluation of the algorithm on this dataset revealed that it had an accuracy of 92.8% in deciding the correct technique score compared to manual detection methods.

  6. Efficacy methods to evaluate health communication and marketing campaigns.

    PubMed

    Evans, W Douglas; Uhrig, Jennifer; Davis, Kevin; McCormack, Lauren

    2009-06-01

    Communication and marketing are growing areas of health research, but relatively few rigorous efficacy studies have been conducted in these fields. In this article, we review recent health communication and marketing efficacy research, present two case studies that illustrate some of the considerations in making efficacy design choices, and advocate for greater emphasis on rigorous health communication and marketing efficacy research and the development of a research agenda. Much of the outcomes research in health communication and marketing, especially mass media, utilizes effectiveness designs conducted in real time, in the media markets or communities in which messages are delivered. Such evaluations may be impractical or impossible, however, limiting opportunities to advance the state of health communication and marketing research and the knowledge base on effective campaign strategies, messages, and channels. Efficacy and effectiveness studies use similar measures of behavior change. Efficacy studies, however, offer greater opportunities for experimental control, message exposure, and testing of health communication and marketing theory. By examining the literature and two in-depth case studies, we identify advantages and limitations to efficacy studies. We also identify considerations for when to adopt efficacy and effectiveness methods, alone or in combination. Finally, we outline a research agenda to investigate issues of internal and external validity, mode of message presentation, differences between marketing and message strategies, and behavioral outcomes.

  7. Enhancement of wind stress evaluation method under storm conditions

    NASA Astrophysics Data System (ADS)

    Chen, Yingjian; Yu, Xiping

    2016-12-01

    Wind stress is an important driving force for many meteorological and oceanographical processes. However, most of the existing methods for evaluation of the wind stress, including various bulk formulas in terms of the wind speed at a given height and formulas relating the roughness height of the sea surface with wind conditions, predict an ever-increasing tendency of the wind stress coefficient as the wind speed increases, which is inconsistent with the field observations under storm conditions. The wave boundary layer model, which is based on the momentum and energy conservation, has the advantage to take into account the physical details of the air-sea interaction process, but is still invalid under storm conditions without a modification. By including the energy dissipation due to the presence of sea spray, which is speculated to be an important aspect of the air-sea interaction under storm conditions, the wave boundary layer model is improved in this study. The improved model is employed to estimate the wind stress caused by an idealized tropical cyclone motion. The computational results show that the wind stress coefficient reaches its maximal value at a wind speed of about 40 m/s and decreases as the wind speed further increases. This is in fairly good agreement with the field data.
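
    For context, the quantity under discussion is the bulk wind stress tau = rho_air x C_d x U10^2. The sketch below evaluates it with a simple drag-coefficient curve that rises with wind speed, peaks near 40 m/s and then declines, mimicking the behaviour reported above; the coefficient values are illustrative assumptions, not the authors' wave boundary layer model.

      import numpy as np

      rho_air = 1.2   # air density (kg/m^3)

      def drag_coefficient(u10):
          # illustrative C_d(U10): linear increase up to 40 m/s, then a gentle decline
          cd = (0.8 + 0.065 * u10) * 1.0e-3
          cd_peak = (0.8 + 0.065 * 40.0) * 1.0e-3
          return np.where(u10 <= 40.0, cd, cd_peak * np.sqrt(40.0 / u10))

      u10 = np.array([10.0, 20.0, 40.0, 60.0])            # 10 m wind speeds (m/s)
      tau = rho_air * drag_coefficient(u10) * u10**2      # wind stress (N/m^2)
      for u, t in zip(u10, tau):
          print(f"U10 = {u:4.1f} m/s  ->  C_d = {float(drag_coefficient(u)):.4f}, tau = {t:.2f} N/m^2")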

  8. Method of evaluating fluid loss in subsurface fracturing operations

    SciTech Connect

    Lee, W.S.; McMechan, D.E.; McDaniel, B.M.

    1993-08-31

    A method is described for evaluating characteristics of a subsurface formation fracturing program, comprising the steps of: pumping fluid into said formation for a first pumping time; shutting in said formation for a first shut-in time to establish a first set of pressure decline data; determining a fluid efficiency for said formation from said first pumping time and said first shut-in time; pumping fluid into said formation for a second pumping time to reopen said fracture; shutting in said formation for a second shut-in time to determine a second set of pressure decline data; determining a late time fluid loss coefficient in response to said second set of pressure decline data; determining an early time fluid loss coefficient in response to formation and fracturing fluid parameters; utilizing said determined early time fluid loss coefficient and said late time fluid loss coefficient to estimate a maximum spurt time; functionally relating said estimated spurt time to said determined early time fluid loss coefficient to estimate a spurt volume for said formation; and functionally relating said determined early time fluid loss coefficient and said established spurt time to said determined fluid efficiency in a balance relationship to establish a margin of error in said balance relationship; and iteratively changing said first-determined spurt time in response to said established margin of error, and iteratively re-determining said spurt volume until a predetermined tolerance in said balance relationship is achieved.