Science.gov

Sample records for DEM method evaluation

  1. Performance Evaluation of Four DEM-Based Fluvial Terrace Mapping Methods Across Variable Geomorphic Settings: Application to the Sheepscot River Watershed, Maine

    NASA Astrophysics Data System (ADS)

    Hopkins, A. J.; Snyder, N. P.

    2014-12-01

    Fluvial terraces are utilized in geomorphic studies as recorders of land-use, climate, and tectonic history. Advances in digital topographic data, such as high-resolution digital elevation models (DEMs) derived from airborne lidar surveys, have promoted the development of several methods for extracting terraces from DEMs based on their characteristic morphology. The post-glacial landscape of the Sheepscot River watershed, Maine, where strath and fill terraces are present and record Pleistocene deglaciation, Holocene eustatic forcing, and Anthropocene land-use change, was selected for a comparison of terrace mapping methodologies. At four study sites within the watershed, terraces were manually mapped to facilitate the comparison between fully and semi-automated DEM-based mapping procedures, including: (1) edge detection functions in Matlab, (2) feature classification algorithms developed by Wood (1996), (3) spatial relationships between interpreted terraces and surrounding topography (Walter et al., 2007), and (4) the TerEx terrace mapping toolbox developed by Stout and Belmont (2014). Each method was evaluated based on its accuracy and ease of implementation. The four study sites have varying longitudinal slope (0.1% - 5%), channel width (<5 m - 30 m), relief in the surrounding landscape (15 m - 75 m), type and density of surrounding land use, and mapped surficial geologic units. In general, all methods overestimate terrace areas (average predicted area 136% of the manually defined area). Surrounding topographic relief appears to exert the greatest control on mapping accuracy, with the most accurate results (92% of terrace area mapped by the Walter et al., 2007 method) achieved where the river valley was most confined by adjacent hillslopes. Accuracy decreased for study sites surrounded by a low-relief landscape, with the most accurate results achieved by the TerEx toolbox (Stout and Belmont, 2014; predicted areas were 45% and 89% of manual delineations
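
    The accuracy figures quoted above reduce to two simple area ratios. A minimal Python sketch of that scoring (an assumed reading of the metric, not the authors' code) is:

      import numpy as np

      def terrace_area_scores(auto_mask, manual_mask):
          """Score a DEM-based terrace mask against a manual delineation."""
          auto = np.asarray(auto_mask, bool)
          manual = np.asarray(manual_mask, bool)
          predicted_pct = 100.0 * auto.sum() / manual.sum()             # >100 = overestimation
          recovered_pct = 100.0 * (auto & manual).sum() / manual.sum()  # fraction recovered
          return predicted_pct, recovered_pct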

  2. Pre-Conditioning Optimization Methods and Display for Mega-Pixel DEM Reconstructions

    NASA Astrophysics Data System (ADS)

    Sette, A. L.; DeLuca, E. E.; Weber, M. A.; Golub, L.

    2004-05-01

    The Atmospheric Imaging Assembly (AIA) for the Solar Dynamics Observatory will provide an unprecedented rate of mega-pixel solar corona data. This hastens the need for faster differential emission measure (DEM) reconstruction methods, as well as scientifically useful ways of displaying this information for mega-pixel datasets. We investigate pre-conditioning methods, which optimize DEM reconstruction by making an informed initial DEM guess that takes advantage of the sharing of DEM information among the pixels in an image. In addition, we evaluate the effectiveness of different DEM image display options, including single temperature emission maps and time-progression DEM movies. This work is supported under contract SP02D4301R to the Lockheed Martin Corp.

  3. Gauss-Newton method for DEM co-registration

    NASA Astrophysics Data System (ADS)

    Wang, Kunlun; Zhang, Tonggang

    2015-12-01

    Digital elevation model (DEM) co-registration is an active research problem and a critical technology for multi-temporal DEM analysis, with wide potential application in fields such as geological hazard monitoring. Currently, the least-squares principle is used in most DEM co-registration methods, in which the matching parameters are obtained by iteration and the surface co-registration is then accomplished. To improve the iterative convergence rate, a Gauss-Newton method for DEM co-registration (G-N) is proposed in this paper. A gradient formula based on a gridded discrete surface is derived in theory, thereby resolving the difficulty of applying the Gauss-Newton method to DEM matching. With the G-N algorithm, the surfaces approach each other along the maximal gradient direction, so the iterative convergence and the efficiency of the new method are greatly enhanced. According to experimental results on simulated datasets, the average convergence rates of the rotation and translation parameters of the G-N algorithm are increased by 40% and 15%, respectively, compared to those of the ICP algorithm, and its computational efficiency is 74.9% better.
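
    To make the Gauss-Newton step concrete, the following minimal sketch (a toy under stated assumptions, not the paper's G-N implementation) estimates a horizontal shift plus a vertical offset between two gridded surfaces, using the gridded surface gradients exactly where the linearization needs them. The synthetic test surface, shift parameterization, and tolerances are assumptions:

      import numpy as np
      from scipy.ndimage import map_coordinates

      def gauss_newton_coregister(dem_a, dem_b, iters=30, tol=1e-8):
          """Estimate p = [tx, ty, tz] such that dem_b(x - tx, y - ty) + tz ~ dem_a(x, y)."""
          ny, nx = dem_a.shape
          yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
          p = np.zeros(3)
          for _ in range(iters):
              # bilinear sample of dem_b under the current horizontal shift
              b = map_coordinates(dem_b, [yy - p[1], xx - p[0]], order=1, mode='nearest')
              gy, gx = np.gradient(b)                     # gridded surface gradients
              r = (dem_a - (b + p[2])).ravel()            # elevation residuals
              J = np.column_stack([gx.ravel(), gy.ravel(), -np.ones(r.size)])
              dp, *_ = np.linalg.lstsq(J, r, rcond=None)  # solve J dp ~ r
              p -= dp                                     # Gauss-Newton update
              if np.linalg.norm(dp) < tol:
                  break
          return p

      # synthetic check: recover an imposed shift of (2.3, -1.7) cells and +5.0 m
      yy, xx = np.mgrid[0:128, 0:128].astype(float)
      dem_a = 40.0 * np.sin(xx / 9.0) * np.cos(yy / 13.0)
      dem_b = map_coordinates(dem_a, [yy - 1.7, xx + 2.3], order=3, mode='nearest') - 5.0
      print(gauss_newton_coregister(dem_a, dem_b))        # ~ [2.3, -1.7, 5.0]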

  4. An improved method to represent DEM uncertainty in glacial lake outburst flood propagation using stochastic simulations

    NASA Astrophysics Data System (ADS)

    Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan

    2015-10-01

    Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs of different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, outputs based on the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those based on the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those based on a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating the potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
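
    The Monte Carlo least-cost-path idea can be sketched compactly: perturb the DEM with a random vertical error field, recompute a least-cost path on each realisation, and accumulate per-cell visit frequencies as an uncertainty continuum. The sketch below assumes uncorrelated Gaussian error, an elevation-based cost, and 8-connectivity; the authors' released MC-LCP code will differ in these details:

      import heapq
      import numpy as np

      def least_cost_path(cost, start, goal):
          """Dijkstra over an 8-connected grid; returns the list of cells on the path."""
          ny, nx = cost.shape
          dist = np.full((ny, nx), np.inf)
          prev = {}
          dist[start] = 0.0
          heap = [(0.0, start)]
          while heap:
              d, (i, j) = heapq.heappop(heap)
              if (i, j) == goal:
                  break
              if d > dist[i, j]:
                  continue
              for di in (-1, 0, 1):
                  for dj in (-1, 0, 1):
                      ni, nj = i + di, j + dj
                      if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                          nd = d + cost[ni, nj] * np.hypot(di, dj)
                          if nd < dist[ni, nj]:
                              dist[ni, nj] = nd
                              prev[(ni, nj)] = (i, j)
                              heapq.heappush(heap, (nd, (ni, nj)))
          path, node = [goal], goal
          while node != start:
              node = prev[node]
              path.append(node)
          return path

      rng = np.random.default_rng(42)
      yy, xx = np.mgrid[0:60, 0:100]
      dem = 0.02 * (yy - 30.0) ** 2                      # a straight synthetic valley
      counts = np.zeros(dem.shape)
      for _ in range(50):                                # one path per DEM error realisation
          noisy = dem + rng.normal(0.0, 0.5, dem.shape)  # vertical error field
          cost = noisy - noisy.min() + 1.0               # low ground is cheap to traverse
          for cell in least_cost_path(cost, (30, 0), (30, 99)):
              counts[cell] += 1.0
      frequency = counts / 50.0    # per-cell flow-corridor frequency in [0, 1]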

  5. Constructing DEM from characteristic terrain information using HASM method

    NASA Astrophysics Data System (ADS)

    Song, Dunjiang; Yue, Tianxiang; Du, Zhengping; Wang, Qingguo

    2009-09-01

    In the construction of DEMs, terrain features (e.g. valleys or stream lines, ridges, peaks, saddle points) are important for improving DEM accuracy and have many applications in hydrology, precision agriculture, military trajectory planning, etc. HASM (High Accuracy Surface Modeling) is a surface modelling method based on surface theory, but at present it is used only for the interpolation of scattered points. The work in this paper therefore attempts to construct a DEM from characteristic terrain information, namely stream lines and scattered points, using the HASM method. The procedure consists of the following steps. First, a TIN (Triangulated Irregular Network) is generated from the scattered points. Second, each segment of the stream lines is oriented to represent the flow direction, and a tree data structure (with parent, children and brothers) is used to represent all the stream-line segments; a segment is a curve that does not intersect other segments, and a Water Course Flow (WCF) line is a set of segments connected piecewise, without overlap or repetition, from the uppermost to the lowermost reaches. From the tree data structure, all possible WCF lines are enumerated, and the start and end points of each WCF line are predicted by searching the TIN. Third, given a cell size, a 2-D matrix for the study region is built, and the cells traversed by the stream lines are assigned values by linear interpolation along each WCF line. Fourth, all valued cells traversed by the stream lines and the cells from the scattered points are gathered as known scattered sampling points, and HASM is then used to construct the final DEM. A case study on a typical plateau landform of China, the KongTong gully of the Dongzhi Plateau, Qingyang, Gansu Province, is presented. The original data were manually vectorized from scanned 1:10,000 maps and include scattered points, stream lines
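
    The third step above (valuing the cells a stream line traverses by linear interpolation along the line) can be sketched as follows; the half-cell sampling step and the dictionary output are assumptions made for brevity, not the paper's implementation:

      import numpy as np

      def rasterize_wcf(vertices, z_start, z_end, cell, shape):
          """Return {(row, col): z} for every cell traversed by the WCF polyline."""
          pts = np.asarray(vertices, float)
          seg = np.diff(pts, axis=0)
          seglen = np.hypot(seg[:, 0], seg[:, 1])
          s = np.concatenate([[0.0], np.cumsum(seglen)])   # chainage at the vertices
          total = s[-1]
          cells = {}
          for d in np.arange(0.0, total, cell / 2.0):      # dense half-cell sampling
              k = np.searchsorted(s, d, side='right') - 1
              t = (d - s[k]) / seglen[k]
              x, y = pts[k] + t * seg[k]
              z = z_start + (z_end - z_start) * d / total  # linear in chainage
              rc = (int(y // cell), int(x // cell))
              if 0 <= rc[0] < shape[0] and 0 <= rc[1] < shape[1]:
                  cells.setdefault(rc, z)                  # keep first value per cell
          return cells

      known = rasterize_wcf([(5.0, 5.0), (40.0, 22.0), (90.0, 60.0)],
                            z_start=310.0, z_end=285.0, cell=5.0, shape=(20, 20))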

  6. Laser Altimeter Evaluation of an SRTM DEM for Western Washington State

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Harding, D. J.

    2002-05-01

    Interferometric Synthetic Aperture Radar (InSAR) and laser altimeter measurements of topography provide complementary approaches to characterizing landforms. Results from the Shuttle Radar Topography Mission (SRTM) will provide an unprecedented, near-global Digital Elevation Model (DEM) at 30 m resolution using a single-pass C-band (5.6 cm wavelength) radar interferometer. In vegetated terrains, the C-band radar energy penetrates part way into the vegetation cover. The elevation of the resulting radar phase center, somewhere between the canopy top and the underlying ground, depends on the vegetation height, density, structure, and presence or absence of foliage. The high vertical accuracy and spatial resolution achieved by laser altimeters, and their capability to directly measure the vertical distribution of vegetation and underlying ground topography, provide a method to evaluate InSAR representations of topography. In order to provide an independent assessment of SRTM DEM accuracy and error characteristics, a simple but rigorous methodology based on comparisons to airborne and satellite laser altimeter profiles has been developed and tested. Initially, an SRTM DEM produced for a large part of western Washington State by the JPL PI processor has been compared to Shuttle Laser Altimeter (SLA) and airborne Scanning Lidar Imager of Canopies by Echo Recovery (SLICER) data. The accuracy of the laser altimeter data sets has been previously characterized. For SLICER profiles, each about 40 km long, the mean and standard deviation of elevation differences between the SRTM DEM and the SLICER-defined canopy top and ground are computed. The SRTM DEM is usually located between the canopy top and the ground. A poor correlation is observed between the per-pixel error estimate provided with the SRTM DEM and the observed SLICER-to-SRTM elevation differences. In addition to these profile comparisons, a very high resolution DEM acquired by Terrapoint, LLC for the Puget Sound Lidar Consortium

  7. Stochastic Discrete Equation Method (sDEM) for two-phase flows

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.; Rodio, M.G.

    2015-10-15

    A new scheme for the numerical approximation of a five-equation model taking into account Uncertainty Quantification (UQ) is presented. In particular, the Discrete Equation Method (DEM) for the discretization of the five-equation model is modified for including a formulation based on the adaptive Semi-Intrusive (aSI) scheme, thus yielding a new intrusive scheme (sDEM) for simulating stochastic two-phase flows. Some reference test-cases are performed in order to demonstrate the convergence properties and the efficiency of the overall scheme. The propagation of initial conditions uncertainties is evaluated in terms of mean and variance of several thermodynamic properties of the two phases.

  8. The Discrepancy Evaluation Model. II. The Application of the DEM to an Educational Program.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The discrepancy evaluation model (DEM) specifies that evaluation consists of comparing performance with a standard, yielding discrepancy information. DEM is applied to programs in order to improve the program by making standards-performance-discrepancy cycles explicit and public. Action-oriented planning is involved in creating standards; a useful…

  9. A New DEM Generalization Method Based on Watershed and Tree Structure.

    PubMed

    Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang

    2016-01-01

    DEM generalization is the basis of multi-dimensional observation and of expressing and analyzing terrain, and it is at the core of building multi-scale geographic databases. Many researchers have therefore studied both the theory and the methods of DEM generalization. This paper proposes a new terrain generalization method that extracts feature points by constructing a tree model reflecting the nested relationships of watershed characteristics. Using the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data, feature points were extracted in every single watershed to reconstruct the DEM. Generalization from a 1:10000 DEM to a 1:50000 DEM was achieved by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some classic methods, such as aggregation, resampling, and VIP, against the original 1:50000 DEM. The outcome shows that the method performs well: it can choose the best threshold according to the target generalization scale to decide the density of feature points in the watershed, and it preserves the skeleton of the terrain, meeting the needs of different levels of generalization. Additionally, through overlaid contour comparison, elevation statistics, and slope and aspect analysis, we found that the W8D algorithm performs well and effectively in terrain representation. PMID:27517296
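
    The core loop of threshold-driven generalization can be illustrated in a few lines: select points that deviate strongly from a smoothed surface, then rebuild the DEM from them. The smoothing window, the normalised-deviation test, and the griddata rebuild below are assumptions standing in for the paper's watershed/tree machinery:

      import numpy as np
      from scipy.ndimage import uniform_filter
      from scipy.interpolate import griddata

      def generalize_dem(dem, threshold=0.06, win=9):
          """Keep cells deviating strongly from a smoothed surface; rebuild by interpolation."""
          smooth = uniform_filter(dem.astype(float), size=win)
          dev = np.abs(dem - smooth)
          keep = dev >= threshold * max(dev.max(), 1e-12)    # threshold-selected feature points
          ii, jj = np.nonzero(keep)
          ny, nx = dem.shape
          ii = np.concatenate([ii, [0, 0, ny - 1, ny - 1]])  # corners keep the hull full
          jj = np.concatenate([jj, [0, nx - 1, 0, nx - 1]])
          gi, gj = np.mgrid[0:ny, 0:nx]
          return griddata((ii, jj), dem[ii, jj], (gi, gj), method='linear')

      # e.g. generalized = generalize_dem(dem5m, threshold=0.06)  # 0.06 is the paper's best threshold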

  10. Evaluation of DEM generation accuracy from UAS imagery

    NASA Astrophysics Data System (ADS)

    Santise, M.; Fornari, M.; Forlani, G.; Roncella, R.

    2014-06-01

    The growing use of UAS platforms for aerial photogrammetry has come with a new family of highly automated Computer Vision processing software expressly built to manage the peculiar characteristics of these image blocks. It is of interest to photogrammetrists and professionals, therefore, to find out whether the image orientation and DSM generation methods implemented in such software are reliable and whether the DSMs and orthophotos are accurate. On a more general basis, it is interesting to figure out whether it is still worth applying the standard rules of aerial photogrammetry to the case of drones, achieving the same inner strength and the same accuracies. With such goals in mind, a test area was set up at the University Campus in Parma. A large number of ground points was measured on natural as well as signalized points, to provide a comprehensive test field for checking the accuracy performance of different UAS systems. In the test area, both points at ground level and features on building roofs were measured, in order to obtain well-distributed vertical support as well. Control points were set on different types of surfaces (buildings, asphalt, targets, fields of grass and bumps); break lines were also employed. The paper presents the results of a comparison between two different surveys for DEM (Digital Elevation Model) generation, performed at 70 m and 140 m flying height, using a Falcon 8 UAS.

  11. Development and Evaluation of Simple Measurement System Using the Oblique Photo and DEM

    NASA Astrophysics Data System (ADS)

    Nonaka, H.; Sasaki, H.; Fujimaki, S.; Naruke, S.; Kishimoto, H.

    2016-06-01

    When a disaster occurs, its damage must be assessed as soon as possible. One source of damage information is imagery, such as surveillance camera imagery, satellite imagery, or photographs taken from a helicopter. Especially in the initial stage, a quick, approximate estimate of the damage situation is more valuable than a lengthy detailed investigation. If targets in this imagery can be measured, quantities such as the length of a lava flow, the reach of cinders, or a sediment volume in a volcanic eruption or landslide can be estimated. We therefore developed a simplified measurement system that uses such photographs to measure various quantities in a short time. The system requires a DEM in addition to the photographs, but a previously acquired DEM can be used. Measuring an object requires only two steps. The first is the determination of the position and orientation from which the photograph was taken, which we determine using the DEM. The second is the measurement of the object in the photograph. In this paper, we describe the system and show experimental results evaluating it. In the experiment, we measured the summit of Mt. Usu using the system's two measurement methods. The measurement took about one hour, and the differences between the measurement results and airborne LiDAR data are less than 10 meters.
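
    The second step can be pictured as intersecting the viewing ray of an image point with the DEM. A minimal sketch follows (an assumed geometry with a simple ray-marching scheme, not the authors' system; grid origin at (0, 0) and a flat synthetic DEM are assumptions):

      import numpy as np

      def ray_dem_intersection(origin, direction, dem, cell=10.0, step=2.0, max_dist=5000.0):
          """March a viewing ray until it first drops below the DEM surface."""
          d = np.asarray(direction, float)
          d = d / np.linalg.norm(d)
          o = np.asarray(origin, float)
          for s in np.arange(step, max_dist, step):
              q = o + s * d
              row, col = int(q[1] // cell), int(q[0] // cell)
              if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
                  return None              # ray left the DEM without hitting the ground
              if q[2] <= dem[row, col]:
                  return q                 # first sample at or below the terrain
          return None

      # synthetic check: a flat 100 m plain seen from a camera at 500 m altitude
      dem = np.full((100, 100), 100.0)
      hit = ray_dem_intersection(origin=(500.0, 500.0, 500.0),
                                 direction=(0.3, 0.2, -0.5), dem=dem)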

  12. A coupled DEM-CFD method for impulse wave modelling

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Utili, Stefano; Crosta, GiovanBattista

    2015-04-01

    Rockslides can be characterized by a rapid evolution, up to a possible transition into a rock avalanche, which can be associated with an almost instantaneous collapse and spreading. Different examples are available in the literature, but the Vajont rockslide is quite unique for its morphological and geological characteristics, as well as for the type of evolution and the availability of long-term monitoring data. This study advocates the use of a DEM-CFD framework for modelling the generation of hydrodynamic waves due to the impact of a rapidly moving rockslide or rock-debris avalanche. 3D DEM analyses in plane strain by a coupled DEM-CFD code were performed to simulate the rockslide from its onset to the impact with still water and the subsequent wave generation (Zhao et al., 2014). The predicted physical response is in broad agreement with the available observations. The numerical results are compared to those published in the literature and especially to Crosta et al. (2014). According to our results, the maximum computed run-up amounts to ca. 120 m and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (ca. 130 m and 190 m, respectively). In these simulations, the slope mass is considered permeable, such that the toe region of the slope can move submerged in the reservoir and the impulse water wave can also flow back into the slope mass. However, the upscaling of the grain size in the DEM model leads to an unrealistically high hydraulic conductivity, such that only a small amount of water is splashed onto the northern bank of the Vajont valley. Using a high fluid viscosity with the coarse-grain model has shown that both the slope and wave motions can be modelled more realistically. However, more detailed slope and fluid properties, and the need for computational efficiency, should be considered in future research work. This aspect has also been

  13. Evaluation of terrain datasets for LiDAR data thinning and DEM generation for watershed delineation applications

    NASA Astrophysics Data System (ADS)

    Olivera, F.; Ferreira, C.; Djokic, D.

    2010-12-01

    Watershed delineation based on Digital Elevation Models (DEMs) is currently a standard practice in hydrologic studies. Efforts to develop high-resolution DEMs continue, although the advantages of increased accuracy are partially offset by larger file sizes, more difficult handling, slower screen rendering, and increased computational effort. Among these efforts, those based on Light Detection and Ranging (LiDAR) pose the problem that interpolation techniques in commercially available GIS software packages (e.g., IDW, Spline, Kriging and TOPORASTER, among others) for developing DEMs from point elevations have difficulty processing large amounts of data. The terrain dataset is an alternative format for storing topographic data that intelligently decimates data points and creates simplified, yet equally accurate for practical purposes, DEMs or Triangular Irregular Networks (TINs). This study uses terrain datasets to evaluate the impact that the thinning method (i.e., window size and z-value), pyramid level, and interpolation technique (linear or natural neighbor) used to create the DEMs have on the watersheds delineated from them. Two case studies were considered for assessing the effect of the different methods and techniques: one of dendritic topography in Williamson Creek, Austin, Texas, and the other of deranged topography in Hillsborough County, Florida. The results were compared using three standardized error metrics that measure the accuracy of the watershed boundaries, together with computational effort. For Williamson Creek (steeper terrain), neither point thinning during terrain creation nor the choice of interpolation method affected the watershed delineation, while in Hillsborough County (flat terrain) the point thinning method and interpolation technique strongly influenced the resulting delineation.
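
    A window-based point-thinning rule of the kind terrain datasets apply can be sketched as follows; the "keep the lowest point plus anything more than a z-tolerance above it" rule, window size, and tolerance are assumptions for illustration:

      import numpy as np

      def thin_points(xyz, window=10.0, z_tol=0.15):
          """Per window: keep the lowest point plus any point more than z_tol above it."""
          xyz = np.asarray(xyz, float)
          keys = np.floor(xyz[:, :2] / window).astype(int)
          kept = set()
          for key in np.unique(keys, axis=0):
              idx = np.nonzero((keys == key).all(axis=1))[0]
              lowest = idx[np.argmin(xyz[idx, 2])]
              kept.add(lowest)
              kept.update(i for i in idx if xyz[i, 2] - xyz[lowest, 2] > z_tol)
          return xyz[sorted(kept)]

      rng = np.random.default_rng(7)
      n = 50000
      xy = rng.uniform(0.0, 200.0, (n, 2))
      z = 0.01 * xy[:, 0] + rng.normal(0.0, 0.05, n)   # gently sloping, slightly noisy
      thinned = thin_points(np.column_stack([xy, z]))  # near-flat windows collapse to few points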

  14. Research of the gas-solid flow character based on the DEM method

    NASA Astrophysics Data System (ADS)

    Wang, Xueyao; Xiao, Yunhan

    2011-12-01

    Numerical simulation of gas-solid flow behaviors in a rectangular fluidized bed is carried out three-dimensionally by the discrete element method (DEM). Eulerian and Lagrangian methods are employed to deal with the gas phase and the solid phase, respectively. Collision forces among particles, impact forces between particles and walls, drag force, gravity, the Magnus lift force and the Saffman lift force are considered when establishing the mathematical models, and a soft-sphere model is used to describe particle collisions. In addition, an Euler-Euler method is also used to model the solid phase for comparison with the DEM results. The flow patterns, particle mean velocities, particle diffusion and pressure drop of the bed under typical operating conditions are obtained. The results show that the DEM can describe detailed interactions among particles, while the Euler-Euler method cannot capture the micro-scale character. Whichever method is used, the diffusion of particles increases with gas velocity. However, because the Euler-Euler method cannot simulate particle clustering and collisions, the collisional energy loss cannot be calculated and the diffusion it predicts is larger. In addition, the DEM shows that, as the carrying capacity of the gas strengthens, more and more particles are carried upward and a dense suspension upflow pattern can form; the Euler-Euler results are not consistent with this real situation.
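
    The soft-sphere model named above resolves contacts by allowing small overlaps between particles. A minimal sketch of the normal contact force for two spheres (a linear spring-dashpot form; the stiffness and damping values are assumptions) is:

      import numpy as np

      def normal_contact_force(x1, x2, v1, v2, r1, r2, kn=1.0e4, cn=2.0):
          """Linear spring-dashpot normal force on particle 1 from particle 2."""
          n = x2 - x1
          dist = np.linalg.norm(n)
          overlap = (r1 + r2) - dist
          if overlap <= 0.0:
              return np.zeros(3)                 # spheres not in contact
          n = n / dist
          vn = np.dot(v2 - v1, n)                # normal approach speed (< 0 when closing)
          f = kn * overlap - cn * vn             # spring + dashpot magnitude
          return -f * n                          # pushes particle 1 away from particle 2

      # an overlapping, approaching pair: repulsive force on particle 1 points downward
      f1 = normal_contact_force(np.zeros(3), np.array([0.0, 0.0, 0.018]),
                                np.zeros(3), np.array([0.0, 0.0, -0.1]),
                                r1=0.01, r2=0.01)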

  15. Structural and Volumetric re-evaluation of the Vaiont landslide using DEM techniques

    NASA Astrophysics Data System (ADS)

    Superchi, Laura; Pedrazzini, Andrea; Floris, Mario; Genevois, Rinaldo; Ghirotti, Monica; Jaboyedoff, Michel

    2010-05-01

    On 9 October 1963 a catastrophic landslide occurred on the southern slope of the Vaiont dam reservoir. A mass of approximately 270 million m3 collapsed into the reservoir, generating a wave which overtopped the dam and hit the town of Longarone and other villages: almost 2000 people lost their lives. The large volume and high velocity of the landslide, combined with the great destruction and loss of life, make the Vaiont landslide a natural laboratory for investigating landslide failure mechanisms and propagation. Geological, structural, geomorphological, hydrogeological and geomechanical elements should therefore be re-analyzed using methods and techniques not available in the '60s. In order to better quantify the volume involved in the movement and to assess the failure mechanism, a structural study is a preliminary and necessary step. The structural features were investigated using a digital elevation model (DEM) of the pre- and post-landslide topography at a pixel size of 5 m and associated software (COLTOP-3D) to create a colored shaded relief map revealing the orientation of morphological features. The results allowed identification, on both the pre- and post-slide surfaces, of six main discontinuity sets, some of which directly influence the Vaiont landslide morphology. Recent and old field surveys allowed validation of the COLTOP-3D analysis results. To estimate the location and shape of the sliding surface and to evaluate the volume of the landslide, the SLBL (Sloping Local Base Level) method was used, a simple and efficient tool that allows a geometric interpretation of the failure surface based on a DEM. The SLBL application required a geological interpretation to define the contours of the landslide and to estimate the possible curvature of the sliding surface, which is defined by interpolating between points considered as limits of the landslide. The SLBL surface of the Vaiont landslide was obtained from the DEM reconstruction
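
    The SLBL iteration itself is compact: inside the mapped landslide contour, each cell is repeatedly lowered toward the mean of its neighbours minus a tolerance, converging to a curved surface interpolated between the landslide limits. A minimal sketch (the tolerance value and 4-neighbour scheme are assumptions; the mask is assumed to exclude the grid borders so the wrap-around of np.roll is harmless):

      import numpy as np

      def slbl(dem, mask, tol=0.05, iters=5000):
          """mask: True inside the landslide contour; returns the estimated failure surface."""
          z = dem.astype(float).copy()
          for _ in range(iters):
              nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                    np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0 - tol
              lower = mask & (nb < z)
              if not lower.any():
                  break
              z[lower] = nb[lower]
          return z

      # landslide volume estimate: integrate (dem - failure surface) over the mask
      # volume = ((dem - slbl(dem, mask))[mask]).sum() * cell_area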

  16. DEM-based Watershed Delineation - Comparison of Different Methods and Applications

    NASA Astrophysics Data System (ADS)

    Chu, X.; Zhang, J.; Tahmasebi Nasab, M.

    2015-12-01

    Digital elevation models (DEMs) are commonly used for large-scale watershed hydrologic and water quality modeling. With the aid of the latest LiDAR technology, submeter-scale DEM data are often available for many areas in the United States. Precise characterization of the detailed variations in surface microtopography using such high-resolution DEMs is crucial to the related watershed modeling. Various methods have been developed to delineate a watershed, including determination of flow directions and accumulations, identification of subbasin boundaries, and calculation of the relevant topographic parameters. The objective of this study is to examine different DEM-based watershed delineation methods by comparing their unique features and the discrepancies in their results. The study covers not only traditional watershed delineation methods but also a new puddle-based unit (PBU) delineation method. The specific topics and issues presented involve flow directions (the D8 single flow direction method vs. multi-direction methods), segmentation of stream channels, drainage systems (a single "depressionless" drainage network vs. a hierarchical depression-dominated drainage system), and hydrologic connectivity (static structural connectivity vs. dynamic functional connectivity). A variety of real topographic surfaces are selected and delineated using the selected methods. Comparison of the delineation results emphasizes the importance of method selection and highlights the methods' applicability and potential impacts on watershed modeling.
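
    The D8 single-flow-direction step mentioned above assigns each cell to the steepest-descent neighbour among its eight neighbours, with diagonal drops scaled by sqrt(2). A minimal sketch (plain loops for clarity; pits and flats return -1):

      import numpy as np

      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]

      def d8_flow_directions(dem):
          """Return, per cell, the index into OFFSETS of the steepest descent (-1 = pit/flat)."""
          ny, nx = dem.shape
          fdir = np.full((ny, nx), -1, dtype=int)
          for i in range(ny):
              for j in range(nx):
                  best, steepest = -1, 0.0
                  for k, (di, dj) in enumerate(OFFSETS):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < ny and 0 <= nj < nx:
                          drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                          if drop > steepest:
                              best, steepest = k, drop
                  fdir[i, j] = best
          return fdir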

  17. Discrete Element Method (DEM) Application to the Cone Penetration Test Using COUPi Model

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A. V.; Johnson, J.; Wilkinson, A.; DeGennaro, A. J.; Duvoy, P.

    2011-12-01

    The cone penetration test (CPT) is a soil strength measurement method to determine the tip resistance and sleeve friction versus depth while pushing a cone into regolith with controlled slow quasi-static speed. This test can also be used as an excellent tool to validate the discrete element method (DEM) model by comparing tip resistance and sleeve friction from experiments to model results. DEM by nature requires significant computational resources even for a limited number of particles. Thus, it is important to find particle and ensemble parameters that produce valuable results within reasonable computation times. The Controllable Objects Unbounded Particles Interaction (COUPi) model is a general physical DEM code being developed to model machine/regolith interactions as part of a NASA Lunar Science Institute sponsored project on excavation and mobility modeling. In this work, we consider how different particle shape and size distributions defined in the DEM influence the cone tip and friction sleeve resistance in a CPT DEM simulation. The results are compared to experiments with cone penetration in JSC-1A lunar regolith simulant. The particle shapes include spherical particles, particles composed from the union of three spheres, and some simple polyhedra. This focus is driven by the soil mechanics rule of thumb that particle size and shape distributions are the two most significant factors affecting soil strength. In addition to the particle properties, the packing configuration of an ensemble strongly affects soil strength. Bulk density of the regolith is an important characteristic that significantly influences the tip resistance and sleeve friction (Figure 1). We discuss different approaches used to control granular density in the DEM, including how to obtain higher bulk densities, using numerical "shaking" techniques and varying the friction coefficient during computations.

  18. A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen

    2010-01-01

    As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with total number of particles, computer memory, and total simulation computer processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedra shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric, are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data of JSC-1a obtained from a fine particle analyzer at the NASA Kennedy Space Center is also fitted to a similar empirical PSD function.
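
    An empirical particle size distribution of the kind discussed above can be fit in a few lines. The sketch below assumes a two-parameter Rosin-Rammler (Weibull) form and synthetic sieve data; it is not the report's actual formula or the JSC-1a dataset:

      import numpy as np
      from scipy.optimize import curve_fit

      def rosin_rammler(d, d63, n):
          """Cumulative mass fraction passing size d."""
          return 1.0 - np.exp(-(d / d63) ** n)

      # synthetic sieve data: size (microns) vs cumulative fraction passing
      size = np.array([10., 25., 50., 100., 200., 400., 800.])
      passing = np.array([0.05, 0.16, 0.35, 0.60, 0.82, 0.94, 0.99])

      (d63, n), _ = curve_fit(rosin_rammler, size, passing, p0=(100.0, 1.0))
      print(f"d63 = {d63:.1f} um, spread exponent n = {n:.2f}")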

  19. DEM Extraction from Worldview-3 Stereo-Images and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Hu, F.; Gao, X. M.; Li, G. Y.; Li, M.

    2016-06-01

    This paper validates the potential of Worldview-3 satellite images for large-scale topographic mapping, using Worldview-3 along-track stereo-images of the Yi Mountain area in Shandong Province, China, for DEM extraction and accuracy evaluation. First, eighteen accurate and evenly distributed GPS points are collected in the field and used as GCPs/check points; their image points are accurately measured, and tie points are extracted by image matching. Then, an RFM-based block adjustment to compensate for the systematic error in image orientation is carried out, and the geo-positioning accuracy is calculated and analysed. Next, for the two stereo-pairs of the block, DSMs are separately constructed and mosaicked as a whole, and the corresponding DEM is subsequently generated. Finally, compared with check points selected from a high-precision airborne LiDAR point cloud covering the same test area, the accuracy of the generated DEM with 2-meter grid spacing is evaluated by the maximum (max.), minimum (min.), mean and standard deviation (std.) values of the elevation biases. It is demonstrated that, for the Worldview-3 stereo-images used in our research, the planimetric accuracy without GCPs is about 2.16 m (mean error) and 0.55 m (std. error), which is superior to the nominal value, while the vertical accuracy is about -1.61 m (mean error) and 0.49 m (std. error); with a small number of GCPs located in the center and four corners of the test area, the systematic error can be well compensated. The std. value of the elevation biases between the generated DEM and the 7256 LiDAR check points is about 0.62 m. Considering the potential uncertainties in image point measurement, stereo matching and elevation editing, the accuracy of DEMs generated from Worldview-3 stereo-images should be even better. Judging from these results, Worldview-3 has the potential for 1:5000 or even larger scale mapping applications.
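
    The final evaluation step reduces to sampling the DEM at the check points and summarising the biases. A minimal sketch (bilinear sampling, a simple grid origin/spacing convention, and in-bounds check points are assumptions):

      import numpy as np
      from scipy.ndimage import map_coordinates

      def dem_bias_stats(dem, x0, y0, cell, check_xyz):
          """Return (max, min, mean, std) of DEM-minus-checkpoint elevation biases."""
          pts = np.asarray(check_xyz, float)
          rows = (pts[:, 1] - y0) / cell
          cols = (pts[:, 0] - x0) / cell
          z_dem = map_coordinates(dem, [rows, cols], order=1)   # bilinear sample
          bias = z_dem - pts[:, 2]
          return bias.max(), bias.min(), bias.mean(), bias.std()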

  20. Use of thermal infrared pictures for retrieving intertidal DEM by the waterline method: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Delacourt, C.; Allemand, P.

    2010-12-01

    Digital Elevation Models (DEMs) of intertidal zones are of growing interest for ecological and land development purposes, and they are a fundamental tool for monitoring current sedimentary movements in these low-energy environments. Such DEMs have to be constructed with centimetric resolution, as the topographic changes are not predictable and sediment displacements are weak. Direct construction of DEMs by GPS in these muddy environments is difficult: photogrammetric techniques are not efficient on uniformly coloured surfaces, and terrestrial laser scanners are difficult to stabilize on the mud due to humidity. In this study, we propose to improve and apply the waterline method to retrieve DEMs in intertidal zones. This technique is based on accurately monitoring the boundary between sediment and water during a whole tide rise with thermal infrared images; the DEM is built by stacking all these lines, calibrated by an immersed pressure sensor. Using thermal infrared pictures instead of optical ones improves the detection of the waterline, since mud and water have very different responses to solar heating and a large emissivity contrast. However, temperature retrieval from thermal infrared data is not trivial, since the luminance of an object is the sum of a radiative part and a reflective part, whose relative proportions are given by the emissivity. In the following equation, B is the equivalent blackbody luminance and Linc the incident luminance:

      Ltot = Lrad + Lrefl = εB + (1 - ε)Linc

    The infrared waterline technique has been used for monitoring a beach located on the Aber Benoit, 8.5 km from the open sea. The site consists mainly of mud, and waves are very small (less than one centimeter high), which are ideal conditions for the waterline method. A few measurements have been made to produce differential height maps of sediments. We reached a mean resolution of 2 cm and a vertical accuracy better than one centimeter. The results
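
    Inverting that relation for the blackbody term B is what sharpens the water/mud contrast in practice. A one-line sketch (the emissivity values below are illustrative assumptions, not the study's calibration):

      def blackbody_luminance(l_tot, l_inc, emissivity):
          """Invert L_tot = eps*B + (1 - eps)*L_inc for the blackbody term B."""
          return (l_tot - (1.0 - emissivity) * l_inc) / emissivity

      # Illustrative emissivities: water (~0.99) reflects little of the incident sky
      # luminance, dry mud (~0.95) reflects more, so the recovered B field sharpens
      # the waterline in the thermal image.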

  21. Combined DEM Extraction Method from StereoSAR and InSAR

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.

    2015-06-01

    A pair of SAR images acquired from different positions can be used to generate a digital elevation model (DEM). Two techniques exploiting this characteristic have been introduced: stereo SAR and interferometric SAR. They permit recovery of the third dimension (topography) and, at the same time, identification of the absolute position (geolocation) of pixels included in the imaged area, thus allowing the generation of DEMs. In this paper, a combined StereoSAR and InSAR adjustment model is constructed that unifies DEM extraction from InSAR and StereoSAR into the same coordinate system and thereby improves the three-dimensional positioning accuracy of the target. We assume that there are four images 1, 2, 3 and 4: one pair of SAR images (1, 2) meets the conditions required for InSAR processing, while the other pair (3, 4) forms a stereo image pair. The phase model is based on the rigorous InSAR imaging geometric model. The master image 1 and the slave image 2 are used in InSAR processing, but the slave image 2 is used only in establishing the model: its pixels are related to the corresponding pixels of the master image 1 through image coregistration coefficients, from which the corresponding phase is calculated, so the slave image is not required in the construction of the phase model. In the Range-Doppler (RD) model, the range equation and the Doppler equation are functions of the target geolocation, while in the phase equation the phase is also a function of the target geolocation. We exploit the combined adjustment model to correct deviations of the target geolocation; the problem of solving for the target thus becomes one of solving for three unknowns from seven equations. The model was tested for DEM extraction using spaceborne InSAR and StereoSAR data and compared with the InSAR and StereoSAR methods separately. The results showed that the model delivered better performance on the experimental imagery and can be used for DEM extraction applications.

  4. "blue Line" Conditioning of A New Physically-based Method For Dem Interpolation

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Teles, V.; Bras, R. L.

    A basic issue for Earth Science studies is the accurate representation of the topography. The increase of distributed modeling has stimulated the use of Digital Elevation Models (DEMs). Usually DEMs are created with mathematical and statistical interpolators that seek the best fit to the observed data. They generally create surfaces with unrealistic normal or log-normal distributions of slopes and curvatures. Landscape topography is known to have particular properties such as roughness and self-similarity, and the commonly used interpolators do not reproduce those characteristics. In order to build a topography with characteristic properties, a new physically-based approach was recently developed. This method couples a standard interpolation method with an erosion model, which constrains the interpolation with geomorphologic laws. The new approach gives more realistic surfaces than the commonly used interpolators. However, for some basins, or when data are scarce, this method, like common interpolators, is not able to recognize the main channels and ridges; in these cases, the interpolated surface has a wrong drainage network and a completely different landscape. In order to overcome this difficulty and improve the physically-based model's performance, a new procedure uses the "blue line" information to constrain the interpolated surface with the actual network. This information can be easily and accurately obtained from maps or aerial photographs, because only the planar coordinates of the network are needed as input. The steps of the "blue line" procedure are described, and case studies show the improvement due to the "blue line" information.

  23. High-resolution Pleiades DEMs and improved mapping methods for the E-Corinth marine terraces

    NASA Astrophysics Data System (ADS)

    de Gelder, Giovanni; Fernández-Blanco, David; Delorme, Arthur; Jara-Muñoz, Julius; Melnick, Daniel; Lacassin, Robin; Armijo, Rolando

    2016-04-01

    The newest generation of satellite imagery provides exciting new possibilities for highly detailed mapping, with ground resolutions of sub-metric pixels and absolute accuracy within a few meters. This opens new avenues for the analysis of geologic and geomorphic landscape features, especially since photogrammetric methods allow the extraction of detailed topographic information from these satellite images. We used tri-stereo imagery from the Pleiades platform of the CNES in combination with Euclidium software for image orientation, and Micmac software for dense matching, to develop state-of-the-art, 2 m resolution digital elevation models (DEMs) for eight areas in Greece. Here, we present our mapping results for an area in the eastern Gulf of Corinth, which contains one of the most extensive and well-preserved flights of marine terraces world-wide. The spatial extent of the terraces has been determined by an iterative combination of an automated surface classification model for terrain slope and roughness and qualitative assessment of satellite imagery, DEM hillshade maps, slope maps, and detailed topographic analyses of profiles and contours. We determined marine terrace shoreline angles by means of swath profiles running perpendicular to the paleo-seacliffs, using the graphical interface TerraceM. Our analysis provided minimum and maximum estimates of the paleoshoreline location on ~750 swath profiles, using the present-day cliff slope as an approximation for its paleo-cliff counterpart. After correlating the marine terraces laterally, we obtained 16 different terrace levels, recording Quaternary sea-level highstands of both major interglacial and several interstadial periods. Our high-resolution Pleiades DEMs and improved method for paleoshoreline determination allowed us to produce a marine terrace map of unprecedented detail, containing more terrace sub-levels than hitherto. Our mapping demonstrates that we are no longer limited by the

  24. Sensitivity of watershed attributes to spatial resolution and interpolation method of LiDAR DEMs in three distinct landscapes

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.; Jamieson, R.; Sterling, S.

    2014-03-01

    This study investigates scaling relationships of watershed area and stream networks delineated from LiDAR DEMs. The delineations are tested against spatial resolution, including 1, 5, 10, 25, and 50 m, and interpolation method, including Inverse Distance Weighting (IDW), Moving Average (MA), Universal Kriging (UK), Natural Neighbor (NN), and Triangular Irregular Networks (TIN). Study sites include Mosquito Creek, Scotty Creek, and Thomas Brook, representing landscapes with high, low, and moderate change in elevation, respectively. Results show scale-dependent irregularities in watershed area due to spatial resolution at Thomas Brook and Mosquito Creek. The highest sensitivity of watershed area to spatial resolution occurred at Scotty Creek, due to high incidence of LiDAR sensor measurement error and subtle changes in elevation. Length of drainage networks did not show a scaling relationship with spatial resolution, due to algorithmic complications of the stream initiation threshold. Stream lengths of main channels at Thomas Brook and Mosquito Creek displayed systematic increases in length with increasing spatial resolution, described through an average fractal dimension of 1.059. The scaling relationship between stream length and DEM resolution allows estimation of stream lengths from low-resolution DEMs in the absence of high-resolution DEMs. Single stream validation at Thomas Brook showed the 1 m DEM produced the lowest length error and highest spatial accuracy, at 3.7% and 71.3%, respectively. Single stream validation at Mosquito Creek showed the 25 m DEM produced the lowest length error, and the 1 m DEM the highest spatial accuracy, at 0.6% and 61.0%, respectively.
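
    The scaling relationship quoted above can be estimated with a simple log-log regression, since L(r) ~ r^(1 - D) implies log L is linear in log r. A minimal sketch (the resolution/length pairs are illustrative, not the study's data):

      import numpy as np

      res = np.array([1.0, 5.0, 10.0, 25.0, 50.0])            # DEM resolution (m)
      length = np.array([5210., 4880., 4705., 4420., 4230.])  # mapped stream length (m)

      slope, intercept = np.polyfit(np.log(res), np.log(length), 1)
      D = 1.0 - slope        # fractal dimension from L(r) = c * r**(1 - D)
      print(f"fractal dimension D = {D:.3f}")

      # the same fit extrapolates stream length at an unmapped resolution
      predict = lambda r: np.exp(intercept) * r ** slope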

  25. Evaluating the influence of spatial resolutions of DEM on watershed runoff and sediment yield using SWAT

    NASA Astrophysics Data System (ADS)

    Reddy, A. Sivasena; Reddy, M. Janga

    2015-10-01

    The digital elevation model (DEM) of a watershed forms a key basis for hydrologic modelling, and its resolution plays a key role in the accurate prediction of various hydrological processes. This study appraises the effect of DEMs with varied spatial resolutions (namely TOPO 20 m, CARTO 30 m, ASTER 30 m, SRTM 90 m, GEO-AUS 500 m and USGS 1000 m) on the hydrological response of a watershed using the Soil and Water Assessment Tool (SWAT), applied to a case study of the Kaddam watershed in India for estimating runoff and sediment yield. The results of the case study show that reach lengths, reach slopes, minimum and maximum elevations, sub-watershed areas, land use mapping areas within the sub-watersheds and the number of HRUs varied substantially with DEM resolution, which consequently resulted in considerable variability in estimated daily runoff and sediment yields. Daily runoff values increased (decreased) on low (high) rainfall days with coarser DEM resolution, and the daily sediment yield from each sub-watershed decreased with coarser DEM resolution. The study found that the performance of SWAT model predictions was not much influenced by finer DEM resolutions up to 90 m for the estimation of runoff, but DEM resolution certainly influenced the estimation of sediment yields. The TOPO 20 m and CARTO 30 m DEMs provided better estimates of sub-watershed areas, runoff and sediment yield values than the other DEMs.

  26. A practical method for SRTM DEM correction over vegetated mountain areas

    NASA Astrophysics Data System (ADS)

    Su, Yanjun; Guo, Qinghua

    2014-01-01

    Digital elevation models (DEMs) are essential to various applications in topography, geomorphology, hydrology, and ecology. The Shuttle Radar Topographic Mission (SRTM) DEM data set is one of the most complete and most widely used DEM data sets; it provides accurate information on elevations over bare land areas. However, the accuracy of SRTM data over vegetated mountain areas is relatively low, as a result of the high relief and the penetration limitation of the C-band used for obtaining global DEM products. The objective of this study is to assess the performance of SRTM DEMs and correct them over vegetated mountain areas with small-footprint airborne Light Detection and Ranging (Lidar) data, which can produce elevation products and vegetation products [e.g., vegetation height, Leaf Area Index (LAI)] of high accuracy. The assessment results show that SRTM elevations are systematically higher than those of the actual land surfaces over vegetated mountain areas. The mean difference between the SRTM DEM and the Lidar DEM increases with vegetation height, whereas the standard deviation of the difference increases with slope. To improve the accuracy of the SRTM DEM over vegetated mountain areas, a regression model between the SRTM elevation bias and vegetation height, LAI, and slope was developed based on one control site. Without changing any coefficients, this model was proved to be applicable in all nine study sites, which have various topography and vegetation conditions. The mean bias of the corrected SRTM DEM at the nine study sites using this model (absolute value) is 89% smaller than that of the original SRTM DEM, and the standard deviation of the corrected SRTM elevation bias is 11% smaller.
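
    The correction described above amounts to fitting a bias model and subtracting its prediction. A minimal sketch assuming a linear form in vegetation height, LAI and slope (the paper fits its regression to Lidar data; the linear form and least-squares fit here are illustrative assumptions):

      import numpy as np

      def fit_bias_model(veg_height, lai, slope, bias):
          """Least-squares coefficients for bias ~ a*h + b*LAI + c*slope + d."""
          X = np.column_stack([veg_height, lai, slope, np.ones_like(bias)])
          coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
          return coef

      def correct_srtm(srtm, veg_height, lai, slope, coef):
          """Subtract the predicted bias from the SRTM grid."""
          X = np.column_stack([veg_height.ravel(), lai.ravel(),
                               slope.ravel(), np.ones(srtm.size)])
          return srtm - (X @ coef).reshape(srtm.shape)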

  27. Production of Optimized DEM Using IDW Interpolation Method (Case Study: Jam and Riz Basin-Assaloyeh)

    NASA Astrophysics Data System (ADS)

    Soleimani, K.; Modallaldoust, S.

    In this research, the preparation of an optimized digital elevation model (DEM) of the Jam and Riz basin was studied using Inverse Distance Weighting (IDW) and GIS techniques. The performance of the IDW method depends on several factors, including cell size, number of neighboring points, point search radius and the power parameter. On this basis, two geostatistical methods were used to determine the point search radius: the standard ellipse and the standard deviation ellipse. With a fixed cell size of 3, representing the weighting degree of points, the rotation angle and axis lengths of the standard deviation ellipse were determined, the optimized radius of the standard ellipse was calculated statistically, and the optimized power was then derived automatically in the ArcGIS 9.2 environment. The number of neighboring points was varied over the four values 3, 5, 7 and 15, so eight digital elevation models were obtained from the processes described. Finally, digital elevation models 1 to 8 were compared with control points using a means-comparison test in the SPSS 11.5 statistical software. Although the results have similar forms, the IDW-3 model, which used seven neighboring points, has the lowest mean standard error, 0.26842, and is recommended as the optimized model.
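
    The IDW estimator being tuned above is itself only a few lines: each grid node takes a weighted mean of its nearest samples, with weights proportional to an inverse power of distance. A minimal sketch (brute-force neighbour search and the default power are assumptions made for brevity):

      import numpy as np

      def idw_grid(xy, z, grid_x, grid_y, power=2.0, k=7):
          """Inverse-distance-weighted gridding from scattered points (xy, z)."""
          xy = np.asarray(xy, float)
          z = np.asarray(z, float)
          out = np.empty((grid_y.size, grid_x.size))
          for r, gy in enumerate(grid_y):
              for c, gx in enumerate(grid_x):
                  d = np.hypot(xy[:, 0] - gx, xy[:, 1] - gy)
                  nearest = np.argsort(d)[:k]
                  dn = np.maximum(d[nearest], 1e-12)      # guard exact hits
                  w = 1.0 / dn ** power
                  out[r, c] = np.sum(w * z[nearest]) / np.sum(w)
          return out

      rng = np.random.default_rng(3)
      xy = rng.uniform(0.0, 1000.0, (200, 2))
      z = 50.0 + 0.02 * xy[:, 0] + rng.normal(0.0, 0.5, 200)   # synthetic spot heights
      gx = gy = np.arange(0.0, 1000.0, 50.0)
      dem = idw_grid(xy, z, gx, gy, k=7)   # k=7 echoes the seven neighbours of IDW-3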

  28. Evaluation of TanDEM-X elevation data for geomorphological mapping and interpretation in high mountain environments - A case study from SE Tibet, China

    NASA Astrophysics Data System (ADS)

    Pipaud, Isabel; Loibl, David; Lehmkuhl, Frank

    2015-10-01

    Digital elevation models (DEMs) are a prerequisite for many different applications in the field of geomorphology. In this context, the two near-global medium resolution DEMs originating from the SRTM and ASTER missions are widely used. For detailed geomorphological studies, particularly in high mountain environments, these datasets are, however, known to have substantial disadvantages beyond their posting, i.e., data gaps and miscellaneous artifacts. The upcoming TanDEM-X DEM is a promising candidate to improve this situation by application of state-of-the-art radar technology, exhibiting a posting of 12 m and less proneness to errors. In this study, we present a DEM processed from a single TanDEM-X CoSSC scene, covering a study area in the extreme relief of the eastern Nyainqêntanglha Range, southeastern Tibet. The potential of the resulting experimental TanDEM-X DEM for geomorphological applications was evaluated by geomorphometric analyses and an assessment of landform cognoscibility and artifacts in comparison to the ASTER GDEM and the recently released SRTM 1″ DEM. Detailed geomorphological mapping was conducted for four selected core study areas in a manual approach, based exclusively on the TanDEM-X DEM and its basic derivatives. The results show that the self-processed TanDEM-X DEM yields a detailed and widely consistent landscape representation. It thus fosters geomorphological analysis by visual and quantitative means, allowing delineation of landforms down to footprints of ~ 30 m. Even in this preliminary state, the TanDEM-X elevation data are clearly superior to the ASTER and SRTM datasets, primarily owing to their significantly higher resolution and lower susceptibility to artifacts that hamper landform interpretation. Conversely, challenges toward interferometric DEM generation were identified, including (i) triangulation facets and missing topographic information resulting from radar layover on steep slopes facing toward the radar sensor, (ii) low

  29. Development of a coupled discrete element (DEM)-smoothed particle hydrodynamics (SPH) simulation method for polyhedral particles

    NASA Astrophysics Data System (ADS)

    Nassauer, Benjamin; Liedke, Thomas; Kuna, Meinhard

    2016-03-01

    In the present paper, the direct coupling of a discrete element method (DEM) with polyhedral particles and smoothed particle hydrodynamics (SPH) is presented. The two simulation techniques are fully coupled in both ways through interaction forces between the solid DEM particles and the fluid SPH particles. Thus this simulation method provides the possibility to simulate the individual movement of polyhedral, sharp-edged particles as well as the flow field around these particles in fluid-saturated granular matter which occurs in many technical processes e.g. wire sawing, grinding or lapping. The coupled method is exemplified and validated by the simulation of a particle in a shear flow, which shows good agreement with analytical solutions.

  30. Shoreline Mapping with Integrated HSI-DEM using Active Contour Method

    NASA Astrophysics Data System (ADS)

    Sukcharoenpong, Anuchit

    Shoreline mapping has been a critical task for federal/state agencies and coastal communities. It supports important applications such as nautical charting, coastal zone management, and legal boundary determination. Current attempts to incorporate data from hyperspectral imagery to increase the efficiency and efficacy of shoreline mapping have been limited due to the complexity of processing its data as well as its inferior spatial resolution when compared to multispectral imagery or to sensors such as LiDAR. As advancements in remote-sensing technologies increase sensor capabilities, the ability to exploit the spectral information carried in hyperspectral images becomes more imperative. This work employs a new approach to extracting shorelines from AVIRIS hyperspectral images in combination with a LiDAR-based DEM using a multiphase active contour segmentation technique. Several techniques, such as study of object spectra and knowledge-based segmentation for initial contour generation, have been employed in order to achieve sub-pixel accuracy and maintain low computational expense. Introducing a DEM into hyperspectral image segmentation proves to be a useful tool for eliminating misclassifications and improving shoreline positional accuracy. Experimental results show that mapping shorelines from hyperspectral imagery and a DEM is a promising approach, as many further applications can be developed to exploit the rich information found in hyperspectral imagery.

  13. A Screening Method for Flash Flooding Risk using Instantaneous Unit Hydrographs Derived from High Resolution DEM data

    NASA Astrophysics Data System (ADS)

    Senevirathne, Nalin; Willgoose, Garry

    2015-04-01

    Flash flooding is considered a severe natural hazard and has had significant impacts on people and infrastructure throughout history. Modelling techniques and the understanding of flash flooding are improving with the availability of better quality data such as high resolution Digital Elevation Models (DEMs). DEMs allow the automated characterization of the influence of geomorphology on the hydrologic response of catchments. They are particularly useful for small ungauged catchments where available hydrologic data (e.g. rainfall, runoff) are sparse and where site-specific studies are rarely done unless some evidence of high risk is available. In this paper, we present new risk indicators, derived directly from instantaneous unit hydrographs (IUH), which can be used to identify flash flooding risk areas within catchments. The study area includes 35 major river basins covering a 1700 km long by 50 km wide coastal strip of eastern Australia. Standard terrain analysis methods (pit filling, flow direction, local slope, contributing area, flow velocity and travel time) were used to produce IUHs for every pixel in the study area using a high resolution (1 arc-second) DEM. When computing the IUHs, each pixel was considered as the outlet of its own catchment bounded by its contributing area. This allows us to characterise the hydrological response at the finest scale possible for a DEM. Risk indicators related to the rate of storm rise and catchment lag time were derived from the IUHs. Flash flood risk maps were produced at the catchment scale, and they match well with data from the severe flash flooding that occurred around Toowoomba (at the northern end of the coastal strip studied) in January 2011.
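
    The exact indicator definitions are not given in the abstract; a minimal sketch of the general workflow, binning per-cell travel times into an IUH and reading off simple rise indicators, might look as follows. The bin width, the indicator formulas and all names (iuh_from_travel_times, flashiness_indicators) are assumptions for illustration.

        import numpy as np

        def iuh_from_travel_times(travel_time, dt=300.0):
            # Build an IUH from a per-cell travel-time grid (seconds) of all
            # cells draining to one outlet; normalise to unit area.
            t = travel_time[np.isfinite(travel_time)].ravel()
            nbins = int(np.ceil(t.max() / dt))
            counts, edges = np.histogram(t, bins=nbins, range=(0.0, nbins * dt))
            iuh = counts / (counts.sum() * dt)
            return iuh, edges

        def flashiness_indicators(iuh, edges):
            # Illustrative risk indicators: time to peak and a storm-rise proxy.
            ipeak = int(np.argmax(iuh))
            time_to_peak = 0.5 * (edges[ipeak] + edges[ipeak + 1])
            rate_of_rise = iuh[ipeak] / time_to_peak
            return time_to_peak, rate_of_rise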

  14. 2D Distinct Element Method (DEM) models of the initiation, propagation and saturation of rock joints

    NASA Astrophysics Data System (ADS)

    Arslan, A.; Schöpfer, M. P.; Walsh, J. J.; Childs, C.

    2009-12-01

    In layered sequences, rock joints usually best develop within the more brittle layers and commonly display a regular spacing that scales with layer thickness. A variety of conceptual and mechanical models have been developed for these observations. A limitation of previous approaches, however, is that fracture initiation and associated interface slip are not explicitly simulated; instead, fractures were predefined and interfaces were welded. To surmount this problem, we have modelled the formation and growth of joints in layered sequences by using the two-dimensional Distinct Element Method (DEM) as implemented in the Particle Flow Code (PFC-2D). In PFC-2D, rock is represented by an assemblage of circular particles that are bonded at particle-particle contacts. Failure occurs if either the tensile or shear strength of a bond is exceeded. The models comprise a central brittle layer with high Young’s modulus, which is embedded in a low Young’s modulus matrix. The interfaces between the layers are defined by ‘smooth joint’ contacts, a modelling feature that eliminates interparticle bumpiness and associated interlocking friction. Consequently, this feature allows the user to assign macroscopic properties such as friction and cohesion along layer interfaces in a controlled manner. Layer parallel extension is applied by assigning a velocity to particles at the lateral boundaries of the model while maintaining a constant vertical confining pressure. Models were extended until joint saturation in the central layer was reached. We thereby explored the impact of confining pressure and interface properties (friction, cohesion) on joint spacing. A number of important conclusions can be drawn from our models: (i) The distributions of average horizontal normal stress within the layer and of shear stress at the interface are consistent with analytical solutions (stress-transfer theory). (ii) At low interfacial shear strength, new joints form preferentially midway between

  15. A method of detecting land use change of remote sensing images based on texture features and DEM

    NASA Astrophysics Data System (ADS)

    Huang, Dong-ming; Wei, Chun-tao; Yu, Jun-chen; Wang, Jian-lin

    2015-12-01

    In this paper, a combination method using neural networks and texture information is proposed for remote sensing image classification. The methodology involves the extraction of texture features using the grey-level co-occurrence matrix and image classification with a BP (back-propagation) artificial neural network. Texture features combined with the digital elevation model (DEM) were used as classification bands in the neural network to recognize the different classes. This scheme shows high recognition accuracy in the classification of remote sensing images. In the experiments, the proposed method was successfully applied to remote sensing image classification and land use change detection, and its effectiveness was verified.
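
    As an illustration of the texture-extraction step, the sketch below computes three common grey-level co-occurrence matrix (GLCM) features (contrast, energy, homogeneity) for a single pixel offset in plain NumPy; in a scheme like the one described, such features stacked with the DEM would form the input bands of the BP network. The quantisation level and function name are assumptions.

        import numpy as np

        def glcm_features(band, levels=16, offset=(0, 1)):
            # Grey-level co-occurrence features for one pixel offset (dy, dx).
            q = np.floor(band.astype(float) / (band.max() + 1e-9) * levels)
            q = np.clip(q.astype(int), 0, levels - 1)
            dy, dx = offset
            a = q[:q.shape[0] - dy, :q.shape[1] - dx]
            b = q[dy:, dx:]
            P = np.zeros((levels, levels))
            np.add.at(P, (a.ravel(), b.ravel()), 1.0)   # co-occurrence counts
            P /= P.sum()
            i, j = np.indices(P.shape)
            contrast    = ((i - j) ** 2 * P).sum()
            energy      = (P ** 2).sum()
            homogeneity = (P / (1.0 + np.abs(i - j))).sum()
            return contrast, energy, homogeneity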

  16. High-resolution DEMs in the study of rainfall- and earthquake-induced landslides: Use of a variable window size method in digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Iwahashi, Junko; Kamiya, Izumi; Yamagishi, Hiromitsu

    2012-06-01

    We undertake digital terrain analyses of rainfall- and earthquake-induced landslides in Japan, using high-resolution orthoimagery and Light Detection and Ranging (LiDAR) DEMs. Our aims are twofold: to demonstrate an effective method for dealing with high-resolution DEMs, which are often too detailed for landslide assessments, and to evaluate the topographic differences between rainfall- and earthquake-induced landslides. The study areas include the Izumozaki (1961 and 2004 heavy rainfalls), Niihama (2004 heavy rainfalls), Houfu (2009 heavy rainfalls), and Hanokidachi/Kurikoma-dam regions (the 2008 M 7.2 Iwate-Miyagi Nairiku earthquake); these five regions contain 7,106 landslides in total. We use two topographic attributes (the slope gradient and the Laplacian) calculated from the DEMs in varying window sizes. The hit rates for statistical prediction of landslide cells through discriminant analyses are calculated using the two topographic attributes as explanatory variables and the landslide inventory data as the dependent variable. In cases of surface failure, the hit rates are found to diminish when the window size of the topographic attributes is too large or too small, indicating that an optimal scale factor is key in assessing shallow landslides. The representative window sizes are approximately 30 m for shallow landslides; the optimal window size may be directly related to the average size of landslides in each region. We also find a stark contrast between rainfall- and earthquake-induced landslides. Rainfall-induced landslides are consistently most common at a slope gradient of 30°, whereas the frequency of earthquake-induced landslides increases exponentially with slope gradient. We find that the Laplacian, i.e., surface convexity and concavity, and the slope gradient are both important factors for rainfall-induced landslides, whereas earthquake-induced landslides are influenced mainly by slope steepness.
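
    The paper's exact windowing scheme is not reproduced in the abstract; one simple reading is to take central differences at a distance of k cells, so that k = 15 on a 1 m DEM corresponds roughly to the 30 m window mentioned above. The sketch below, with assumed names, follows that reading for both attributes.

        import numpy as np

        def slope_and_laplacian(dem, cellsize, k=1):
            # Slope (degrees) and Laplacian from central differences taken at
            # distance k cells, i.e. over a (2k+1) x (2k+1) window.
            z = np.pad(dem, k, mode='edge')
            c = z[k:-k, k:-k]
            n, s = z[:-2 * k, k:-k], z[2 * k:, k:-k]
            w, e = z[k:-k, :-2 * k], z[k:-k, 2 * k:]
            d = 2.0 * k * cellsize
            dzdx, dzdy = (e - w) / d, (s - n) / d
            slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
            lap = (n + s + e + w - 4.0 * c) / (k * cellsize) ** 2
            return slope_deg, lap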

  17. Effective Thermal Property Estimation of Unitary Pebble Beds Based on a CFD-DEM Coupled Method for a Fusion Blanket

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Youhua; Huang, Kai; Liu, Songlin

    2015-12-01

    Lithium ceramic pebble beds have been considered in the solid blanket design for fusion reactors. To characterize the thermal performance of the fusion solid blanket, studies of the effective thermal properties of the pebble beds, i.e. the effective thermal conductivity and the heat transfer coefficient, are necessary. In this paper, a 3D computational fluid dynamics-discrete element method (CFD-DEM) coupled numerical model was proposed to simulate heat transfer and thereby estimate the effective thermal properties. The DEM was applied to produce the geometric topology of a prototypical blanket pebble bed by directly simulating the contact state of each individual particle using basic interaction laws. Based on this geometric topology, a CFD model was built to analyze the temperature distribution and obtain the effective thermal properties. The numerical model was shown to be in good agreement with the existing experimental data for effective thermal conductivity available in the literature. This work was supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy of China (Nos. 2013GB108004, 2015GB108002, 2014GB122000 and 2014GB119000) and the National Natural Science Foundation of China (No. 11175207).

  18. (Yet) A New Method for the Determination of Flow Directions and Contributing Areas over Gridded DEMs

    NASA Astrophysics Data System (ADS)

    Shelef, E.; Hilley, G. E.

    2012-12-01

    over-dispersive routing schemes. We used these, as well as complementary methods that prescribe discharge to specific cells, to evaluate these flow-routing schemes. Additionally, we compared results from the different methods to ALSM topography to evaluate areas of the landscape where the results of different flow-routing methods diverge. Finally, we evolved modeled topography using each of these methods and found substantive differences in the structure of the modeled topography depending on the flow-routing scheme used. None of these methods directly addresses the physics of flow over the surface, and as such they are ad hoc representations of surface flow. It is therefore difficult to determine which of these methods is most appropriate without benchmarking against more physically based (and computationally expensive) flow-routing models. Nonetheless, the proposed method accomplishes the necessary dispersion of flow over a topographic surface while maintaining consistency with the gridded coordinate system, and is thus preferable from this standpoint.
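
    The authors' dispersive scheme itself is not specified in this excerpt; as a point of reference, the sketch below implements the classic single-direction D8 contributing-area computation on a pit-free DEM, processing cells from highest to lowest so that every donor is accumulated before its receiver. Names are illustrative.

        import numpy as np

        D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

        def d8_contributing_area(dem):
            # Contributing area (in cells) with D8 routing on a pit-free DEM.
            rows, cols = dem.shape
            area = np.ones((rows, cols))
            # Visit cells from highest to lowest elevation.
            order = np.dstack(np.unravel_index(
                np.argsort(dem, axis=None)[::-1], dem.shape))[0]
            for r, c in order:
                best, steepest = None, 0.0
                for dr, dc in D8:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        s = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if s > steepest:
                            steepest, best = s, (rr, cc)
                if best is not None:          # pass accumulated area downslope
                    area[best] += area[r, c]
            return area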

  19. A hierarchical pyramid method for managing large-scale high-resolution drainage networks extracted from DEM

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Li, Tiejian; Huang, Yuefei; Li, Jiaye; Wang, Guangqian; Yin, Dongqin

    2015-12-01

    The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to derive high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management, particularly for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which iteratively generates coarsened vector drainage networks from the original. The method is based on the Horton-Strahler (H-S) ordering scheme. At each coarsening step, the river reaches with the lowest H-S order are pruned and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin, extracted from a 1-m-resolution airborne LiDAR DEM, and to the full Yangtze River basin in China, extracted from the 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, whose data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
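
    A minimal sketch of one coarsening step, assuming the network is held as a dictionary mapping each reach to its upstream reaches plus a dictionary of sub-basin areas: compute Horton-Strahler orders recursively, then prune order-1 reaches and merge their areas into the reach they drain to. The paper's inheritance formulas are not reproduced; the simple area merge stands in for them.

        def strahler_orders(children, reach):
            # Horton-Strahler order of every reach upstream of (and including) 'reach'.
            ups = children.get(reach, [])
            if not ups:
                return {reach: 1}
            orders = {}
            for u in ups:
                orders.update(strahler_orders(children, u))
            top = sorted((orders[u] for u in ups), reverse=True)
            # Order increases only where two tributaries of equal max order meet.
            orders[reach] = top[0] + 1 if len(top) > 1 and top[0] == top[1] else top[0]
            return orders

        def prune_lowest_order(children, outlet, area):
            # One pyramid step: drop order-1 reaches, merging their sub-basin areas.
            orders = strahler_orders(children, outlet)
            pruned = {}
            for reach, ups in children.items():
                if orders[reach] == 1:
                    continue                         # reach itself is removed
                for u in ups:
                    if orders[u] == 1:
                        area[reach] += area.pop(u)   # inherit drainage area
                pruned[reach] = [u for u in ups if orders[u] > 1]
            return pruned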

  20. ASTER DEM performance

    USGS Publications Warehouse

    Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.

    2005-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability, using its near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one for nadir viewing and another for backward viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out by Japanese and U.S. science teams separately, using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. The geolocation accuracy, which is important for the DEM datasets, is better than 50 m; this appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when using GCPs to generate the stereo data. © 2005 IEEE.

  1. Evaluation of morphometric parameters of drainage networks derived from topographic maps and DEM in point of floods

    NASA Astrophysics Data System (ADS)

    Ozdemir, Hasan; Bird, Deanne

    2009-02-01

    An evaluation of morphometric parameters of two drainage networks derived from different sources was carried out to determine the influence of sub-basins on flooding of the main channel in the Havran River basin (Balıkesir, Turkey). Drainage networks for the sub-basins were derived both from topographic maps at 1:25,000 scale and from a 10-m resolution digital elevation model (DEM) using geographic information systems (GIS). Blue lines representing fluvial channels on the topographic maps were accepted as a drainage network, which does not depict all exterior links in the basin. The second drainage network was extracted from the DEM using a minimum accumulation area threshold to include all exterior links. Morphometric parameters were applied to the two types of drainage networks at sub-basin level. These parameters were used to assess the influence of the sub-basins on the main channel with respect to flooding. The results show that the drainage network of sub-basin 4—where a dam was constructed at its outlet to mitigate potential floods—has a lower morphometric influence on probable floods on the main channel than those of sub-basins 1, 3, and 5. The construction of the dam will help reduce flooding of the main channel from sub-basin 4, but it will not prevent potential flooding from sub-basins 1, 3 and 5, which join the main channel downstream of sub-basin 4. Therefore, flood mitigation efforts should be considered in order to protect the settlement and agricultural lands on the floodplain downstream of the dam. In order to increase our understanding of flood hazards, and to determine appropriate mitigation solutions, drainage morphometry research should be included as an essential component of hydrologic studies.

  2. Evaluating DEM conditioning techniques, elevation source data, and grid resolution for field-scale hydrological parameter extraction

    NASA Astrophysics Data System (ADS)

    Woodrow, Kathryn; Lindsay, John B.; Berg, Aaron A.

    2016-09-01

    Although digital elevation models (DEMs) prove useful for a number of hydrological applications, they are often the end result of numerous processing steps, each of which contains uncertainty. These uncertainties have the potential to greatly influence DEM quality and to propagate further to DEM-derived attributes, including derived surface and near-surface drainage patterns. This research examines the impacts of DEM grid resolution, elevation source data, and conditioning techniques on the spatial and statistical distributions of field-scale hydrological attributes for a 12,000 ha watershed in an agricultural area of southwestern Ontario, Canada. Three conditioning techniques were examined: depression filling (DF), depression breaching (DB), and stream burning (SB). The catchments draining to each boundary of 7933 agricultural fields were delineated using the surface drainage patterns modeled from LiDAR data interpolated to 1 m, 5 m, and 10 m resolution DEMs, and from a 10 m resolution photogrammetric DEM. The results showed that variation in DEM grid resolution resulted in significant differences in the spatial and statistical distributions of contributing areas and of downslope flowpath lengths. Degrading the grid resolution of the LiDAR data from 1 m to 10 m resulted in a disagreement in mapped contributing areas of between 29.4% and 37.3% of the study area, depending on the DEM conditioning technique. The disagreements among the field-scale contributing areas mapped from the 10 m LiDAR DEM and the photogrammetric DEM were large, with nearly half of the study area draining to alternate field boundaries. Differences in derived contributing areas and flowpaths among the various conditioning techniques increased substantially at finer grid resolutions, with the largest disagreement among mapped contributing areas occurring between the 1 m resolution DB DEM and the SB DEM (37% disagreement) and the DB-DF comparison (36.5% disagreement in mapped
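
    Of the three conditioning techniques compared, stream burning is the simplest to sketch: lower the DEM along rasterised channel cells so that modelled flow follows the mapped network, here with a linear taper to avoid cliff artifacts at the channel edges. The drop depth, taper width and function name are assumptions, not the study's settings; depression filling and breaching modify the DEM by other rules.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def burn_streams(dem, stream_mask, drop=5.0, taper_cells=10):
            # Stream burning (SB): subtract a drop that decays linearly with
            # distance from the mapped stream cells, so channels become the
            # locally lowest cells without creating vertical walls.
            dist = distance_transform_edt(~stream_mask)     # 0 on stream cells
            taper = np.clip(1.0 - dist / taper_cells, 0.0, 1.0)
            return dem - drop * taper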

  3. The evaluation of unmanned aerial system-based photogrammetry and terrestrial laser scanning to generate DEMs of agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Ouédraogo, Mohamar Moussa; Degré, Aurore; Debouche, Charles; Lisein, Jonathan

    2014-06-01

    Agricultural watersheds tend to be places of intensive farming activities that permanently modify their microtopography. The surface characteristics of the soil vary depending on the crops that are cultivated in these areas. Agricultural soil microtopography plays an important role in the quantification of runoff and sediment transport because the presence of crops, crop residues, furrows and ridges may affect the direction of water flow. To better assess such phenomena, 3-D reconstructions of high-resolution agricultural watershed topography are essential. Fine-resolution topographic data collection technologies can be used to discern highly detailed elevation variability in these areas. Knowledge of the strengths and weaknesses of existing technologies used for data collection on agricultural watersheds may be helpful in choosing an appropriate technology. This study assesses the suitability of terrestrial laser scanning (TLS) and unmanned aerial system (UAS) photogrammetry for collecting the fine-resolution topographic data required to generate accurate, high-resolution digital elevation models (DEMs) of a small watershed area (12 ha). Because of farming activity, 14 TLS scans (≈ 25 points per m2) were collected without using high-definition surveying (HDS) targets, which are generally used to mesh adjacent scans. To evaluate the accuracy of the DEMs created from the TLS scan data, 1098 ground control points (GCPs) were surveyed using a real-time kinematic global positioning system (RTK-GPS). Linear regressions were then applied to each DEM to remove vertical errors from the TLS point elevations: errors caused by the non-perpendicularity of the scanner's vertical axis to the local horizontal plane, and errors correlated with the distance to the scanner's position. The scans were then meshed to generate a DEM (DEM-TLS) with a 1 × 1 m spatial resolution. The Agisoft PhotoScan and MicMac software packages were used to process the aerial photographs and generate a DEM-PSC

  4. Numerical slope stability simulations of chasma walls in Valles Marineris/Mars using a distinct element method (DEM).

    NASA Astrophysics Data System (ADS)

    Imre, B.

    2003-04-01

    The 8- to 10-km depths of Valles Marineris (VM) offer excellent views into the upper Martian crust. Layering, fracturing, lithology, stratigraphy and the content of volatiles have influenced the evolution of the Valles Marineris wall slopes, but these parameters also reflect the development of VM and its wall slopes. The scope of this work is to gain understanding of these parameters by back-simulating the development of the wall slopes. For that purpose, the two-dimensional Particle Flow Code PFC2D has been chosen (ITASCA, version 2.00-103). PFC2D is a distinct element code for numerical modelling of the movements and interactions of assemblies of arbitrarily sized circular particles. Particles may be bonded together to represent a solid material, and their movements are unlimited. That is of importance because results of open systems with numerous unknown variables are non-unique and therefore highly path dependent. This DEM allows the simulation of whole development paths of VM walls, which makes confirmation of the model more complete (e.g. Oreskes et al., Science 263, 1994). To reduce the number of unknown variables, a proper (that means as simple as possible) field site had to be selected: the northern wall of eastern Candor Chasma. This wall is up to 8 km high and represents a significant outcrop of the upper Martian crust; it is relatively uncomplex, well aligned and of simple morphology. Currently the work on the model is at the stage of the parameter study. Results will be presented via poster at the EGS meeting.

  5. Terrain Classification of ASTER gDEM for Seismic Microzonation of Port-au-Prince, Haiti, Using Pixel- and Object-Based Analytic Methods

    NASA Astrophysics Data System (ADS)

    Yong, A.; Hough, S. E.; Cox, B. R.; Rathje, E. M.; Bachhuber, J.; Hulslander, D.; Christiansen, L.; Abrams, M.

    2010-12-01

    The aftermath of the M7.0 Haiti earthquake of 12 January 2010 witnessed an impressive scientific response from the international community. In addition to conventional post-earthquake investigations, there was also an unprecedented reliance on remote-sensing technologies for scientific investigation and damage assessment. These technologies include sensors on both aerial and space-borne observational platforms. As part of the Haiti earthquake response and recovery effort, we develop a seismic zonation map of Port-au-Prince based on high-resolution satellite imagery as well as data from traditional seismographic monitoring stations and geotechnical site characterizations. Our imagery consists of a global digital elevation model (gDEM) of Hispaniola derived from data recorded by NASA-JPL's Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the multi-platform satellite Terra. To develop our model we also consider recorded waveforms from portable seismographic stations (Hough et al., in review) and 36 geotechnical shear-wave velocity surveys (Cox et al., in review). Following the approach developed by Yong et al. (2008; Bull. Seism. Soc. Am.), we use both pixel- and object-based imaging analytic methods to systematically identify and extract local terrain features that are expected to amplify seismic ground motion. Using histogram-stretching techniques applied to the gDEM values, followed by multi-resolution segmentation of the imagery into terrain types, we systematically classify the terrains of Hispaniola. By associating the available Vs30 values (average shear-wave velocity in the upper 30 m) calculated from the MASW (Multi-channel Analysis of Surface Waves) survey method, we develop a first-order site characterization map. Our results indicate that the terrain-based Vs30 estimates are significantly associated with amplitudes recorded at station sites. We also find that the damage distribution inferred from UNOSAT

  6. Evaluating topographic and hydrologic attribute sensitivity to upscaled resolution DEMs from LIDAR data

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Santini, M.; Nardi, F.; Tarolli, P.; Grimaldi, S.

    2008-12-01

    Raster-based Digital Terrain Models (DTMs) have been used extensively to determine topographic attributes for topographically based hydrologic modelling. Several studies have reported that the hillslope hydrologic response is strongly affected by the local topography. Despite the increasing availability of fine-resolution topographic data captured by the Light Detection And Ranging (LIDAR) technique, some drawbacks arise, both from the computational point of view and because the higher detail does not match that of the other spatial inputs (e.g. land use, vegetation cover, climate, etc.). A compromise is then needed to limit the computational effort and, at the same time, make the spatial inputs homogeneous, by either downscaling the coarsest ones or upscaling the finest ones. During resampling of the original DTM, topographic details can be lost because of smoothing effects. For this reason it is necessary to investigate whether and how a coarser resolution DTM can preserve hydrologic information, which is crucial for modelling performance and reliability, as uncertainties in the inputs will propagate into the output prediction, producing biases. In this work two case studies are presented using 1 m LIDAR DTMs. A series of DTMs with grid sizes of 5, 10, and 20 m is derived from the finest (1 m) DTM by applying standard resampling methods. Several topographic and hydrologic characteristics are tested at the different grid cell sizes, e.g. the wetness index and the flow path length of the main channel, in order to test the changes in the lag time between precipitation and peak discharge, which result in different hydrographs. Even if some of these attributes show only small differences in basin averages as DTM resolution changes, it is shown here that, unlike for lumped models, where heterogeneities are ignored, for semi-distributed and distributed models the spatial variability of the inputs can affect the results significantly.

  7. An efficient and comprehensive method for drainage network extraction from DEM with billions of pixels using a size-balanced binary search tree

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Li, Tiejian; Huang, Yuefei; Li, Jiaye; Wang, Guangqian

    2015-06-01

    With the increasing resolution of digital elevation models (DEMs), computational efficiency problems have been encountered when extracting the drainage network of a large river basin at billion-pixel scales. The efficiency of the most time-consuming step, the depression-filling pretreatment, has been improved by using the O(N log N) complexity least-cost path search method, but the complete extraction steps following this method had not previously been proposed and tested. In this paper, an improved O(N log N) algorithm is proposed, which introduces a size-balanced binary search tree (BST) to further improve the efficiency of the depression-filling pretreatment. The subsequent extraction steps, including flow direction determination and upslope area accumulation, were also redesigned to benefit from this improvement, yielding an efficient and comprehensive method. The method was tested by extracting the drainage networks of 31 river basins with areas greater than 500,000 km2 from the 30-m-resolution ASTER GDEM and of two sub-basins with areas of approximately 1000 km2 from a 1-m-resolution airborne LiDAR DEM. Complete drainage networks with both vector features and topographic parameters were obtained with time consumption of O(N log N) complexity. The results indicate that the developed method can extract entire drainage networks from DEMs with billions of pixels with high efficiency.
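
    The size-balanced BST itself is not reproduced here; the sketch below shows the equivalent priority-flood idea using Python's binary-heap module, which also achieves the O(N log N) bound: grow inward from the DEM edge, always expanding the lowest frontier cell and raising any lower neighbour to the spill elevation. The function name is an assumption.

        import heapq
        import numpy as np

        def priority_flood_fill(dem):
            # Priority-flood depression filling in O(N log N) with a binary heap
            # (a size-balanced BST can play the same priority-queue role).
            rows, cols = dem.shape
            filled = dem.astype(float).copy()
            closed = np.zeros((rows, cols), dtype=bool)
            heap = []
            for r in range(rows):                     # seed with all edge cells
                for c in range(cols):
                    if r in (0, rows - 1) or c in (0, cols - 1):
                        heapq.heappush(heap, (filled[r, c], r, c))
                        closed[r, c] = True
            while heap:
                z, r, c = heapq.heappop(heap)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                               (-1, -1), (-1, 1), (1, -1), (1, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and not closed[rr, cc]:
                        closed[rr, cc] = True
                        filled[rr, cc] = max(filled[rr, cc], z)  # raise pit cells
                        heapq.heappush(heap, (filled[rr, cc], rr, cc))
            return filled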

  8. A Comparison of Elevation Between InSAR DEM and Reference DEMs

    NASA Astrophysics Data System (ADS)

    Yun, Ye; Zeng, Qiming; Jiao, Jian; Yan, Dapeng; Liang, Cunren; Wang, Qing; Zhou, Xiao

    2013-01-01

    Introduction: (1) DEM generation: spaceborne SAR interferometry is one of the methods for generating digital elevation models (DEMs). (2) Common methods of DEM generation: repeat-pass interferometry with the same antenna (e.g. ERS-1/2); single-pass interferometry (e.g. SRTM); stereopair geometry (e.g. SPOT and ASTER); and combination of air photographs, satellite images, topographic maps and field measurements (e.g. NGCC, the National Geomatics Center of China, which has completed the establishment of 1:50,000 topographic databases of China). (3) Purpose of this study: to compare DEMs derived from ERS-1/2 with those from the common methods, by comparing the tandem DEM with reference DEMs, namely the SRTM DEM, ASTER GDEM and NGCC DEM. Qualitative and quantitative assessments of the elevations were used to estimate the differences.

  9. The role of method of production and resolution of the DEM on slope-units delineation for landslide susceptibility assessment - Ubaye Valley, French Alps case study

    NASA Astrophysics Data System (ADS)

    Schlögel, Romy; Marchesini, Ivan; Alvioli, Massimiliano; Reichenbach, Paola; Rossi, Mauro; Malet, Jean-Philippe

    2016-04-01

    Landslide susceptibility assessment forms the basis of hazard mapping, which is one of the essential parts of quantitative risk mapping. For the same study area, different susceptibility maps can be obtained depending on the susceptibility mapping method, the mapping unit, and the scale. In the Ubaye Valley (South French Alps), we investigate the effect of the resolution and method of production of the DEM on the delineation of slope units for landslide susceptibility mapping. Slope-unit delineation was processed using multiple combinations of circular variance and minimum area size values, which are the input parameters of a new software package for terrain partitioning. The method takes into account homogeneity of aspect direction inside each unit and inhomogeneity between different units. We computed slope-unit delineations for 5, 10 and 25 m resolution DEMs and investigated the statistical distributions of morphometric variables within the different polygons. Then, for each slope-unit partitioning, we calibrated a landslide susceptibility model, considering landslide bodies and scarps as the dependent variable (binary response). This work aims to analyse the role of DEM resolution in slope-unit delineation for landslide susceptibility assessment. The area under the receiver operating characteristic curve is investigated for the susceptibility model calculations. In addition, we analysed the performance of the logistic regression model further by looking at the percentage of significant variables in the statistical analyses. Results show that smaller slope units have a better chance of containing a smaller number of thematic and morphometric variables, allowing for an easier classification. The reliability of the models according to the DEM resolution, as well as the use of scarp areas versus landslide bodies as the dependent variable, is discussed.

  10. On the investigation of the performances of a DEM-based hydrogeomorphic floodplain identification method in a large urbanized river basin: the Tiber river case study in Italy

    NASA Astrophysics Data System (ADS)

    Nardi, Fernando; Biscarini, Chiara; Di Francesco, Silvia; Manciola, Piergiorgio

    2013-04-01

    consequently identified as those river buffers, draining towards the channel, with an elevation that is less than the maximum flow depth of the corresponding outlet. Keeping in mind that this hydrogeomorphic model's performance is strictly related to the quality and properties of the input DEM, and that the intent of this kind of methodology is not to substitute standard flood modeling and mapping methods, in this work the performance of the approach is qualitatively evaluated by comparing its results with standard flood maps. The Tiber River basin, one of the main river basins in Italy, covering a drainage area of approximately 17,000 km2, was selected as the case study. This comparison is interesting for understanding the performance of the model in a large and complex domain where the impact of the urbanization matrix is significant. The results of this investigation confirm the potential of such DEM-based floodplain mapping models for providing a fast, timely, homogeneous and continuous inundation scenario to urban planners and decision makers, but also the drawbacks of using such a methodology where humans are significantly and rapidly modifying the surface properties.

  11. Shading-based DEM refinement under a comprehensive imaging model

    NASA Astrophysics Data System (ADS)

    Peng, Jianwei; Zhang, Yi; Shan, Jie

    2015-12-01

    This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Unlike previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effects of atmosphere, reflectance, and shading. To solve this intrinsically ill-posed problem, the least squares method and a subsequent optimization procedure are applied to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with higher fidelity and, at medium to large upscale ratios, can achieve elevation accuracy 20-30% better than conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global and local DEM product refinement.

  12. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been applied widely in groundwater potential mapping research, but in data-scarce areas it encounters many problems due to limited data. A Digital Elevation Model (DEM) is a digital representation of the topography and has applications in many fields. Previous research has shown that much of the information relevant to groundwater potential mapping (such as geological, terrain and hydrological features) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most accessible data types in GIS, was used to extract information for groundwater potential mapping in the Batter River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies: lineaments and lineament density, drainage networks and drainage density, topographic wetness index (TWI), relief, and convergence index (CI). Methods for extracting the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the groundwater potential map. The final groundwater potential map was divided into five categories: non-potential, poor, moderate, good, and excellent zones. Finally, the success rate curve was drawn and the area under the curve (AUC) computed for validation. The validation showed a success rate of 79%, confirming the method's feasibility. The method affords a new approach to groundwater management research in areas suffering from data scarcity and broadens the application domain of DEM data.
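
    A minimal sketch of the weighted-overlay step of an MCE of this kind: min-max normalise the factor rasters, combine them with weights (assumed here; the study derives them from a cumulative effects matrix), and cut the score into the five zones by quantile breaks. All names are illustrative.

        import numpy as np

        def mce_groundwater_potential(factors, weights):
            # Weighted overlay: normalise each factor raster, combine with
            # weights, then classify into five potential zones.
            score = np.zeros_like(factors[0], dtype=float)
            for f, w in zip(factors, weights):
                norm = (f - np.nanmin(f)) / (np.nanmax(f) - np.nanmin(f))
                score += w * norm
            score /= sum(weights)
            # Quantile breaks -> non-potential, poor, moderate, good, excellent.
            breaks = np.nanpercentile(score, [20, 40, 60, 80])
            return np.digitize(score, breaks), score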

  13. The Oracle of DEM

    NASA Astrophysics Data System (ADS)

    Gayley, Kenneth

    2013-06-01

    The predictions of the famous Greek oracle of Delphi were just ambiguous enough to seem to convey information, yet the user was only seeing their own thoughts. Are there ways in which X-ray spectral analysis is like that oracle? It is shown using heuristic, generic response functions to mimic actual spectral inversion that the widely known ill conditioning, which makes formal inversion impossible in the presence of random noise, also makes a wide variety of different source distributions (DEMs) produce quite similar X-ray continua and resonance-line fluxes. Indeed, the sole robustly inferable attribute for a thermal, optically thin resonance-line spectrum with normal abundances in CIE is its average temperature. The shape of the DEM distribution, on the other hand, is not well constrained, and may actually depend more on the analysis method, no matter how sophisticated, than on the source plasma. The case is made that X-ray spectra can tell us average temperature, and metallicity, and absorbing column, but the main thing it cannot tell us is the main thing it is most often used to infer: the differential emission measure distribution.

  14. Convolutional Neural Network Based DEM Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples; a nonlocal algorithm was introduced to perform the task, and many experiments showed that the strategy is feasible. In that publication, the learning examples are defined as part of the original DEM together with its related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may cause incompatibility and robustness problems. To overcome them, we investigate a convolutional neural network (CNN) based method. The input of the network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. The designed network is trained on learning DEMs by minimizing the error between the output and the expected high resolution DEM. In practical applications, a test DEM is input to the network and a super-resolution result is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
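
    The abstract describes a three-layer detect-compress-reconstruct network; a sketch in that spirit could look as follows, with the low-resolution DEM upsampled to the target grid before entering the network. The kernel sizes and channel counts are assumptions in the style of SRCNN, not the paper's settings.

        import torch
        import torch.nn as nn

        class DemSRNet(nn.Module):
            # Three-layer CNN: detect features, compress them, reconstruct the DEM.
            def __init__(self):
                super().__init__()
                self.detect      = nn.Conv2d(1, 64, kernel_size=9, padding=4)
                self.compress    = nn.Conv2d(64, 32, kernel_size=1)
                self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)
                self.relu = nn.ReLU()

            def forward(self, lowres_dem):   # input upsampled to the target grid
                x = self.relu(self.detect(lowres_dem))
                x = self.relu(self.compress(x))
                return self.reconstruct(x)

        # Training minimises the error against the expected high-resolution DEM:
        # loss = nn.MSELoss()(model(lr_batch), hr_batch)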

  15. The influence of slope profile extraction techniques and DEM resolution on 2D rockfall simulation

    NASA Astrophysics Data System (ADS)

    Wang, X.; Frattini, P.; Agliardi, F.; Crosta, G. B.

    2012-04-01

    The development of advanced 3D rockfall modelling algorithms and tools during the last decade has provided insights into the topographic controls on the quality and reliability of rockfall simulation results. These controls include DEM resolution and roughness, and depend on the adopted rockfall simulation approach and DEM generation techniques. Despite the development of 3D simulations, the 2D modelling approach remains suitable and convenient in some cases. Therefore, the accuracy of high-quality 3D descriptions of topography must be preserved when extracting slope profiles for 2D simulations. In this perspective, this study compares and evaluates three different techniques commonly used to extract slope profiles from DEMs, in order to assess their suitability and their effects on rockfall simulation results. The methods are: (A) an "interpolated shape" method (ESRI 3D Analyst), (B) a raw raster sampling method (EZ Profiler), and (C) a vector TIN sampling method (ESRI 3D Analyst). The raster DEMs used in the study were all derived from the same TIN DEM used for method C. For a raster DEM, the "interpolated shape" method (A) extracts the profile by bilinearly interpolating the elevation among the four neighbouring cells at each sampling location along the profile trace. The EZ Profiler extension (B) extracts the profile by sampling elevation values directly from the DEM raster grid at each sampling location. These methods have been compared with the extraction of profiles from the TIN DEM (C), where slope profile elevations are obtained directly by sampling the TIN triangular facets. 2D rockfall simulations performed using widely used commercial software (Rocfall™) with the different profiles show that: (1) methods A and C provide similar results; (2) runout simulated using profiles obtained by method A is usually shorter than with method C; (3) method B produces abrupt horizontal steps in the profiles, resulting in unrealistic runout. To study the influence of DEM
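
    Method A is the easiest of the three to reproduce: the sketch below bilinearly interpolates among the four neighbouring cells at each station along the profile trace, in grid coordinates. The sampling points are assumed to lie at least one cell inside the DEM, and the names are illustrative.

        import numpy as np

        def bilinear_profile(dem, x0, y0, x1, y1, n=200):
            # "Interpolated shape"-style profile: bilinear interpolation among
            # the four neighbouring cells at each of n sampling locations.
            xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
            i0, j0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
            fy, fx = ys - i0, xs - j0
            z = (dem[i0, j0]       * (1 - fx) * (1 - fy) +
                 dem[i0, j0 + 1]   * fx       * (1 - fy) +
                 dem[i0 + 1, j0]   * (1 - fx) * fy +
                 dem[i0 + 1, j0 + 1] * fx     * fy)
            dist = np.hypot(xs - x0, ys - y0)   # chainage along the trace
            return dist, z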

  16. Operational TanDEM-X DEM calibration and first validation results

    NASA Astrophysics Data System (ADS)

    Gruber, Astrid; Wessel, Birgit; Huber, Martin; Roth, Achim

    2012-09-01

    In June 2010, the German TanDEM-X satellite was launched. Together with its twin satellite TerraSAR-X, it flies in a close formation enabling single-pass SAR interferometry. The primary goal of the TanDEM-X mission is the derivation of a global digital elevation model (DEM) with unprecedented global accuracies of 10 m absolute and 2 m relative height. A significant calibration effort is required to achieve this high quality worldwide. In spite of intensive instrument calibration and highly accurate orbit and baseline determination, some systematic height errors, such as offsets and tilts of the order of some meters, remain in the interferometric DEMs and have to be determined and removed during TanDEM-X DEM calibration. The objective of this article is to present an approach for estimating correction parameters for the remaining systematic height errors, applicable to interferometric height models. The approach is based on a least-squares block adjustment using the elevations of ICESat GLA14 data as ground control points and connecting points of adjacent, overlapping DEMs as tie-points. In the first part, its implementation in DLR's ground segment is outlined. In the second part, the approach is applied to and validated for two of the first TanDEM-X DEM test sites, using independent reference data, in particular high-resolution reference DEMs and GPS tracks. The results show that the absolute height errors of the TanDEM-X DEM are small in these cases, mostly of the order of 1-2 m. An additional benefit of the proposed block adjustment method is that it improves the relative accuracy of adjacent DEMs.
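
    Ignoring the tie-point part, the per-scene core of such a block adjustment reduces to fitting an offset and two tilts to the DEM-minus-ICESat height differences at the control points, e.g. by ordinary least squares; the full method solves all scenes and tie-point constraints jointly. A toy sketch with assumed names:

        import numpy as np

        def estimate_tilt_plane(x, y, dh):
            # Estimate offset and tilts (a + b*x + c*y) that best explain the
            # height differences dh = DEM - ICESat at control points.
            A = np.column_stack([np.ones_like(x), x, y])
            params, *_ = np.linalg.lstsq(A, dh, rcond=None)
            return params   # subtract a + b*x + c*y from the scene DEM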

  17. Statistic Tests Aided Multi-Source DEM Fusion

    NASA Astrophysics Data System (ADS)

    Fu, C. Y.; Tsay, J. R.

    2016-06-01

    Since the land surface changes both naturally and through human activity, DEMs have to be updated continually so that applications can use the latest elevation data. However, the cost of wide-area DEM production is high. DEMs that cover the same area but differ in quality, grid size, generation time or production method are called multi-source DEMs, and fusing them provides a low-cost solution for DEM updating. The coverage of each DEM has to be classified in advance according to slope and visibility, because the precision of DEM grid points differs between areas of different slope and visibility. Next, a difference DEM (dDEM) is computed by subtracting one DEM from the other. It is assumed that a dDEM containing only random error obeys a normal distribution; therefore, a Student's t-test is implemented for blunder detection, and three kinds of rejected grid points are generated. The first kind consists of blunders and has to be eliminated. The second comprises points in change areas, where the latest data are taken as the fusion result. The third, points rejected as type I errors, are correct data and have to be retained for fusion. The experimental results show that using DEMs with terrain classification gives better blunder-detection results, and a proper setting of the significance level (α) can detect real blunders without creating too many type I errors. Weighted averaging is chosen as the DEM fusion algorithm, with the a priori precisions estimated from our national DEM production guideline used to define the weights. An F-test (Fisher's test) is implemented to verify that the a priori precisions correspond to the RMSEs of the blunder-detection result.
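
    A compressed sketch of the test-then-fuse idea for two DEMs of the same area, assuming per-cell a priori standard deviations and independence between the datasets: flag dDEM outliers against their propagated sigma, keep the newer data in flagged (change or blunder) cells, and combine the rest by inverse-variance weighted averaging. The terrain classification and type-I-error bookkeeping of the paper are omitted.

        import numpy as np

        def fuse_two_dems(dem_a, dem_b, sigma_a, sigma_b, t_crit=1.96):
            # Two-sided test of the dDEM against its a priori standard
            # deviation, then inverse-variance weighted averaging.
            ddem = dem_a - dem_b
            sigma_d = np.hypot(sigma_a, sigma_b)        # assuming independence
            blunder = np.abs(ddem) > t_crit * sigma_d
            wa, wb = 1.0 / sigma_a**2, 1.0 / sigma_b**2
            fused = (wa * dem_a + wb * dem_b) / (wa + wb)
            fused[blunder] = dem_b[blunder]             # e.g. keep the newer DEM
            return fused, blunder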

  18. Morphometric evaluation of the Afşin-Elbistan lignite basin using kernel density estimation and Getis-Ord's statistics of DEM derived indices, SE Turkey

    NASA Astrophysics Data System (ADS)

    Sarp, Gulcan; Duzgun, Sebnem

    2015-11-01

    A morphometric analysis of river networks, basins and relief using geomorphic indices and geostatistical analyses of a Digital Elevation Model (DEM) is a useful tool for discussing the morphometric evolution of a basin area. In this study, three different indices, the valley floor width to height ratio (Vf), the stream gradient (SL) and stream sinuosity, were applied to the Afşin-Elbistan lignite basin to test for imprints of tectonic activity. Perturbations of these indices are usually indicative of differences in the resistance of outcropping lithological units to erosion and of active faulting. To map clusters of high and low index values, kernel density estimation (K) and the Getis-Ord Gi∗ statistic were applied to the DEM-derived indices. The K method and the Gi∗ statistic, by highlighting hot spots and cold spots of the SL index, stream sinuosity and Vf index values, helped to identify the relative tectonic activity of the basin area. The results indicated that estimation by K and Gi∗, including three conceptualizations of spatial relationships (CSR) for hot spots (percent volume contours 50 and 95, categorized as high and low, respectively), yielded very similar results in regions of high and of low tectonic activity. According to the K and Getis-Ord Gi∗ statistics, the northern, northwestern and southern parts of the basin indicate high tectonic activity, whereas the low-elevation plain in the central part of the basin shows relatively low tectonic activity.

  19. Rapid Geometric Correction of SSC TerraSAR-X Images with Direct Georeferencing, Global DEM and Global Geoid Models

    NASA Astrophysics Data System (ADS)

    Vassilaki, D. I.; Stamos, A. A.; Ioannidis, C.

    2013-05-01

    In this paper a process for rapid geometric correction of slant range SAR images is presented. The process is completely independent of ground control information thanks to the direct georeferencing method capabilities offered by the TerraSAR-X sensor. The process is especially rapid due to the use of readily available global DEMs and global geoid models. An additional advantage of this process is its flexibility. If a more accurate local DEM or local geoid model is readily available it can be used instead of the global DEM or global geoid model. The process is applied to geometrically correct a SSC TerraSAR-X image over a sub-urban mountainous area using the SRTM and the ASTER global DEMs and the EGM2008 global geoid model. Additionally two local, more accurate DEMs, are used. The accuracy of the process is evaluated by independent check points.

  20. Accuracy and reliability of the Hungarian digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Detrekoi, Akos; Melykuti, Gabor; Szabo, Gyorgy

    1994-08-01

    In the period 1991-92, a 50 m × 50 m grid digital elevation model (DEM) was created in Hungary. The design and quality control of the DEM are discussed in this paper. The paper has three parts: (1) the data acquisition methods for the DEM by scanning and photogrammetry are discussed, (2) a general overview of the accuracy and reliability of DEMs is given, and (3) the algorithm for checking the data and some general conclusions about the quality-control activity for the Hungarian DEM are reviewed.

  1. Quality Test of Various Existing DEMs in Indonesia Toward a 10-Meter National DEM

    NASA Astrophysics Data System (ADS)

    Amhar, Fahmi

    2016-06-01

    Indonesia has various DEMs from many sources, with acquisition dates spread over the past two decades. There are DEMs from spaceborne systems (Radarsat, TerraSAR-X, ALOS, ASTER GDEM, SRTM), from airborne systems (IFSAR, lidar, aerial photos) and from terrestrial surveys. The research objective is to test their quality and to determine how to extract the best DEM for a particular area. The method uses differential GPS levelling with geodetic GPS equipment at places verified not to have changed during the past 20 years. The results show that the DEMs from TerraSAR-X and SRTM30 have the best quality (RMSE 3.1 m and 3.5 m, respectively). Based on this research, it was inferred that vertical accuracy remains positively correlated with spatial resolution: the coarser the resolution of a DEM, the less precise the resulting vertical heights.

  2. Nonlocal similarity based DEM super resolution

    NASA Astrophysics Data System (ADS)

    Xu, Zekai; Wang, Xuewen; Chen, Zixuan; Xiong, Dongping; Ding, Mingyue; Hou, Wenguang

    2015-12-01

    This paper discusses a new topic, DEM super resolution, which improves the resolution of an original DEM based on partial new measurements obtained at high resolution. A nonlocal algorithm is introduced to perform this task. The original DEM was first divided into overlapping patches, which were classified either as "test" or "learning" data depending on whether or not they are related to high resolution measurements. For each test patch, similar patches in the learning dataset were identified via template matching. Finally, the high resolution DEM of the test patch was restored by a weighted sum of similar patches, under the condition that the reconstruction weights are the same in the different resolution cases. A key assumption of this strategy is that there are some repeated or similar modes in the original DEM, which is quite common. Experiments demonstrate that a DEM can be restored in this way, preserving details without introducing artifacts. Statistical analysis also shows that this method achieves higher accuracy than traditional interpolation methods.
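
    The core nonlocal step can be sketched compactly: given a low-resolution test patch and a library of low/high-resolution learning patch pairs, reconstruct the high-resolution patch as a similarity-weighted sum of the best-matching library patches, applying the same weights across resolutions. The patch count k and bandwidth h are assumed tuning parameters, and the names are illustrative.

        import numpy as np

        def nonlocal_patch_sr(test_lr, lib_lr, lib_hr, k=8, h=1.0):
            # Restore one high-res patch as a weighted sum of the high-res
            # counterparts of the k most similar low-res learning patches.
            d = ((lib_lr - test_lr) ** 2).sum(axis=(1, 2))   # SSD to each patch
            nearest = np.argsort(d)[:k]
            w = np.exp(-d[nearest] / (h ** 2))               # similarity weights
            w /= w.sum()
            return np.tensordot(w, lib_hr[nearest], axes=1)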

  3. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.

  4. Evaluation Methods Sourcebook.

    ERIC Educational Resources Information Center

    Love, Arnold J., Ed.

    The chapters commissioned for this book describe key aspects of evaluation methodology as they are practiced in a Canadian context, providing representative illustrations of recent developments in evaluation methodology as it is currently applied. The following chapters are included: (1) "Program Evaluation with Limited Fiscal and Human Resources"…

  5. Selection: Evaluation and methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Procedures to collect and to analyze data for genetic improvement of dairy cattle are described. Methods of identification and milk recording are presented. Selection traits include production (milk, fat, and protein yields and component percentages), conformation (final score and linear type traits...

  6. Failure and frictional sliding envelopes in three-dimensional stress space: Insights from Distinct Element Method (DEM) models and implications for the brittle-ductile transition of rock

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin; Childs, Conrad; Manzocchi, Tom

    2013-04-01

    Rocks deformed at low confining pressure are brittle, meaning that after peak stress the strength decreases to a residual value determined by frictional sliding. The difference between the peak and residual value is the stress drop. At high confining pressure, however, no stress drop occurs. The transition pressure at which no loss in strength occurs is a possible definition of the brittle-ductile transition. The Distinct Element Method (DEM) is used to illustrate how this type of brittle-ductile transition emerges from a simple model in which rock is idealised as an assemblage of cemented spherical unbreakable grains. These bonded particle models are subjected to loading under constant mean stress and stress ratio conditions using distortional periodic space, which eliminates possible boundary effects arising from the usage of rigid loading platens. Systematic variation of both mean stress and stress ratio allowed determination of the complete three dimensional yield, peak stress and residual strength envelopes. The models suggest that the brittle-ductile transition is a mean stress and stress ratio dependent space curve, which cannot be adequately described by commonly used failure criteria (e.g., Mohr-Coulomb, Drucker-Prager). The model peak strength data exhibit an intermediate principal stress dependency which is, at least qualitatively, similar to that observed for natural rocks deformed under polyaxial laboratory conditions. Comparison of failure envelopes determined for bonded particle models with and without bond shear failure suggests that the non-linear pressure dependence of strength (concave failure envelopes) is, at high mean stress, the result of microscopic shear failure, a result consistent with earlier two-dimensional numerical multiple-crack simulations [D. A. Lockner & T. R. Madden, JGR, Vol. 96, No. B12, 1991]. Our results may have implications for a wide range of geophysical research areas, including the strength of the crust, the seismogenic

  7. EVALUATION OF IGNITABILITY METHODS (LIQUIDS)

    EPA Science Inventory

    The purpose of the research was to evaluate the ignitability Methods 1010 (Pensky-Martens) and 1020 (Setaflash) as described by OSW Manual SW846 (1). The effort was designed to provide information on accuracy and precision of the two methods. During Phase I of the task, six stand...

  8. Evaluation of modal testing methods

    NASA Technical Reports Server (NTRS)

    Chen, J.-C.

    1984-01-01

    Modal tests are playing an increasingly important role in structural dynamics efforts that require analytical model verification or troubleshooting. Meanwhile, existing modal testing methods are undergoing great changes, and new methods are being created. Although devoted advocates of each method can be found to argue its relative advantages and disadvantages, the general superiority, if any, of one or the other is not yet evident. The Galileo spacecraft, a realistic, complex structural system, will be used as a test article for performing modal tests by various methods. The results will be used to evaluate the relative merits of the various modal testing methods.

  9. Satellite-derived Digital Elevation Model (DEM) selection, preparation and correction for hydrodynamic modelling in large, low-gradient and data-sparse catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Abdollah A.; Callow, John N.; McVicar, Tim R.; Van Niel, Thomas G.; Larsen, Joshua R.

    2015-05-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Topographic accuracy, methods of preparation and grid size are all important for hydrodynamic models to efficiently replicate flow processes. In remote and data-scarce regions, high-resolution DEMs are often not available and it is therefore necessary to evaluate lower-resolution data such as the Shuttle Radar Topography Mission (SRTM) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) for use within hydrodynamic models. This paper does this in three ways: (i) assessing point accuracy and geometric co-registration error of the original DEMs; (ii) quantifying the effects of DEM preparation methods (vegetation smoothing and hydrological correction) on hydrodynamic modelling relative accuracy; and (iii) quantifying the effect of the hydrodynamic model grid size (30-2000 m) and the associated relative computational costs (run time) on relative accuracy in model outputs. We initially evaluated the accuracy of the original SRTM (∼30 m) seamless C-band DEM (SRTM DEM) and second-generation products from the ASTER (ASTER GDEM) against registered survey marks and altimetry data points from the Ice, Cloud, and land Elevation Satellite (ICESat). The SRTM DEM (RMSE = 3.25 m) had higher accuracy than the ASTER GDEM (RMSE = 7.43 m). Based on these results, the original SRTM DEM and ASTER GDEM, along with vegetation-smoothed and hydrologically corrected versions, were prepared and used to simulate three flood events along a 200 km stretch of the low-gradient Thompson River, in arid Australia (using five metrics: peak discharge, peak height, travel time, terminal water storage and flood extent). The hydrologically corrected DEMs performed best across these metrics in simulating floods compared with vegetation-smoothed DEMs and original DEMs. The response of model performance to grid size was non
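
    The point-accuracy step reduces to differencing DEM elevations sampled at control points against the reference elevations. A minimal sketch, assuming the DEM has already been sampled at the ICESat/survey-mark locations (the arrays below are hypothetical):

        import numpy as np

        def vertical_accuracy(dem_elev, ref_elev):
            """RMSE and mean bias of DEM elevations against reference points."""
            diff = np.asarray(dem_elev) - np.asarray(ref_elev)
            return np.sqrt(np.mean(diff ** 2)), diff.mean()

        rmse, bias = vertical_accuracy([101.2, 98.7, 102.4], [100.0, 99.5, 101.0])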

  10. Separability of soils in a tallgrass prairie using SPOT and DEM data

    NASA Technical Reports Server (NTRS)

    Su, Haiping; Ransom, Michel D.; Yang, Shie-Shien; Kanemasu, Edward T.

    1990-01-01

    An investigation is conducted which uses a canonical transformation technique to reduce the features from SPOT and DEM data and evaluates the statistical separability of several prairie soils from the canonically transformed variables. Both SPOT and DEM data were gathered for a tallgrass prairie near Manhattan, Kansas, and high-resolution SPOT satellite images were integrated with DEM data. Two canonical variables derived from training samples were selected, and it is suggested that canonically transformed data were superior to combined SPOT and DEM data. High-resolution SPOT images and DEM data can be used to aid second-order soil surveys in grasslands.
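
    Canonical transformation of labeled training samples is closely related to linear discriminant analysis; a sketch with scikit-learn, where the feature matrix (SPOT bands plus DEM derivatives) and soil class labels are placeholders, not the study's data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X = np.random.rand(300, 6)                # pixels x (SPOT bands + DEM derivatives)
        y = np.random.randint(0, 3, size=300)     # soil class labels

        lda = LinearDiscriminantAnalysis(n_components=2)   # two canonical variables
        canonical = lda.fit_transform(X, y)                # (300, 2) canonical scores

    Class separability can then be judged from the spread of class means relative to within-class scatter in the canonical space.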

  11. Hydrologic enforcement of lidar DEMs

    USGS Publications Warehouse

    Poppenga, Sandra K.; Worstell, Bruce B.; Danielson, Jeffrey J.; Brock, John C.; Evans, Gayla A.; Heidemann, H. Karl

    2014-01-01

    Hydrologic-enforcement (hydro-enforcement) of light detection and ranging (lidar)-derived digital elevation models (DEMs) modifies the elevations of artificial impediments (such as road fills or railroad grades) to simulate how man-made drainage structures such as culverts or bridges allow continuous downslope flow. Lidar-derived DEMs contain an extremely high level of topographic detail; thus, hydro-enforced lidar-derived DEMs are essential to the U.S. Geological Survey (USGS) for complex modeling of riverine flow. The USGS Coastal and Marine Geology Program (CMGP) is integrating hydro-enforced lidar-derived DEMs (land elevation) and lidar-derived bathymetry (water depth) to enhance storm surge modeling in vulnerable coastal zones.
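
    Conceptually, hydro-enforcement lowers the DEM cells crossing an artificial impediment so flow can continue downslope through the culvert or bridge. A minimal sketch, assuming the cell path of the drainage structure is already known (this illustrates the idea, not the USGS production workflow):

        import numpy as np

        def hydro_enforce(dem, culvert_cells):
            """Lower DEM cells along a culvert path ordered inlet -> outlet."""
            out = dem.copy()
            rows, cols = map(list, zip(*culvert_cells))
            ramp = np.linspace(dem[rows[0], cols[0]], dem[rows[-1], cols[-1]],
                               len(culvert_cells))    # linear grade through the fill
            out[rows, cols] = np.minimum(out[rows, cols], ramp)
            return out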

  12. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration’s use in evaluating safety plans submitted by the railroad industry.

  13. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as, for example, overland flow modelling or viewshed analysis. DEMs generated from digitising contour lines or obtained by LiDAR or satellite data are now widely available. However, in some cases, the area of study is covered by more than one of the available elevation data sets. In these cases the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability in merging DEMs and, based on evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements when compared to DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, ensuring also smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g., visibility analysis or spotting irregularities in landforms, and to modelling terrain phenomena such as overland flow.
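
    The abstract does not spell out the conventional methods; a common baseline is distance-weighted feathering across the overlap, sketched below for two co-registered grids (this illustrates the general idea, not the MBlend algorithm):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def feather_merge(dem_a, dem_b, valid_a, valid_b):
            """Blend two DEM grids; weights taper linearly across the overlap."""
            da = distance_transform_edt(valid_a)   # cells into A from its data edge
            db = distance_transform_edt(valid_b)   # cells into B from its data edge
            w = np.where(da + db > 0, da / (da + db + 1e-12), 0.0)
            return np.where(valid_a & ~valid_b, dem_a,
                   np.where(valid_b & ~valid_a, dem_b,
                            w * dem_a + (1.0 - w) * dem_b))

    Feathering avoids elevation steps at the seam but can still produce slope and aspect artefacts in the transition zone, which is the problem MBlend targets.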

  14. Evaluation of turbulence mitigation methods

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; Huebner, Claudia S.; Dijk, Judith; Schutte, Klamer; Schwering, Piet B. W.

    2014-05-01

    Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. There exist many different methods to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium-turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation and deconvolution. The second method consists of local motion compensation, fore-/background segmentation and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is performed on representative sequences captured during a NATO SET 165 trial in Dayton. The amount of blurring and tilt in the imagery seems to be a relevant measure for such an evaluation. It is shown that both methods improve the imagery by reducing the blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented which show that turbulence mitigation using the first method improves the recognition range by up to 25% for an operational optical system.

  15. The Double Hierarchy Method. A parallel 3D contact method for the interaction of spherical particles with rigid FE boundaries using the DEM

    NASA Astrophysics Data System (ADS)

    Santasusana, Miquel; Irazábal, Joaquín; Oñate, Eugenio; Carbonell, Josep Maria

    2016-07-01

    In this work, we present a new methodology for the treatment of the contact interaction between rigid boundaries and spherical discrete elements (DE). Rigid body parts are present in most large-scale simulations. The surfaces of the rigid parts are commonly meshed with a finite element-like (FE) discretization. The contact detection and calculation between those DE and the discretized boundaries is not straightforward and has been addressed by different approaches. The algorithm presented in this paper considers the contact of the DEs with the geometric primitives of a FE mesh, i.e. facet, edge or vertex. To do so, the original hierarchical method presented by Horner et al. (J Eng Mech 127(10):1027-1032, 2001) is extended with a new insight, leading to a robust, fast and accurate 3D contact algorithm which is fully parallelizable. The implementation of the method has been developed to deal ideally with triangles and quadrilaterals. If the boundaries are discretized with other types of geometry, the method can easily be extended to higher-order planar convex polyhedra. A detailed description of the procedure followed to treat a wide range of cases is presented. The developed algorithm is described and validated with several practical examples. The parallelization capabilities and the obtained performance are presented with the study of an industrial application example.
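
    The core geometric query behind such facet/edge/vertex hierarchies is the closest point on a triangle to the sphere centre, whose region classification tells which primitive carries the contact. A sketch following the standard closest-point construction (after Ericson, Real-Time Collision Detection); it illustrates the query, not the Double Hierarchy code itself:

        import numpy as np

        def closest_point_on_triangle(p, a, b, c):
            """Closest point to p on triangle abc, plus the contact primitive type."""
            ab, ac, ap = b - a, c - a, p - a
            d1, d2 = ab @ ap, ac @ ap
            if d1 <= 0 and d2 <= 0:
                return a, "vertex"
            bp = p - b
            d3, d4 = ab @ bp, ac @ bp
            if d3 >= 0 and d4 <= d3:
                return b, "vertex"
            if d1 * d4 - d3 * d2 <= 0 and d1 >= 0 and d3 <= 0:
                return a + ab * (d1 / (d1 - d3)), "edge"
            cp = p - c
            d5, d6 = ab @ cp, ac @ cp
            if d6 >= 0 and d5 <= d6:
                return c, "vertex"
            if d5 * d2 - d1 * d6 <= 0 and d2 >= 0 and d6 <= 0:
                return a + ac * (d2 / (d2 - d6)), "edge"
            va = d3 * d6 - d5 * d4
            if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
                return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6))), "edge"
            vb, vc = d5 * d2 - d1 * d6, d1 * d4 - d3 * d2
            denom = 1.0 / (va + vb + vc)
            return a + ab * (vb * denom) + ac * (vc * denom), "facet"

        def sphere_triangle_contact(center, radius, a, b, c):
            q, primitive = closest_point_on_triangle(center, a, b, c)
            gap = np.linalg.norm(center - q) - radius
            return gap <= 0.0, -gap, primitive    # in contact, overlap, primitive

    In a full DE-FE scheme, the primitive classification is what prevents double-counting of contact forces when a sphere touches several adjacent facets.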

  17. A comparative appraisal of hydrological behavior of SRTM DEM at catchment level

    NASA Astrophysics Data System (ADS)

    Sharma, Arabinda; Tiwari, K. N.

    2014-11-01

    The Shuttle Radar Topography Mission (SRTM) data has emerged as a global elevation dataset over the past decade because of its free availability, homogeneity and consistent accuracy compared to other global elevation datasets. The present study explores the consistency in hydrological behavior of the SRTM digital elevation model (DEM) with reference to an easily available regional 20 m contour-interpolated DEM (TOPO DEM). Analyses ranging from simple vertical accuracy assessment to hydrological simulation of the studied Maithon catchment, using the empirical USLE model and the semidistributed, physical SWAT model, were carried out. Moreover, terrain analysis involving hydrological indices was performed for comparative assessment of the SRTM DEM with respect to the TOPO DEM. Results reveal that the vertical accuracy of the SRTM DEM (±27.58 m) in the region is less than the specified standard (±16 m). Statistical analysis of hydrological indices such as the topographic wetness index (TWI), stream power index (SPI), slope length factor (SLF) and geometry number (GN) shows significant differences in the hydrological properties of the two studied DEMs. Estimation of soil erosion potentials of the catchment and conservation priorities of microwatersheds using the SRTM DEM and TOPO DEM produces considerably different results. Predicted soil erosion potential using the SRTM DEM is far higher than that obtained using the TOPO DEM. Similarly, conservation priorities determined using the two DEMs agree for only 34% of the microwatersheds of the catchment. ArcSWAT simulation reveals that runoff predictions are less sensitive to the choice between the two DEMs than sediment yield predictions. These results are vital to hydrological analysis as they help in understanding the hydrological behavior of a DEM without the influence of model structural and parameter uncertainty. It also reemphasized that SRTM DEM can be a valuable dataset for
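
    The topographic wetness index referred to above is TWI = ln(a / tan(beta)), with a the specific catchment area and beta the local slope. A per-cell sketch, assuming flow-accumulation and slope grids have already been derived from the DEM:

        import numpy as np

        def twi(flow_acc, slope_deg, cell_size):
            """TWI = ln(a / tan(beta)); flow_acc in cells, slope in degrees."""
            a = (flow_acc + 1.0) * cell_size                    # specific catchment area (m)
            beta = np.radians(np.clip(slope_deg, 0.1, None))    # avoid tan(0)
            return np.log(a / np.tan(beta))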

  18. EMDataBank unified data resource for 3DEM.

    PubMed

    Lawson, Catherine L; Patwardhan, Ardan; Baker, Matthew L; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P; Lagerstedt, Ingvar; Ludtke, Steven J; Pintilie, Grigore; Sala, Raul; Westbrook, John D; Berman, Helen M; Kleywegt, Gerard J; Chiu, Wah

    2016-01-01

    Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. PMID:26578576

  20. Validation of an LC-MS/MS method for the determination of epirubicin in human serum of patients undergoing drug eluting microsphere-transarterial chemoembolization (DEM-TACE).

    PubMed

    Sottani, Cristina; Leoni, Emanuela; Porro, Benedetta; Montagna, Benedetta; Amatu, Alessio; Sottotetti, Federico; Quaretti, Pietro; Poggi, Guido; Minoia, Claudio

    2009-11-01

    Drug Eluting Microsphere-Transarterial Chemoembolization (DEM-TACE) is a new delivery system for administering drugs in a controlled manner, useful for the chemoembolization of colorectal cancer metastases to the liver. DEM-TACE aims to achieve higher drug concentrations in the tumor with lower systemic concentrations than traditional cancer chemotherapy. Therefore, a specific, precise and sensitive LC-ESI-MS/MS assay procedure was designed to detect and quantify epirubicin at the concentrations expected from a transarterial chemoembolization with microspheres. Serum samples were kept acidic (pH approximately 3.5) and sample preparation consisted of a solid phase extraction (SPE) procedure with HLB OASIS cartridges using a methylene chloride/2-propanol/methanol mixture to recover epirubicin. The analyses consisted of reversed-phase high-performance liquid chromatography (rp-HPLC) coupled with tandem mass spectrometry (MS/MS). Accuracy, precision and matrix effect were assessed by analyzing four quality control samples (QCs) on five separate days. The validation parameters were assessed by recovery studies of spiked serum samples. Recoveries were found to vary between 92 and 98% at the QC levels (5, 40, 80 and 150 microg/L), with relative standard deviation (RSD) always less than 3.7%. The limit of detection (LOD) was set at 1 microg/L. The developed procedure has also been applied to investigate the different capability of two types of commercially available microspheres to release epirubicin into the human circulatory system. PMID:19783235

  1. Incorporating DEM uncertainty in coastal inundation mapping.

    PubMed

    Leon, Javier X; Heuvelink, Gerard B M; Phinn, Stuart R

    2014-01-01

    Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial resolution DEMs for mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. The 1,000 simulated error fields were added to the original DEM and the resulting surfaces reclassified using a hydrologically correct bathtub method. The probability of inundation to a scenario combining a 1 in 100 year storm event over a 1 m SLR was calculated by counting the proportion of times from the 1,000 simulations that a location was inundated. This probabilistic approach can be used in a risk-aversive decision making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% probability exceedance, the inundated area was approximately 11% larger than mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey
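
    A stripped-down sketch of the probabilistic step: perturb the DEM with simulated error fields, apply the bathtub rule, and count how often each cell floods. For brevity the errors here are uncorrelated Gaussian noise, whereas the study used sequential Gaussian simulation to preserve spatial correlation, and no hydrological connectivity test is applied:

        import numpy as np

        def inundation_probability(dem, water_level, error_std, n_sims=1000, seed=0):
            """Fraction of simulations in which each cell lies below water level."""
            rng = np.random.default_rng(seed)
            counts = np.zeros(dem.shape)
            for _ in range(n_sims):
                perturbed = dem + rng.normal(0.0, error_std, size=dem.shape)
                counts += perturbed < water_level      # simple bathtub rule
            return counts / n_sims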

  3. Radar and Lidar Radar DEM

    NASA Technical Reports Server (NTRS)

    Liskovich, Diana; Simard, Marc

    2011-01-01

    Using radar and lidar data, the aim is to improve 3D rendering of terrain, including digital elevation models (DEMs) and estimates of vegetation height and biomass in a variety of forest types and terrains. The 3D mapping of vegetation structure and its analysis are useful for determining the role of forests in climate change (carbon cycle), in providing habitat, and as providers of socio-economic services. This in turn can support development of more effective land-use management. The first part of the project was to characterize the Shuttle Radar Topography Mission DEM error with respect to ICESat/GLAS point estimates of elevation. We investigated potential trends with latitude, canopy height, signal-to-noise ratio (SNR), number of LiDAR waveform peaks, and maximum peak width. Scatter plots were produced for each variable and fitted with 1st- and 2nd-degree polynomials. Higher-order trends were visually inspected through filtering with mean and median filters. We also assessed trends in the DEM error variance. Finally, a map showing how DEM error is geographically distributed globally was created.

  4. Impacts of DEM uncertainties on critical source areas identification for non-point source pollution control based on SWAT model

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Dong, Guangxia; Wang, Qingrui; Liu, Lumeng; Yu, Wenwen; Men, Cong; Liu, Ruimin

    2016-09-01

    The impacts of different digital elevation model (DEM) resolutions, sources and resampling techniques on nutrient simulations using the Soil and Water Assessment Tool (SWAT) model have not been well studied. The objective of this study was to evaluate the sensitivity of DEM resolutions (from 30 m to 1000 m), sources (ASTER GDEM2, SRTM and Topo-DEM) and resampling techniques (nearest neighbor, bilinear interpolation, cubic convolution and majority) in the identification of non-point source (NPS) critical source areas (CSAs) based on nutrient loads using the SWAT model. The Xiangxi River, one of the main tributaries of the Three Gorges Reservoir in China, was selected as the study area. The following findings were obtained: (1) Elevation and slope extracted from the DEMs were more sensitive to DEM resolution changes. Compared with the results of the 30 m DEM, the 1000 m DEM underestimated the elevation and slope by 104 m and 41.57°, respectively; (2) The numbers of subwatersheds and hydrologic response units (HRUs) were considerably influenced by DEM resolution, but the total nitrogen (TN) and total phosphorus (TP) loads of each subwatershed showed higher correlations with different DEM sources; (3) DEM resolutions and sources had larger effects on CSA identification, while TN and TP CSAs showed different responses to DEM uncertainties. TN CSAs were more sensitive to resolution changes, exhibiting six distribution patterns across all DEM resolutions. TP CSAs were sensitive to source and resampling technique changes, exhibiting three distribution patterns for DEM sources and two distribution patterns for DEM resampling techniques. DEM resolution and source are the two most sensitive SWAT model DEM parameters that must be considered when nutrient CSAs are identified.
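
    The resampling techniques compared map naturally onto interpolation orders. A sketch with scipy for downsampling a 30 m grid to 1000 m (order 0 ≈ nearest neighbor, order 1 ≈ bilinear; order 3 is a cubic spline, the closest scipy analogue of cubic convolution, and "majority" has no direct scipy equivalent):

        import numpy as np
        from scipy.ndimage import zoom

        dem30 = np.random.rand(300, 300) * 500.0     # placeholder 30 m DEM

        factor = 30.0 / 1000.0                       # 30 m -> 1000 m
        dem_nearest  = zoom(dem30, factor, order=0)  # nearest neighbor
        dem_bilinear = zoom(dem30, factor, order=1)  # bilinear
        dem_cubic    = zoom(dem30, factor, order=3)  # cubic spline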

  5. DEM Particle Fracture Model

    SciTech Connect

    Zhang, Boning; Herbold, Eric B.; Homel, Michael A.; Regueiro, Richard A.

    2015-12-01

    An adaptive particle fracture model for the poly-ellipsoidal Discrete Element Method is developed. A poly-ellipsoidal particle breaks into several sub-poly-ellipsoids according to the Hoek-Brown fracture criterion, based on the continuum stress and the maximum tensile stress in contacts. Weibull theory is also introduced to account for statistics and size effects on particle strength. Finally, a high strain-rate split Hopkinson pressure bar experiment on silica sand is simulated using this newly developed model. Comparisons with experiments show that the particle fracture model captures the mechanical behavior of this experiment very well, both in stress-strain response and in particle size redistribution. The effects of density and packing of the samples are also studied in numerical examples.
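
    Under Weibull theory, each particle draws its strength from a size-dependent distribution, P_fail(sigma) = 1 - exp[-(V/V0)(sigma/sigma0)^m], so larger particles are statistically weaker. A sampling sketch; the modulus m and reference values are illustrative assumptions, not the paper's parameters:

        import numpy as np

        def weibull_strengths(volumes, sigma0, v0, m, seed=0):
            """Sample per-particle strengths by inverting the Weibull CDF."""
            rng = np.random.default_rng(seed)
            u = rng.uniform(size=len(volumes))
            v = np.asarray(volumes, dtype=float)
            return sigma0 * (-np.log(1.0 - u) * v0 / v) ** (1.0 / m)

        strengths = weibull_strengths(np.full(1000, 2e-9), sigma0=150e6,
                                      v0=1e-9, m=8)    # Pa, m^3; illustrative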

  6. Processing, validating, and comparing DEMs for geomorphic application on the Puna de Atacama Plateau, northwest Argentina

    NASA Astrophysics Data System (ADS)

    Purinton, Benjamin; Bookhagen, Bodo

    2016-04-01

    This study analyzes multiple topographic datasets derived from various remote-sensing methods for the Pocitos Basin of the central Puna Plateau in northwest Argentina at the border with Chile. Here, the arid climate, clear atmospheric conditions and lack of vegetation provide ideal conditions for remote sensing and Digital Elevation Model (DEM) comparison. We compare the following freely available DEMs: SRTM-X (spatial resolution of ~30 m), SRTM-C v4.1 (90 m), and ASTER GDEM2 (30 m). Additional DEMs for comparison are generated from optical and radar datasets acquired freely (ASTER Level 1B stereo pairs and Sentinel-1A radar), through research agreements (RapidEye Level 1B scenes, ALOS radar, and ENVISAT radar), and through commercial sources (TerraSAR-X / TanDEM-X radar). DEMs from ASTER (spatial resolution of 15 m) and RapidEye (~5-10 m) optical datasets are produced by standard photogrammetric techniques and have been post-processed for validation and alignment purposes. Because RapidEye scenes are captured at a low incidence angle (<20°) and stereo pairs are unavailable, merging and averaging methods for two to four overlapping scenes are explored for effective DEM generation. Sentinel-1A, TerraSAR-X / TanDEM-X, ALOS, and ENVISAT radar data are processed through interferometry, resulting in DEMs with spatial resolutions ranging from 5 to 30 meters. The SRTM-X dataset serves as a control in the creation of further DEMs, as it is widely used in the geosciences and represents the highest-quality DEM currently available. All DEMs are validated against over 400,000 differential GPS (dGPS) measurements gathered during four field campaigns in 2012 and 2014 to 2016. Of these points, more than 250,000 lie within the Pocitos Basin, with average vertical and horizontal accuracies of 0.95 m and 0.69 m, respectively. Dataset accuracy is judged by the lowest standard deviations of elevation compared with the dGPS data and with the SRTM-X control DEM. Of particular interest in

  7. Generating DEM from LIDAR data - comparison of available software tools

    NASA Astrophysics Data System (ADS)

    Korzeniowska, K.; Lacka, M.

    2011-12-01

    In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods motivated the aim of this study: to assess the algorithms available in various software tools for classifying LIDAR "point cloud" data, through a careful examination of the Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study; the area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset, generated using manual methods to eliminate non-ground points. Surfaces were analysed using raster analysis. Minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
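
    The raster comparison reduces to cell-wise difference statistics between each generated DEM and the reference surface; a minimal sketch:

        import numpy as np

        def dem_difference_stats(dem, reference):
            """Min/max/mean difference and RMSE between two aligned DEM grids."""
            diff = dem - reference
            return {"min": float(diff.min()), "max": float(diff.max()),
                    "mean": float(diff.mean()),
                    "rmse": float(np.sqrt(np.mean(diff ** 2)))}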

  8. Pragmatism, Evidence, and Mixed Methods Evaluation

    ERIC Educational Resources Information Center

    Hall, Jori N.

    2013-01-01

    Mixed methods evaluation has a long-standing history of enhancing the credibility of evaluation findings. However, using mixed methods in a utilitarian way implicitly emphasizes convenience over engaging with its philosophical underpinnings (Denscombe, 2008). Because of this, some mixed methods evaluators and social science researchers have been…

  9. EVALUATION OF COMPOSITE RECEPTOR METHODS

    EPA Science Inventory

    A composite receptor model for PM-10 apportionment was evaluated to determine the stability of its solutions and to devise cost-effective measurement strategies. Ambient aerosol samples used in the evaluation were obtained with dichotomous samplers at three sites in the vicinity ...

  10. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prognosis, using Least Squares Collocation on the residuals remaining after the fitted surface is removed. Our results, based on cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.
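
    A compact sketch of the two-step scheme: after subtracting the fitted trend surface, the remaining residuals are predicted at new points by Least Squares Collocation with an assumed covariance model (here Gaussian; the variance, correlation length and noise values are illustrative, and the coordinate arrays are hypothetical (n, 2) inputs):

        import numpy as np

        def lsc_predict(xy_obs, res_obs, xy_new, c0=1.0, corr_len=500.0, noise=0.05):
            """LSC prediction: s_new = C_ns (C_ss + noise I)^-1 res_obs."""
            def cov(a, b):
                d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
                return c0 * np.exp(-(d / corr_len) ** 2)
            c_ss = cov(xy_obs, xy_obs) + noise * np.eye(len(xy_obs))
            c_ns = cov(xy_new, xy_obs)
            return c_ns @ np.linalg.solve(c_ss, res_obs)

    The predicted residuals are added back to the trend surface to obtain the final orthometric height prognosis.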

  11. Visualising DEM-related flood-map uncertainties using a disparity-distance equation algorithm

    NASA Astrophysics Data System (ADS)

    Brandt, S. Anders; Lim, Nancy J.

    2016-05-01

    The apparent absoluteness of information presented by crisp-delineated flood boundaries can lead to misconceptions among planners about the inherent uncertainties associated in generated flood maps. Even maps based on hydraulic modelling using the highest-resolution digital elevation models (DEMs), and calibrated with the most optimal Manning's roughness (n) coefficients, are susceptible to errors when compared to actual flood boundaries, specifically in flat areas. Therefore, the inaccuracies in inundation extents, brought about by the characteristics of the slope perpendicular to the flow direction of the river, have to be accounted for. Instead of using the typical Monte Carlo simulation and probabilistic methods for uncertainty quantification, an empirical-based disparity-distance equation that considers the effects of both the DEM resolution and slope was used to create prediction-uncertainty zones around the resulting inundation extents of a one-dimensional (1-D) hydraulic model. The equation was originally derived for the Eskilstuna River where flood maps, based on DEM data of different resolutions, were evaluated for the slope-disparity relationship. To assess whether the equation is applicable to another river with different characteristics, modelled inundation extents from the Testebo River were utilised and tested with the equation. By using the cross-sectional locations, water surface elevations, and DEM, uncertainty zones around the original inundation boundary line can be produced for different confidences. The results show that (1) the proposed method is useful both for estimating and directly visualising model inaccuracies caused by the combined effects of slope and DEM resolution, and (2) the DEM-related uncertainties alone do not account for the total inaccuracy of the derived flood map. Decision-makers can apply it to already existing flood maps, thereby recapitulating and re-analysing the inundation boundaries and the areas that are uncertain

  12. Influence of DEM resolution on drainage network extraction: A multifractal analysis

    NASA Astrophysics Data System (ADS)

    Ariza-Villaverde, A. B.; Jiménez-Hornero, F. J.; Gutiérrez de Ravé, E.

    2015-07-01

    Different hydrological algorithms have been developed to automatically extract drainage networks from digital elevation models (DEMs). D8 is the most widely used algorithm to delineate drainage networks and catchments from a DEM. This algorithm has certain advantages such as simplicity, the provision of a reasonable representation for convergent flow conditions and consistency among flow patterns, calculated contributing areas and the spatial representation of subcatchments. However, it has limitations in selecting suitable flow accumulation threshold values to determine the pixels that belong to drainage networks. Although the effects of DEM resolution on some terrain attributes, stream characterisation and watershed delineation have been studied, analyses of the influence of DEM resolution on flow accumulation threshold values have been limited. Recently, multifractal analyses have been successfully used to find appropriate flow accumulation threshold values. The application of this type of analysis to evaluate the relationship between DEM resolution and flow accumulation threshold value needs to be explored. Therefore, this study tested three DEM resolutions for four drainage basins with different levels of drainage network distribution by comparing the Rényi spectra of the drainage networks that were obtained with the D8 algorithm against those determined by photogrammetric restitution. According to the results, DEM resolution influences the selected flow accumulation threshold value and the simulated network morphology. The suitable flow accumulation threshold value increases as the DEM resolution increases and shows greater variability for basins with lower drainage densities. The links between DEM resolution and terrain attributes were also examined.
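
    The D8 rule itself is simple: each cell drains to whichever of its eight neighbours gives the steepest descent, with diagonal distances scaled by sqrt(2); cells whose accumulated upslope area exceeds the chosen threshold are then classified as channels. A direct (unoptimised) sketch of the direction step:

        import numpy as np

        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def d8_directions(dem, cell_size=1.0):
            """Index (0-7) of the steepest-descent neighbour, or -1 for pits."""
            nrows, ncols = dem.shape
            out = np.full(dem.shape, -1, dtype=int)
            for i in range(nrows):
                for j in range(ncols):
                    best_drop, best_k = 0.0, -1
                    for k, (di, dj) in enumerate(OFFSETS):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < nrows and 0 <= nj < ncols:
                            dist = cell_size * (np.sqrt(2.0) if di != 0 and dj != 0 else 1.0)
                            drop = (dem[i, j] - dem[ni, nj]) / dist
                            if drop > best_drop:
                                best_drop, best_k = drop, k
                    out[i, j] = best_k
            return out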

  13. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    NASA Astrophysics Data System (ADS)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

    Digital Elevation Models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains have a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known methods for acquiring topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but sometimes impossible for remote areas. This study aims to present the importance of accurate modelling of topography for flood modelling. Flood modelling was carried out for Samsun-Terme in the Black Sea region of Turkey. One DEM was obtained from point observations retrieved from 1/5000-scale orthophotos and 1/1000-scale point elevation data from field surveys at cross-sections; the river banks were corrected using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM was obtained from bathymetric surveys: 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared using 27 cross-sections. The maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm. The channel conveyance capacity in the base DEM is larger than in the scaled DEM, and the floodplain is modelled in more detail in the base DEM. MIKE21 with a flexible grid is used for 2-dimensional shallow water flow modelling. The models based on the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. Comparing the input hydrograph at the upstream end of the river with the output hydrograph at the downstream end, the attenuation is 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs, 3 hours, shows no difference between the two DEMs. Maximum flood extents differ for the two DEMs
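
    One common way to quantify the reported attenuation and lag is directly from the peaks of the inflow and outflow hydrographs; a sketch with hypothetical arrays:

        import numpy as np

        def attenuation_and_lag(t_hours, q_in, q_out):
            """Peak attenuation (%) and peak-to-peak lag (hours)."""
            i_in, i_out = int(np.argmax(q_in)), int(np.argmax(q_out))
            attenuation = 100.0 * (1.0 - q_out[i_out] / q_in[i_in])
            return attenuation, t_hours[i_out] - t_hours[i_in]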

  14. TES overlayed on MOLA DEM

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This image is TES thermal data (Orbit 222) overlaid on the MOLA DEM. The color scale is TES T18-T25, which is a cold-spot index. The grey scale is MOLA elevation in kilometers. Most cold spots can be attributed to surface spectral emissivity effects. Regions that are colored black-violet-blue have near-unity emissivity and are coarse-grained CO2. Regions that are yellow-red are fine-grained CO2. The red-white spot located at approximately 300W, 85N is our most likely candidate for a CO2 snow storm.

  15. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights evaluation methods for assessing the quality characteristics of text entities. The main concepts used in building and evaluating text entities are presented, along with aggregated metrics for orthogonality measurements. The process for automatic evaluation of the text entities is…

  16. The Safeguards Evaluation Method for evaluating vulnerability to insider threats

    SciTech Connect

    Al-Ayat, R.A.; Judd, B.R.; Renis, T.A.

    1986-01-01

    As protection of DOE facilities against outsiders increases to acceptable levels, attention is shifting toward achieving comparable protection against insiders. Since threats and protection measures for insiders are substantially different from those for outsiders, new perspectives and approaches are needed. One such approach is the Safeguards Evaluation Method. This method helps in assessing safeguards vulnerabilities to theft or diversion of special nuclear material (SNM) by insiders. The Safeguards Evaluation Method-Insider Threat is a simple model that can be used by safeguards and security planners to evaluate safeguards and proposed upgrades at their own facilities. A discussion of the Safeguards Evaluation Method is presented in this paper.

  17. Wiederbeginn nach dem Zweiten Weltkrieg [A New Beginning after the Second World War]

    NASA Astrophysics Data System (ADS)

    Strecker, Heinrich; Bassenge-Strecker, Rosemarie

    This chapter first describes the situation of statistics in Germany after the Second World War: the statistical services in the occupation zones had in part to be rebuilt, and the teaching of statistics at universities had to be restarted. In this situation the President of the Bavarian Statistical Office (Bayerisches Statistisches Landesamt), Karl Wagner, energetically supported by Gerhard Fürst, the later President of the Federal Statistical Office (Statistisches Bundesamt), took the initiative to re-found the German Statistical Society (Deutsche Statistische Gesellschaft, DStatG). The founding meeting in Munich in 1948 became a milestone in the history of the DStatG. The aim was to encourage all statisticians to work together, to raise their qualifications to the international level, and to promote the application of newer statistical methods in practice. There followed 24 years of fruitful work under Karl Wagner (1948-1960) and Gerhard Fürst (1960-1972). The chapter outlines the Statistical Weeks, the work of the committees and the publications of this period.

  18. An evaluation method for nanoscale wrinkle

    NASA Astrophysics Data System (ADS)

    Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.

    2016-06-01

    In this paper, a spectrum-based wrinkling analysis method using two-dimensional Fourier transformation is proposed to address the difficulty of nanoscale wrinkle evaluation. It extracts wrinkle characteristics, including wrinkling wavelength and direction, from a single wrinkling image. The evaluation results for nanoscale wrinkle characteristics obtained with this method agree with published experimental results within an error of 6%. It is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which contributes to revealing the mechanism of nanoscale wrinkling.
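
    The spectral step amounts to locating the dominant peak of the 2-D Fourier amplitude spectrum: its radial frequency gives the wavelength and its orientation the direction of the wrinkle wavevector (normal to the crests, defined up to 180°). A sketch for a grayscale image array:

        import numpy as np

        def wrinkle_wavelength_direction(img, pixel_size=1.0):
            """Dominant wrinkle wavelength and direction from the 2-D FFT peak."""
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
            cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
            spec[cy, cx] = 0.0                          # suppress residual DC term
            iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
            fy = (iy - cy) / (spec.shape[0] * pixel_size)
            fx = (ix - cx) / (spec.shape[1] * pixel_size)
            wavelength = 1.0 / np.hypot(fx, fy)
            direction = np.degrees(np.arctan2(fy, fx))  # wavevector orientation
            return wavelength, direction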

  19. TecDEM: A MATLAB based toolbox for tectonic geomorphology, Part 1: Drainage network preprocessing and stream profile analysis

    NASA Astrophysics Data System (ADS)

    Shahzad, Faisal; Gloaguen, Richard

    2011-02-01

    We present TecDEM, a software shell implemented in MATLAB that applies tectonic geomorphology tasks to digital elevation models (DEMs). The first part of this paper series describes drainage partitioning schemes and stream profile analysis. The graphical user interface of TecDEM provides several options: determining flow directions, stream vectorization, watershed delineation, Strahler order labeling, stream profile generation, knickpoint selection, and calculation of concavity, steepness and Hack indices. Knickpoints along selected streams, the stream profile analysis, and the Hack index per stream profile are computed using a semi-automatic method. TecDEM was used to extract and investigate the stream profiles in the Kaghan Valley (Northern Pakistan). Our interpretations of the TecDEM results correlate well with previous tectonic evolution models for this region. TecDEM is designed to assist geoscientists in applying complex tectonic geomorphology tasks to global DEM data.
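
    Concavity and steepness indices follow from the slope-area relation S = ks * A^(-theta), fitted in log-log space along a stream profile. A sketch with hypothetical per-pixel drainage area and slope arrays (TecDEM itself is MATLAB; the sketch below is Python for consistency with the other examples here):

        import numpy as np

        def concavity_steepness(area, slope):
            """Fit log10(S) = log10(ks) - theta*log10(A) along a profile."""
            mask = (area > 0) & (slope > 0)
            gradient, intercept = np.polyfit(np.log10(area[mask]),
                                             np.log10(slope[mask]), 1)
            return -gradient, 10.0 ** intercept      # theta, ks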

  20. Genetics | Selection: Evaluation and Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The procedures used for collecting and analyzing data for genetic improvement of dairy cattle are described. Methods of identification and milk recording are presented. Selection traits include production (milk, fat, and protein yields and component percentages), conformation (final score and linear...

  1. Shuttle radar DEM hydrological correction for erosion modelling in small catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Ben; Sidle, Roy; Bartley, Rebecca

    2016-04-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Catchment- and hillslope-scale runoff and sediment processes (i.e., patterns of overland flow, infiltration, subsurface stormflow and erosion) are all topographically mediated. In remote and data-scarce regions, high-resolution DEMs (LiDAR) are often not available, and moderate- to coarse-resolution digital elevation models (e.g., SRTM) have difficulty replicating detailed hydrological patterns, especially in relatively flat landscapes. Several surface reconditioning algorithms (e.g., smoothing) and "stream burning" techniques (e.g., Agree or ANUDEM), in conjunction with representations of the known stream network, have been used to improve DEM performance in replicating known hydrology. Detailed stream network data are not available at regional and national scales but can be derived at local scales from remotely sensed data. This research explores the implications of using high-resolution stream network data derived from Google Earth images for DEM hydrological correction, instead of coarse-resolution stream networks derived from topographic maps. The accuracy of the implemented method in producing hydrologically efficient DEMs was assessed by comparing the hydrological parameters derived from the modified DEMs with those from limited high-resolution airborne LiDAR DEMs. The degree of modification is dominated by the method used and the availability of stream network data. Although stream burning techniques improve DEMs hydrologically, they alter DEM characteristics in ways that may affect catchment boundaries, stream position and length, as well as secondary terrain derivatives (e.g., slope, aspect). Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.
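
    In its simplest form, stream burning just lowers the DEM along the rasterized stream network before flow routing; Agree-style algorithms additionally taper the trench into the surrounding terrain. A minimal sketch of the simple variant:

        import numpy as np

        def burn_streams(dem, stream_mask, depth=5.0):
            """Lower DEM cells on the mapped stream network by `depth` metres."""
            out = dem.copy()
            out[stream_mask] -= depth
            return out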

  2. Methods of evaluating hair growth.

    PubMed

    Chamberlain, Alexander J; Dawber, Rodney P R

    2003-02-01

    For decades, scientists and clinicians have examined methods of measuring scalp hair growth. With the development of drugs that stem or even reverse the miniaturization of androgenetic alopecia, there has been a greater need for reliable, economical and minimally invasive means of measuring hair growth and, specifically, response to therapy. We review the various methods of measurement described to date, their limitations and value to the clinician. In our opinion, the potential of computer-assisted technology in this field is yet to be maximized and the currently available tools are less than ideal. The most valuable means of measurement at the present time are global photography and phototrichogram-based techniques (with digital image analysis) such as the 'TrichoScan'. Subjective scoring systems are also of value in the overall assessment of response to therapy and these are under-utilized and merit further refinement. PMID:12581076

  3. Evaluation of Rhenium Joining Methods

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.; Morren, Sybil H.

    1995-01-01

    Coupons of rhenium-to-C103 flat plate joints, formed by explosive and diffusion bonding, were evaluated in a series of shear tests. Shear testing was conducted on as-received, thermally-cycled (100 cycles, from 21 to 1100 C), and thermally-aged (3 and 6 hrs at 1100 C) joint coupons. Shear tests were also conducted on joint coupons with rhenium and/or C103 electron beam welded tabs to simulate the joint's incorporation into a structure. Ultimate shear strength was used as a figure of merit to assess the effects of the thermal treatment and the electron beam welding of tabs on the joint coupons. All of the coupons survived thermal testing intact and without any visible degradation. Two different lots of as-received, explosively-bonded joint coupons had ultimate shear strengths of 281 and 310 MPa and 162 and 223 MPa, respectively. As-received, diffusion-bonded coupons had ultimate shear strengths of 199 and 348 MPa. For the most part, the thermally-treated and rhenium weld tab coupons had shear strengths slightly reduced or within the range of the as-received values. Coupons with C103 weld tabs experienced a significant reduction in shear strength. The degradation of strength appeared to be the result of a poor heat sink provided during the electron beam welding. The C103 base material could not dissipate heat as effectively as rhenium, leading to the formation of a brittle rhenium-niobium intermetallic.

  4. Methods of Generating and Evaluating Hypertext.

    ERIC Educational Resources Information Center

    Blustein, James; Staveley, Mark S.

    2001-01-01

    Focuses on methods of generating and evaluating hypertext. Highlights include historical landmarks; nonlinearity; literary hypertext; models of hypertext; manual, automatic, and semi-automatic generation of hypertext; mathematical models for hypertext evaluation, including computing coverage and correlation; human factors in evaluation; and…

  5. Evaluation Methods for Intelligent Tutoring Systems Revisited

    ERIC Educational Resources Information Center

    Greer, Jim; Mark, Mary

    2016-01-01

    The 1993 paper in "IJAIED" on evaluation methods for Intelligent Tutoring Systems (ITS) still holds up well today. Basic evaluation techniques described in that paper remain in use. Approaches such as kappa scores, simulated learners and learning curves are refinements on past evaluation techniques. New approaches have also arisen, in…

  6. Evaluation Measures and Methods: Some Intersections.

    ERIC Educational Resources Information Center

    Elliott, John

    The literature is reviewed for four combinations of evaluation measures and methods: traditional methods with traditional measures (T-Meth/T-Mea), nontraditional methods with traditional measures (N-Meth/T-Mea), traditional methods with nontraditional measures (T-Meth/N-Mea), and nontraditional methods with nontraditional measures (N-Meth/N-Mea).…

  7. Safeguards Evaluation Method for evaluating vulnerability to insider threats

    SciTech Connect

    Al-Ayat, R.A.; Judd, B.R.; Renis, T.A.

    1986-01-01

    As protection of DOE facilities against outsiders increases to acceptable levels, attention is shifting toward achieving comparable protection against insiders. Since threats and protection measures for insiders are substantially different from those for outsiders, new perspectives and approaches are needed. One such approach is the Safeguards Evaluation Method. This method helps in assessing safeguards vulnerabilities to theft or diversion of special nuclear material (SNM) by insiders. The Safeguards Evaluation Method-Insider Threat is a simple model that can be used by safeguards and security planners to evaluate safeguards and proposed upgrades at their own facilities. The method is used to evaluate the effectiveness of safeguards in both timely detection (in time to prevent theft) and late detection (after-the-fact). The method considers the various types of potential insider adversaries working alone or in collusion with other insiders. The approach can be used for a wide variety of facilities with various quantities and forms of SNM. An Evaluation Workbook provides documentation of the baseline assessment; this simplifies subsequent on-site appraisals. Quantitative evaluation is facilitated by an accompanying computer program. The method significantly increases an evaluation team's on-site analytical capabilities, thereby producing a more thorough and accurate safeguards evaluation.

  8. Hair Evaluation Methods: Merits and Demerits

    PubMed Central

    Dhurat, Rachita; Saraogi, Punit

    2009-01-01

    Various methods are available for the evaluation (diagnosis and/or quantification) of a patient presenting with hair loss. Hair evaluation methods are grouped into three main categories: non-invasive methods (e.g., questionnaire, daily hair counts, standardized wash test, 60-s hair count, global photographs, dermoscopy, hair weight, contrasting felt examination, phototrichogram, TrichoScan and polarizing and surface electron microscopy), semi-invasive methods (e.g., trichogram and unit area trichogram) and invasive methods (e.g., scalp biopsy). No single method is both 'ideal' and feasible. However, when interpreted with caution, these are valuable tools for patient diagnosis and monitoring. Daily hair counts, the wash test, etc. are good methods for primary evaluation of the patient and give an approximate assessment of the amount of shedding. Some methods, like global photography, form an important part of any hair clinic. Analytical methods like the phototrichogram are usually possible only in the setting of a clinical trial. Many of these methods (like the scalp biopsy) require expertise for both processing and interpretation. We reviewed the available literature in detail in light of the merits and demerits of each method. A plethora of newer methods is being introduced, which are relevant to the cosmetic industry/research. Such methods, as well as metabolic/hormonal evaluation, are not included in this review. PMID:20927232

  9. TanDEM-X high resolution DEMs and their applications to flow modeling

    NASA Astrophysics Data System (ADS)

    Wooten, Kelly M.

    Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high resolution DEMs on Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory to that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines in areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
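
    DOWNFLOW-style path modeling reduces to repeated steepest-descent walks over randomly perturbed copies of the DEM, with the union of paths approximating the flow field. A sketch of a single walk (unoptimised, interior cells only; not the FLOWGO/DOWNFLOW code):

        import numpy as np

        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def steepest_descent_path(dem, start, max_steps=100000):
            """Trace a flow line downhill until a pit or the grid edge."""
            path, (i, j) = [start], start
            for _ in range(max_steps):
                nbrs = [(dem[i + di, j + dj], i + di, j + dj)
                        for di, dj in OFFSETS
                        if 0 <= i + di < dem.shape[0] and 0 <= j + dj < dem.shape[1]]
                z, ni, nj = min(nbrs)
                if z >= dem[i, j]:        # local pit (or flat): stop
                    break
                i, j = ni, nj
                path.append((i, j))
            return path

    Running this on many noise-perturbed copies of the DEM (e.g., dem + rng.normal(0, dh, dem.shape)) and taking the union of visited cells gives a DOWNFLOW-like flow area.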

  10. How does modifying a DEM to reflect known hydrology affect subsequent terrain analysis?

    NASA Astrophysics Data System (ADS)

    Callow, John Nikolaus; Van Niel, Kimberly P.; Boggs, Guy S.

    2007-01-01

    Many digital elevation models (DEMs) have difficulty replicating hydrological patterns in flat landscapes. Efforts to improve DEM performance in replicating known hydrology have included a variety of soft (i.e. algorithm-based) approaches and hard techniques such as "stream burning" or "surface reconditioning" (e.g. Agree or ANUDEM). Using a representation of the known stream network, these methods trench or mathematically warp the original DEM to improve how accurately stream position, stream length and catchment boundaries replicate known hydrological conditions. However, these techniques permanently alter the DEM and may affect further analyses (e.g. slope). This paper explores the impact that commonly used hydrological correction methods (stream burning, Agree.aml, ANUDEM v4.6.3 and ANUDEM v5.1) have on the overall nature of a DEM, finding that different methods produce non-convergent outcomes for catchment parameters (such as catchment boundaries, stream position and length) and differentially compromise secondary terrain analysis. All hydrological correction methods improved the calculation of catchment area, stream position and length compared with the unmodified DEM, but they all increased catchment slope, and no single method performed best across all categories. Different hydrological correction methods changed elevation and slope in different spatial patterns and magnitudes, compromising the ability to derive catchment parameters and conduct secondary terrain analysis from a single DEM. Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.

  11. Statistical Morphometry of Small Martian Craters: New Methods and Results

    NASA Astrophysics Data System (ADS)

    Watters, W. A.; Geiger, L.; Fendrock, M.; Gibson, R.; Radford, A.

    2015-05-01

    Methods for automatic morphometric characterization of craters for large statistical studies; measured dependence of shape on size, terrain, modification, and velocity (via primary-to-secondary distance); evaluation of Ames Stereo Pipeline DEMs.

  12. Integration of 2-D hydraulic model and high-resolution LiDAR-derived DEM for floodplain flow modeling

    NASA Astrophysics Data System (ADS)

    Shen, D.; Wang, J.; Cheng, X.; Rui, Y.; Ye, S.

    2015-02-01

    The rapid progress of Light Detection And Ranging (LiDAR) technology has made the acquisition and application of high-resolution digital elevation model (DEM) data increasingly popular, especially with regard to the study of floodplain flow modeling. However, high-resolution DEM data include many redundant interpolation points, require a large amount of computation, and do not match the size of the computational mesh. These disadvantages are a common problem for floodplain flow modeling studies. Two-dimensional (2-D) hydraulic modeling, a popular method of analyzing floodplain flow, offers high precision of elevation parameterization for the computational mesh while ignoring much of the micro-topographic information in the DEM data itself. We offer a flood simulation method that integrates 2-D hydraulic model results and high-resolution DEM data, enabling the calculation of flood water levels in DEM grid cells through local inverse distance weighted interpolation. To eliminate false inundation areas during interpolation, it employs the run-length encoding method to mark the inundated DEM grid cells and determines the real inundation areas through the run-length boundary tracing technique, which solves the complicated problem of connectivity between DEM grid cells. We constructed a 2-D hydraulic model for the Gongshuangcha polder, a flood storage area of Dongting Lake, using our integrated method to simulate the floodplain flow. The results demonstrate that this method can solve DEM-associated problems efficiently and simulate flooding processes with greater accuracy than DEM-only simulations.
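
    The interpolation step described above is ordinary local inverse distance weighting from the hydraulic model's mesh nodes to the finer DEM cells. A minimal sketch, assuming water levels are known at mesh-node coordinates (the choice of k = 8 neighbours and power = 2 is illustrative):

    ```python
    import numpy as np

    def idw_water_levels(node_xy, node_wl, cell_xy, power=2, k=8):
        """Local inverse-distance-weighted interpolation of water levels
        from 2-D hydraulic mesh nodes onto DEM grid-cell centres."""
        node_xy = np.asarray(node_xy, float)
        node_wl = np.asarray(node_wl, float)
        levels = np.empty(len(cell_xy))
        for n, c in enumerate(np.asarray(cell_xy, float)):
            d = np.linalg.norm(node_xy - c, axis=1)
            near = np.argsort(d)[:k]            # k nearest mesh nodes
            if d[near[0]] < 1e-9:               # cell centre sits on a node
                levels[n] = node_wl[near[0]]
                continue
            w = 1.0 / d[near] ** power
            levels[n] = np.sum(w * node_wl[near]) / np.sum(w)
        return levels

    # Per-cell inundation depth, before connectivity screening:
    # depth = np.maximum(levels - dem_elevations, 0.0)
    ```

    The run-length boundary tracing described in the abstract is what then removes the spurious positive depths in cells that are not hydraulically connected to the flooded area.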

  13. Icesat Validation of Tandem-X I-Dems Over the UK

    NASA Astrophysics Data System (ADS)

    Feng, L.; Muller, J.-P.

    2016-06-01

    From the latest TanDEM-X mission (bistatic X-band interferometric SAR), globally consistent Digital Elevation Models (DEMs) will be available from 2017, but their accuracy has not yet been fully characterised. This paper presents the methods and implementation of statistical procedures for validating the vertical accuracy of TanDEM-X iDEMs at grid-spacings of approximately 12.5 m, 30 m and 90 m, based on processed ICESat data over the UK, in order to assess their potential extrapolation across the globe. Against ICESat GLA14 elevation data, the TanDEM-X iDEM has a bias and standard deviation of -0.028 ± 3.654 m over England and Wales and 0.316 ± 5.286 m over Scotland at 12.5 m grid-spacing, -0.073 ± 6.575 m at 30 m, and 0.0225 ± 9.251 m at 90 m. Moreover, 90% of all results at the three resolutions of TanDEM-X iDEM data (with a linear error at 90% confidence level) are below 16.2 m. These validation results also indicate that derivative topographic parameters (slope, aspect and relief) have a strong effect on the vertical accuracy of the TanDEM-X iDEMs. In high-relief and steep-slope terrain, large errors and data voids are frequent, and their location is strongly influenced by topography, whilst in low- to medium-relief and low-slope sites errors are smaller. ICESat-derived elevations are heavily influenced by surface slope within the 70 m footprint, and there are also slope-dependent errors in the TanDEM-X iDEMs.
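
    The reported figures are the usual bias ± standard deviation plus LE90 statistics of DEM-minus-ICESat height differences. A minimal sketch of how such numbers are computed (taking LE90 as the 90th percentile of absolute differences, one common convention):

    ```python
    import numpy as np

    def vertical_accuracy(dem_h, ref_h):
        """Bias +/- standard deviation and LE90 of DEM-minus-reference
        height differences (LE90 here = 90th percentile of |differences|)."""
        d = np.asarray(dem_h, float) - np.asarray(ref_h, float)
        return d.mean(), d.std(ddof=1), np.percentile(np.abs(d), 90.0)
    ```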

  14. An Operator Method for Evaluating Laplace Transforms

    ERIC Educational Resources Information Center

    Lanoue, B. G.; Yurekli, O.

    2005-01-01

    This note discusses a simple operator technique based on the differentiation and shifting properties of the Laplace transform to find Laplace transforms for various elementary functions. The method is simpler than known integration techniques to evaluate Laplace transforms.
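
    As a concrete instance of the two properties such an operator technique rests on (these are standard Laplace-transform identities, not reproduced from the note itself):

    ```latex
    \mathcal{L}\{e^{at}\}(s) = \frac{1}{s-a}, \qquad
    \mathcal{L}\{t\,f(t)\}(s) = -\frac{d}{ds}\,\mathcal{L}\{f\}(s)
    \quad\Longrightarrow\quad
    \mathcal{L}\{t\,e^{at}\}(s) = -\frac{d}{ds}\,\frac{1}{s-a} = \frac{1}{(s-a)^{2}} .
    ```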

  15. [Evaluation methods of HME with tracheostomized patients].

    PubMed

    Li, Min

    2014-03-01

    This paper introduced the measurement methods for heat and moisture exchangers (HMEs) used with tracheostomized patients, based on two main parameters (water loss and pressure drop), and proposed further HME evaluation indicators such as dead space and heat exchange rate; together, these parameters can be used to evaluate the adequacy of HME performance. PMID:24941781

  16. Creating Alternative Methods for Educational Evaluation.

    ERIC Educational Resources Information Center

    Smith, Nick L.

    1981-01-01

    A project supported by the National Institute of Education is adapting evaluation procedures from areas such as philosophy, geography, operations research, journalism, and film criticism. The need for such methods is reviewed, as is the context in which they function and their contributions to evaluation methodology. (Author/GK)

  17. DEVELOPMENT AND EVALUATION OF COMPOSITE RECEPTOR METHODS

    EPA Science Inventory

    A composite receptor method for PM-10 apportionment was evaluated to determine the stability of its solutions and to devise cost-effective measurement strategies. Aerosol samples used in the evaluation were collected during summer, 1982, by dichotomous samplers at three sites in ...

  18. Mapping debris-flow hazard in Honolulu using a DEM

    USGS Publications Warehouse

    Ellen, Stephen D.; Mark, Robert K.

    1993-01-01

    A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.

  19. DEM simulation of oblique boudinage

    NASA Astrophysics Data System (ADS)

    Komoroczi, Andrea; Abe, Steffen; Urai, Janos L.

    2013-04-01

    Boudinage occurs in mechanically layered rocks if there is a component of lengthening parallel to a brittle layer in a ductile matrix. Asymmetric boudin structures develop if the extension is not layer-parallel, and the boudin blocks rotate. The amount of block rotation is commonly used as a shear indicator and has therefore been well studied. However, fully oblique boudinage has not yet been modeled. We simulated full boudinage processes during layer-oblique extension using DEM simulation software. In our boudinage model, the initial setup consists of three layers: a brittle, obliquely oriented central layer in a ductile matrix. We simulated horizontal extension by applying vertical displacement: the top and bottom boundaries of the model were moved at a constant velocity, while the side boundaries were force-controlled by applying a constant confining force. By varying the cohesion of the competent layer, boudin blocks of various types and shapes developed. By varying the angle of the competent layer, the rotation of the boudin blocks changed; with higher dip of the competent layer, the rotation of the boudin blocks is more consistent. We also studied the stress field during the simulation. The results show that, in the case of a ductile matrix, disruption of the layer is driven by the angle of the layer and not by the orientation of the external stress field.

  20. Bathymetric survey of water reservoirs in north-eastern Brazil based on TanDEM-X satellite data.

    PubMed

    Zhang, Shuping; Foerster, Saskia; Medeiros, Pedro; de Araújo, José Carlos; Motagh, Mahdi; Waske, Bjoern

    2016-11-15

    Water scarcity in the dry season is a vital problem in dryland regions such as northeastern Brazil. Water supplies in these areas often come from numerous reservoirs of various sizes. However, inventory data for these reservoirs are often limited due to the expense and time required for their acquisition via field surveys, particularly in remote areas. Remote sensing techniques provide a valuable alternative to conventional reservoir bathymetric surveys for water resource management. In this study, single-pass TanDEM-X data acquired in bistatic mode were used to generate digital elevation models (DEMs) in the Madalena catchment, northeastern Brazil. Validation with differential global positioning system (DGPS) data from field measurements indicated an absolute elevation accuracy of approximately 1 m for the TanDEM-X derived DEMs (TDX DEMs). The DEMs derived from TanDEM-X data acquired at low water levels show significant advantages over bathymetric maps derived from field survey, particularly with regard to coverage, evenly distributed measurements and replication of reservoir shape. Furthermore, by mapping the dry reservoir bottoms with TanDEM-X data, TDX DEMs are free of emergent and submerged macrophytes, independent of water depth (e.g. >10 m), water quality and even weather conditions. Thus, the method is superior to other existing bathymetric mapping approaches, particularly for inland water bodies. The proposed approach relies on (nearly) dry reservoir conditions at times of image acquisition and is thus restricted to areas that show considerable water-level variations. However, comparisons between the TDX DEM and the bathymetric map derived from field surveys show that the amount of water retained during the dry phase has only marginal impact on the total water volume derived from the TDX DEM. Overall, DEMs generated from bistatic TanDEM-X data acquired in low water periods constitute a useful and efficient data source for deriving reservoir bathymetry and show
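
    Once a dry-bed DEM is available, stage-storage quantities follow directly. A minimal sketch of deriving reservoir volume at a given water level from a TDX-style DEM (uniform cell area assumed; function and argument names are illustrative):

    ```python
    import numpy as np

    def reservoir_volume(dem, water_level, cell_area_m2):
        """Integrate water depth over all cells lying below `water_level`
        to get the storage volume (m^3) implied by a dry-bed DEM."""
        depth = water_level - dem
        return float(np.nansum(np.where(depth > 0, depth, 0.0)) * cell_area_m2)

    # A stage-storage curve is this function swept over candidate levels:
    # volumes = [reservoir_volume(dem, z, cell_area_m2) for z in levels]
    ```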

  1. Fusion of high-resolution DEMs derived from COSMO-SkyMed and TerraSAR-X InSAR datasets

    NASA Astrophysics Data System (ADS)

    Jiang, Houjun; Zhang, Lu; Wang, Yong; Liao, Mingsheng

    2014-06-01

    Voids caused by shadow, layover, and decorrelation usually occur in digital elevation models (DEMs) of mountainous areas that are derived from interferometric synthetic aperture radar (InSAR) datasets. The presence of voids degrades the quality and usability of the DEMs, so void removal is considered an integral part of DEM production using InSAR data. The fusion of multiple DEMs has been widely recognized as a promising way to remove voids. Because the vertical accuracy of the individual DEMs can differ, the selection of optimum weights becomes the key problem in the fusion and is studied in this article. As a showcase, two high-resolution InSAR DEMs near Mt. Qilian in northwest China are created and then merged. The two pairs of InSAR data were acquired by TerraSAR-X from an ascending orbit and COSMO-SkyMed from a descending orbit. A maximum likelihood fusion scheme, with the weights optimally determined by the height of ambiguity and the variance of phase noise, is adopted to merge the two DEMs in our study. The fused DEM has a fine spatial resolution of 10 m and depicts the landform of the study area well. The percentage of void cells in the fused DEM is only 0.13%, while 6.9 and 5.7% of the cells in the COSMO-SkyMed DEM and the TerraSAR-X DEM, respectively, are originally voids. Using the ICESat/GLAS elevation data and the Chinese national DEM of scale 1:50,000 as references, we evaluate the vertical accuracy of the fused DEM as well as of the original InSAR DEMs. The results show that substantial improvements can be achieved by DEM fusion after atmospheric phase screen removal. The quality of the fused DEM can even meet the high-resolution terrain information (HRTI) standard.
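
    The weighting logic of such a maximum-likelihood fusion reduces, per pixel, to inverse-variance weighting of the two height estimates. A minimal sketch (per-pixel error variances would in practice be derived from the height of ambiguity and phase-noise variance, as the abstract notes; NaNs marking voids is an illustrative convention):

    ```python
    import numpy as np

    def fuse_dems(h1, v1, h2, v2):
        """Inverse-variance (maximum-likelihood) fusion of two co-registered
        DEMs h1, h2 with per-pixel height-error variances v1, v2.
        Where one DEM is void (NaN), the other's height is kept."""
        w1, w2 = 1.0 / v1, 1.0 / v2
        fused = (w1 * h1 + w2 * h2) / (w1 + w2)
        fused = np.where(np.isnan(h1), h2, fused)
        fused = np.where(np.isnan(h2) & ~np.isnan(h1), h1, fused)
        return fused
    ```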

  2. MARE2DEM: an open-source code for anisotropic inversion of controlled-source electromagnetic and magnetotelluric data using parallel adaptive 2D finite elements (Invited)

    NASA Astrophysics Data System (ADS)

    Key, K.

    2013-12-01

    This work announces the public release of an open-source inversion code named MARE2DEM (Modeling with Adaptively Refined Elements for 2D Electromagnetics). Although initially designed for the rapid inversion of marine electromagnetic data, MARE2DEM now supports a wide variety of acquisition configurations for both offshore and onshore surveys that utilize electric and magnetic dipole transmitters or magnetotelluric plane waves. The model domain is flexibly parameterized using a grid of arbitrarily shaped polygonal regions, allowing complicated structures such as topography or seismically imaged horizons to be easily assimilated. MARE2DEM efficiently solves the forward problem in parallel by dividing the input data parameters into smaller subsets using a parallel data decomposition algorithm. The data subsets are then solved in parallel using an automatic adaptive finite element method that iteratively solves the forward problem on successively refined finite element meshes until a specified accuracy tolerance is met, thus freeing the end user from the burden of designing an accurate numerical modeling grid. Regularized non-linear inversion for isotropic or anisotropic conductivity is accomplished with a new implementation of Occam's method referred to as fast-Occam, which is able to minimize the objective function in far fewer forward evaluations than required by the original method. This presentation will review the theoretical considerations behind MARE2DEM and use a few recent offshore EM data sets to demonstrate its capabilities and to showcase the software interface tools that streamline model building and data inversion.

  3. AN EVALUATION STUDY OF EPA METHOD 8

    EPA Science Inventory

    Techniques used in EPA Method 8, the source test method for acid mist and sulfur dioxide emissions from sulfuric acid plants, have been evaluated. Evidence is shown that trace amounts of peroxides in isopropyl alcohol result in the conversion of sulfur dioxide to sulfate and caus...

  4. Evaluation of Sight, Sound, Symbol Instructional Method.

    ERIC Educational Resources Information Center

    Massarotti, Michael C.; Slaichert, William M.

    Evaluated was the Sight-Sound-Symbol (S-S-S) method of teaching basic reading skills with four groups of 16 trainable mentally retarded children. The method involved use of a musical keyboard to teach children to identify numbers, letters, colors, and shapes. Groups either received individual S-S-S instruction for 10 minutes daily, received S-S-S…

  5. Assessment and Evaluation Methods for Access Services

    ERIC Educational Resources Information Center

    Long, Dallas

    2014-01-01

    This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…

  6. Creating High Quality DEMs of Large Scale Fluvial Environments Using Structure-from-Motion

    NASA Astrophysics Data System (ADS)

    Javernick, L. A.; Brasington, J.; Caruso, B. S.; Hicks, M.; Davies, T. R.

    2012-12-01

    During the past decade, advances in survey and sensor technology have generated new opportunities to investigate the structure and dynamics of fluvial systems. Key geomatic technologies include the Global Positioning System (GPS), digital photogrammetry, LiDAR, and terrestrial laser scanning (TLS). Their application has resulted in a profound increase in the dimensionality of topographic surveys - from cross-sections to distributed 3d point clouds and digital elevation models (DEMs). Each of these technologies has been used successfully to derive high quality DEMs of fluvial environments; however, they often require specialized and expensive equipment, such as a TLS or large format camera, and bespoke platforms such as survey aircraft, and consequently make data acquisition prohibitively expensive or highly labour intensive, thus restricting the extent and frequency of surveys. Recently, advances in computer vision and image analysis have led to the development of a novel photogrammetric approach that is fully automated and suitable for use with simple compact (non-metric) cameras. In this paper, we evaluate a new photogrammetric method, Structure-from-Motion (SfM), and demonstrate how this can be used to generate DEMs of comparable quality to airborne LiDAR, using consumer grade cameras at low cost. Using the SfM software PhotoScan (version 0.8.5), high quality DEMs were produced for a 1.6 km reach and a 3.3 km reach of the braided Ahuriri River, New Zealand. Photographs used for DEM creation were acquired from a helicopter flying at 600 m and 800 m above ground level using a consumer grade 10.1 mega-pixel, non-metric digital camera, resulting in object space resolution imagery of 0.12 m and 0.16 m respectively. Point clouds for the two study reaches were generated using 147 and 224 photographs respectively, and were extracted automatically in an arbitrary coordinate system; RTK-GPS located ground control points (GCPs) were used to define a 3d non

  7. An efficient method to evaluate energy variances for extrapolation methods

    NASA Astrophysics Data System (ADS)

    Puddu, G.

    2012-08-01

    The energy variance extrapolation method consists of relating the approximate energies in many-body calculations to the corresponding energy variances and inferring eigenvalues by extrapolating to zero variance. The method needs a fast evaluation of the energy variances. For many-body methods that expand the nuclear wavefunctions in terms of deformed Slater determinants, the best available method for the evaluation of energy variances scales with the sixth power of the number of single-particle states. We propose a new method which depends on the number of single-particle orbits and the number of particles rather than the number of single-particle states. We discuss as an example the case of 4He using the chiral N3LO interaction in a basis consisting of up to 184 single-particle states.
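
    The quantities involved are standard: for a normalized trial state |Ψ⟩ the energy variance vanishes exactly on an eigenstate, and the extrapolation assumes an approximately linear approach to that limit (the linear form is the commonly used leading-order ansatz):

    ```latex
    \sigma^{2} \;=\; \langle \Psi | H^{2} | \Psi \rangle \;-\; \langle \Psi | H | \Psi \rangle^{2},
    \qquad
    E(\sigma^{2}) \;\approx\; E_{\mathrm{exact}} + c\,\sigma^{2}
    \;\xrightarrow[\;\sigma^{2}\to 0\;]{}\; E_{\mathrm{exact}} .
    ```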

  8. An assessment of TanDEM-X GlobalDEM over rural and urban areas

    NASA Astrophysics Data System (ADS)

    Koudogbo, Fifamè N.; Duro, Javier; Huber, Martin; Rudari, Roberto; Eddy, Andrew; Lucas, Richard

    2014-10-01

    A Digital Elevation Model (DEM) is a key input for the development of risk management systems. The main limitation of currently available DEMs is their low resolution. DEMs such as SRTM 90 m or ASTER are globally available free of charge, but offer limited use, for example, to flood modelers in most geographic areas. TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurement), the first bistatic SAR mission, can fill this gap. The mission objective is the generation of a consistent global digital elevation model with an unprecedented accuracy according to the HRTI-3 (High Resolution Terrain Information) specifications. The mission opens a new era in risk assessment. In the framework of ALTAMIRA INFORMATION research activities, the DIAPASON (Differential Interferometric Automated Process Applied to Survey Of Nature) processing chain has been successfully adapted to TanDEM-X CoSSC (Coregistered Slant Range Single Look Complex) data processing. In this study, the capability of CoSSC data for DEM generation is investigated. Within the on-going FP7 RASOR project (Rapid Analysis and Spatialisation Of Risk), the generated DEMs are compared with the Intermediate DEM derived from the TanDEM-X first global coverage. The results are presented and discussed.

  9. Further evaluation of traditional icing scaling methods

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    1996-01-01

    This report provides additional evaluations of two methods to scale icing test conditions; it also describes a hybrid technique for use when scaled conditions are outside the operating envelope of the test facility. The first evaluation is of the Olsen method, which can be used to scale the liquid-water content in icing tests, and the second is of the AEDC (Ruff) method, which is used when the test model is less than full size. Equations for both scaling methods are presented in the paper, and the methods were evaluated by performing icing tests in the NASA Lewis Icing Research Tunnel (IRT). The Olsen method was tested using 53 cm chord NACA 0012 airfoils. Tests covered liquid-water contents which varied by as much as a factor of 1.8. The Olsen method was generally effective in giving scale ice shapes which matched the reference shapes for these tests. The AEDC method was tested with NACA 0012 airfoils with chords from 18 cm to 53 cm. The 53 cm chord airfoils were used in reference tests, and 1/2 and 1/3 scale tests were made at conditions determined by applying the AEDC scaling method. The scale and reference airspeeds were matched in these tests. The AEDC method was found to provide fairly effective scaling for 1/2 size tests, but for 1/3 size models scaling was generally less effective. In addition to these two scaling methods, a hybrid approach was also tested in which the Olsen method was used to adjust the LWC after size was scaled using the constant Weber number method. This approach was found to be an effective way to test when scaled conditions would otherwise be outside the capability of the test facility.

  10. Assessment of Uncertainty Propagation from DEM's on Small Scale Typologically-Differentiated Landslide Susceptibility in Romania

    NASA Astrophysics Data System (ADS)

    Cosmin Sandric, Ionut; Chitu, Zenaida; Jurchescu, Marta; Malet, Jean-Philippe; Ciprian Margarint, Mihai; Micu, Mihai

    2015-04-01

    An increasing number of free and open-access global digital elevation models have become available in the past 15 years, and these DEMs have been widely used for the assessment of landslide susceptibility at medium and small scales. Even though the global vertical and horizontal accuracies of each DEM are known, what is still unknown is the uncertainty that propagates from the first and second derivatives of the DEMs, like slope gradient, into the final landslide susceptibility map. For the present study we focused on the assessment of the uncertainty propagated from the following digital elevation models: SRTM at 90 m spatial resolution, ASTERDEM at 30 m, EUDEM at 30 m, and the latest release of SRTM at 30 m. From each DEM dataset the slope gradient was generated and used in the landslide susceptibility analysis. A restricted number of spatial predictors were used for the landslide susceptibility assessment, represented by lithology, land-cover and slope, where slope is the only predictor that changes with each DEM. The study makes use of the first national landslide inventory (Micu et al., 2014), obtained by compiling literature data and personal or institutional landslide inventories. The landslide inventory contains more than 27,900 cases classified in three main categories: slides, flows and falls. The results comprise landslide susceptibility maps obtained from each DEM and from combinations of DEM datasets. Maps of uncertainty propagation at country level, differentiated by topographic region in Romania and by landslide typology (slides, flows and falls), are obtained for each DEM dataset and for their combinations. An objective evaluation of each DEM dataset and a final map of landslide susceptibility with the associated uncertainty are provided.

  11. APS Removal And Void Filling For DEM Reconstruction From High-Resolution INSAR Data

    NASA Astrophysics Data System (ADS)

    Liao, Mingsheng; Jiang, Houjun; Wang, Teng; Zhang, Lu

    2012-01-01

    The quality and accuracy of DEMs derived from repeat-pass InSAR is limited by atmospheric phase screen (APS) differences and decorrelation between SAR images. In this paper, we present a compromise, but effective, approach to avoid DEM gaps and remove height errors induced by the atmosphere. Existing low-resolution DEMs are used as external data to improve the quality of the interferometric DEM. Our approach focuses on two aspects: 1) estimate the APS from a differential interferogram with a low-pass filter in the frequency domain, and remove the height errors caused by the APS; 2) fill data voids and calibrate the height with an external DEM. The proposed method has been applied to high-resolution COSMO-SkyMed Tandem data with a one-day temporal baseline over Mt. Qilian in north-western China. The resultant DEM has been validated in comparison with an officially issued 1:50,000 DEM. Our preliminary result shows that atmospheric artifacts and data voids have been removed effectively.
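
    Step 1 of the approach is a classical scale separation: the APS is the spatially long-wavelength part of the unwrapped differential phase. A minimal sketch using a Gaussian low-pass filter in the frequency domain (the Gaussian shape and the cutoff are illustrative choices, not necessarily the paper's exact filter):

    ```python
    import numpy as np

    def estimate_aps(unwrapped_phase, cutoff_km, pixel_km):
        """Estimate the atmospheric phase screen (APS) as the low-frequency
        part of an unwrapped differential interferogram."""
        ny, nx = unwrapped_phase.shape
        fy = np.fft.fftfreq(ny, d=pixel_km)       # cycles per km
        fx = np.fft.fftfreq(nx, d=pixel_km)
        fr2 = fx[None, :] ** 2 + fy[:, None] ** 2
        fc = 1.0 / cutoff_km                       # cutoff frequency
        H = np.exp(-fr2 / (2.0 * fc ** 2))         # Gaussian low-pass
        aps = np.real(np.fft.ifft2(np.fft.fft2(unwrapped_phase) * H))
        return aps  # subtract from the interferogram before height conversion
    ```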

  12. Graphical methods for evaluating covering arrays

    SciTech Connect

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2015-08-10

    Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, not that the appearance of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage when comparing and selecting between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.
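
    The underlying quantity being visualized is, per pair of columns, the fraction of level combinations that actually appear. A minimal sketch of computing it for a strength-2 array with a common number of levels per factor (the function name and the uniform-levels assumption are illustrative):

    ```python
    from itertools import combinations, product

    def pairwise_coverage(array, levels):
        """For each pair of columns in `array` (a list of rows of level
        indices 0..levels-1), return the fraction of all level pairs
        that appear in at least one row."""
        ncols = len(array[0])
        needed = set(product(range(levels), repeat=2))
        report = {}
        for c1, c2 in combinations(range(ncols), 2):
            seen = {(row[c1], row[c2]) for row in array}
            report[(c1, c2)] = len(seen & needed) / len(needed)
        return report
    ```

    A true covering array returns 1.0 for every column pair; for near-covering designs the per-pair fractions are exactly the kind of values one would plot and compare across candidate arrays.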

  13. Energy efficiency assessment methods and tools evaluation

    SciTech Connect

    McMordie, K.L.; Richman, E.E.; Keller, J.M.; Dixon, D.R.

    1994-08-01

    Many different methods of assessing the energy savings potential at federal installations and identifying attractive projects for capital investment have been used by the different federal agencies. These methods range from high-level estimating tools to detailed design tools, both manual and software-assisted. They have different purposes and provide results that are used for different parts of the project identification and implementation process. Seven different assessment methods are evaluated in this study. These methods were selected by the program managers at the DoD Energy Policy Office and the DOE Federal Energy Management Program (FEMP). Each of the methods was applied to similar buildings at Bolling Air Force Base (AFB), unless it was inappropriate or the method was designed to make an installation-wide analysis rather than focusing on particular buildings. Staff at Bolling AFB controlled the collection of data.

  14. Veterinary and human vaccine evaluation methods

    PubMed Central

    Knight-Jones, T. J. D.; Edmond, K.; Gubbins, S.; Paton, D. J.

    2014-01-01

    Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines. PMID:24741009

  15. Improving the TanDEM-X DEM for flood modelling using flood extents from Synthetic Aperture Radar images.

    NASA Astrophysics Data System (ADS)

    Mason, David; Trigg, Mark; Garcia-Pintado, Javier; Cloke, Hannah; Neal, Jeffrey; Bates, Paul

    2015-04-01

    Many floodplains in the developed world have now been imaged with high-resolution airborne LiDAR or InSAR, giving accurate DEMs that facilitate detailed flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high-resolution TanDEM-X World DEM. In a parallel development, increasing use is now being made of flood extents derived from high-resolution SAR images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents: improving the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving the DEM for future flood modelling studies in that area. The method is based on the fact that, for larger rivers, the water elevation changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as a sample of heights with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate height estimate. While this will result in a reduction of the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be no higher than the refined heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must be no lower than the refined heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the
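
    The core of the refinement described above is local averaging along a quasi-contour. A minimal sketch, assuming the DEM heights have already been sampled in order along one waterline (the window size is an illustrative parameter; with n independent samples the random error drops roughly as 1/sqrt(n)):

    ```python
    import numpy as np

    def refine_waterline_heights(heights, half_window=5):
        """Replace each height sampled along a waterline by the mean of a
        window of neighbours along the line: the waterline is treated as
        a local quasi-contour, so the samples share a population mean and
        averaging shrinks the random height error."""
        h = np.asarray(heights, float)
        k = 2 * half_window + 1
        kernel = np.ones(k) / k
        smoothed = np.convolve(h, kernel, mode="same")
        # renormalise the edges, where fewer samples fall in the window
        norm = np.convolve(np.ones_like(h), kernel, mode="same")
        return smoothed / norm
    ```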

  16. Scenario-Based Validation of Moderate Resolution DEMs Freely Available for Complex Himalayan Terrain

    NASA Astrophysics Data System (ADS)

    Singh, Mritunjay Kumar; Gupta, R. D.; Snehmani; Bhardwaj, Anshuman; Ganju, Ashwagosha

    2016-02-01

    Accuracy of the Digital Elevation Model (DEM) affects the accuracy of various geoscience and environmental modelling results. This study evaluates the accuracies of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM Version-2 (GDEM V2), the Shuttle Radar Topography Mission (SRTM) X-band DEM and the NRSC Cartosat-1 DEM V1 (CartoDEM). A high-resolution (1 m) photogrammetric DEM (ADS80 DEM), having a high absolute accuracy [1.60 m linear error at 90% confidence (LE90)] and resampled to a 30 m cell size, was used as reference. The overall root mean square error (RMSE) in vertical accuracy was 23, 73, and 166 m and the LE90 was 36, 75, and 256 m for the ASTER GDEM V2, SRTM X-band DEM and CartoDEM, respectively. A detailed error analysis was performed for individual as well as combinations of different classes of aspect, slope, land-cover and elevation zones for the study area. For the ASTER GDEM V2, forest areas with North-facing slopes (0°-5°) in the 4th elevation zone (3773-4369 m) showed the minimum LE90 of 0.99 m, and barren areas with East-facing slopes (>60°) in the 2nd elevation zone (2581-3177 m) showed the maximum LE90 of 166 m. For the SRTM DEM, pixels with South-East-facing slopes of 0°-5° in the 4th elevation zone covered with forest showed the least LE90 of 0.33 m, and the maximum LE90 of 521 m was observed in barren areas with North-East-facing slopes (>60°) in the 4th elevation zone. In the case of the CartoDEM, snow pixels in the 2nd elevation zone with South-East-facing slopes of 5°-15° showed the least LE90 of 0.71 m, and the maximum LE90 of 1266 m was observed for snow pixels in the 3rd elevation zone (3177-3773 m) with South-facing slopes of 45°-60°. These results can be highly useful for researchers using DEM products in various modelling exercises.

  17. A description of rotations for DEM models of particle systems

    NASA Astrophysics Data System (ADS)

    Campello, Eduardo M. B.

    2015-06-01

    In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called the Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
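
    For reference, the classical Rodrigues rotation formula for an axis-angle vector reads as below (the paper's Rodrigues parameterization differs in detail from a plain axis-angle vector, so this is an illustration of the vector-rotation idea rather than the paper's exact scheme):

    ```python
    import numpy as np

    def rodrigues_rotate(v, alpha):
        """Rotate vector v by the rotation vector alpha (direction = axis,
        magnitude = angle in radians) via the Rodrigues rotation formula:
        v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))."""
        theta = np.linalg.norm(alpha)
        v = np.asarray(v, float)
        if theta < 1e-12:                      # no rotation
            return v
        k = np.asarray(alpha, float) / theta   # unit rotation axis
        return (v * np.cos(theta)
                + np.cross(k, v) * np.sin(theta)
                + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    ```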

  18. Influence of the external DEM on PS-InSAR processing and results on Northern Appennine slopes

    NASA Astrophysics Data System (ADS)

    Bayer, B.; Schmidt, D. A.; Simoni, A.

    2014-12-01

    We present an InSAR analysis of slow-moving landslides in the Northern Apennines, Italy, and assess the dependence of the results on the choice of DEM. In recent years, advanced processing techniques for synthetic aperture radar interferometry (InSAR) have been applied to measure slope movements. The persistent scatterers (PS-InSAR) approach is probably the most widely used, and some codes are now available in the public domain. The Stanford Method for Persistent Scatterers (StaMPS) has been successfully used to analyze landslide areas. One problematic step in the processing chain is the choice of an external DEM that is used to model and remove the topographic phase in a series of interferograms in order to obtain the phase contribution caused by surface deformation. The choice is not trivial, because the PS-InSAR results differ significantly in terms of PS identification, positioning, and the resulting deformation signal. We use four different DEMs to process a set of 18 ASAR (Envisat) scenes over a mountain area (~350 km2) of the Northern Apennines of Italy, using StaMPS. Slow-moving landslides control the evolution of the landscape and cover approximately 30% of the territory. Our focus in this presentation is to evaluate the influence of DEM resolution and accuracy by comparing PS-InSAR results. On an areal basis, we perform a statistical analysis of displacement time-series to make the comparison. We also consider two case studies to illustrate the differences in terms of PS identification, number and estimated displacements. It is clearly shown that DEM accuracy positively influences the number of PS, while line-of-sight rates differ from case to case and can result in deformation signals that are difficult to interpret. We also take advantage of statistical tools to analyze the obtained time-series datasets for the whole study area. Results indicate differences in the style and amount of displacement that can be related to the accuracy of the employed DEM.

  19. Evaluating Composition Skills: A Method and Example.

    ERIC Educational Resources Information Center

    McLean, James E.; Chissom, Brad S.

    Holistic evaluation is a reliable, valid, and cost-effective alternative to the usual mechanical assessment of writing. Writing samples are scored on a five-point scale against an overall impression of development, organization, and coherence. The method was applied to the Communication Activities Skills Project (CASP) for grades 3-12. Writing…

  20. Glacier Volume Change Estimation Using Time Series of Improved Aster Dems

    NASA Astrophysics Data System (ADS)

    Girod, Luc; Nuth, Christopher; Kääb, Andreas

    2016-06-01

    Volume change data are critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system on board the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivaled though not fully exploited for change analysis due to limited data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficient (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter, so that the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated, and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components, so that the latter seems not to be

  1. DEM Simulation of Rotational Disruption of Rubble-Pile Asteroids

    NASA Astrophysics Data System (ADS)

    Sanchez, Paul; Scheeres, D. J.

    2010-10-01

    We report on our study of rotation-induced disruption of a self-gravitating granular aggregate using a Discrete Element Method (DEM) granular dynamics code, a class of simulation commonly used in the granular mechanics community. Specifically, we simulate the behavior of a computer-simulated asteroid when subjected to an array of rotation rates that cross its disruption limit. The code used to carry out these studies implements a soft-sphere DEM method as applied to granular systems. In addition, a novel algorithm to calculate self-gravitating forces, which makes use of the DEM static grid, has been developed and implemented in the code. By using a DEM code, it is possible to model a poly-disperse aggregate with a specified size-distribution power law, incorporate contact forces such as dry cohesion and friction, and compute internal stresses within the gravitational aggregate. This approach to the modeling of gravitational aggregates is complementary to, and distinctly different from, other approaches reported in the literature. The simulations use both 2D and 3D modeling for analysis. One aim of this work is to understand the basic processes and dynamics of aggregates during the disruption process. We have used these simulations to understand how to form a contact binary that mimics observed asteroid shapes, how to accelerate the rotation rate of the aggregate so that it has enough time to reshape and find a stable configuration, and how to analyze a system with an occasionally changing shape. From a more physical point of view, we have focused on understanding the dynamics of the reshaping process, the evolution of internal stresses during this reshaping, and finding the critical disruption angular velocity. This research was supported by a grant from NASA's PG&G Program: NNX10AJ66G

  2. Evaluating Sleep Disturbance: A Review of Methods

    NASA Technical Reports Server (NTRS)

    Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus, or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.

  3. Collection of medical drug information in pharmacies: Drug Event Monitoring (DEM) in Japan.

    PubMed

    Hayashi, Sei-ichiro; Nanaumi, Akira; Akiba, Yasuji; Komiyama, Takako; Takeuchi, Koichi

    2005-07-01

    To establish a system for collecting and reporting information from community pharmacists, such as that on adverse effects, the Japan Pharmaceutical Association (JPA) conducts Drug Event Monitoring (DEM). In fiscal year 2002, a survey was carried out to clarify the incidence of sleepiness due to antiallergic drugs. The investigated active ingredients were ebastine, fexofenadine hydrochloride, cetirizine hydrochloride, and loratadine. Community pharmacists asked the following question of patients who visited their pharmacies: "Have you ever become sleepy after taking this drug?" During the 4-week survey period, reports on 94,256 cases were collected. To evaluate the incidence of sleepiness, we analyzed cases in which reports indicated the absence of concomitant oral drugs and drug use in conformity with the dose and method described in the package inserts. The incidence of sleepiness differed significantly among the drugs (χ²-test, p<0.001). The observed incidences of sleepiness due to the drugs (8.8-20.5%) were higher than those described in each package insert (1.8-6.35%). This may be because an active question was used ("Have you ever become sleepy after taking this drug?"). Active intervention by pharmacists may thus be useful for collecting more information on patient safety and quality of life. In addition, the pharmacists were asked to report events other than "sleepiness" in the free-description column of the report. Some symptoms not described in the package inserts were reported, suggesting that DEM may lead to the discovery of new adverse effects. These results suggest that community pharmacists have a good opportunity to collect information in DEM, and that safety information such as that on adverse effects can be obtained from pharmacies. PMID:15997212

  4. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study, focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  5. Automatic Detection and Boundary Extraction of Lunar Craters Based on LOLA DEM Data

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ling, ZongCheng; Zhang, Jiang; Wu, ZhongChen

    2015-07-01

    Impact-induced circular structures, known as craters, are the most obvious geographic and geomorphic features on the Moon. Studies of lunar craters' patterns and spatial distributions play an important role in understanding the geologic processes of the Moon. In this paper, we propose a method based on digital elevation model (DEM) data from the Lunar Orbiter Laser Altimeter to detect lunar craters automatically. Firstly, the DEM data of the study areas are converted to a series of spatial fields having different scales, in which all overlapping depressions are detected in order (larger depressions first, then smaller ones). Then, every depression's true boundary is calculated by Fourier expansion and shape parameters are computed. Finally, we recognize craters from training sets manually and build a binary decision tree to automatically classify the identified depressions into craters and non-craters. In addition, our crater-detection method can provide a fast and reliable evaluation of the ages of lunar geologic units, which is of great significance in lunar stratigraphy studies as well as global geologic mapping.
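
    The boundary step can be pictured as follows: sample the depression outline as a radius function r(θ) at equally spaced angles, keep the low-order Fourier harmonics as the smooth boundary estimate, and read shape parameters off the harmonic amplitudes. A minimal sketch (the harmonic cutoff and the ellipticity measure are illustrative choices, not the paper's exact parameters):

    ```python
    import numpy as np

    def smooth_boundary(radii, n_harmonics=8):
        """Keep only the first n_harmonics of r(theta) sampled at equally
        spaced angles: a Fourier-expansion estimate of the boundary."""
        C = np.fft.rfft(radii)
        C[n_harmonics + 1:] = 0
        return np.fft.irfft(C, n=len(radii))

    def ellipticity(radii):
        """Relative amplitude of the 2nd harmonic: 0 for a circle, larger
        for elongated (less crater-like) depressions."""
        c = np.fft.rfft(radii) / len(radii)
        return 2.0 * np.abs(c[2]) / c[0].real
    ```

    Shape parameters of this kind are natural inputs to the decision-tree classification described above.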

  6. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    The rapid development of road construction creates demand for fast, high-quality methods of earthwork evaluation. Dynamic methods are now adopted in numerous civil engineering sectors. In particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for examination of the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results can be obtained immediately after measurement, and they are thus suitable in cases when construction works have to be performed in a short period of time.

  7. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ∼0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ∼2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ∼0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  8. Ice volumes in the Himalayas and Karakoram: evaluating different assessment methods

    NASA Astrophysics Data System (ADS)

    Frey, Holger; Machguth, Horst; Huggel, Christian; Bajracharya, Samjwal; Bolch, Tobias; Kulkarni, Anil; Linsbauer, Andreas; Stoffel, Markus; Salzmann, Nadine

    2013-04-01

    Knowledge of the volumes and ice thickness distribution of Himalayan and Karakoram (HK) glaciers is required for assessing the future evolution and estimating the sea-level rise potential of these ice bodies, as well as predicting impacts on the hydrological cycle. As field measurements of glacier thickness are sparse and restricted to individual glaciers, ice thickness and volume assessments on a larger scale have to rely strongly on modeling approaches. Here, we estimate the ice volumes of all glaciers in the HK region using three different approaches, compare the results, and examine related uncertainties and variability. The approaches used include volume-area relations with different scaling parameters, a slope-dependent thickness estimation, and a new approach to model the ice-thickness distribution based only on digital glacier outlines and a digital elevation model (DEM). By applying different combinations of model parameters and by altering glacier areas by ±5%, uncertainties related to the different methods are evaluated. Glacier outlines were taken from the Randolph Glacier Inventory (RGI) and the International Centre for Integrated Mountain Development (ICIMOD), with minor changes and additions in some regions; topographic information was obtained from the Shuttle Radar Topography Mission (SRTM) DEM for all methods. The volume-area scaling approach resulted in glacier volumes ranging from 3632 to 6455 km3, depending on the scaling parameters used. The slope-dependent thickness estimation generated a total ice volume of 3335 km3, and a total volume of 2955 km3 resulted from the modified ice-thickness distribution model. Results of the distributed ice thickness modeling are clearly at the lowermost bound of previous estimates, and possibly hint at an overestimation of the potential contribution of HK glaciers to sea-level rise. The range of results also indicates that volume estimations are subject to large uncertainties. Although they are
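
    The first of the three approaches is compact enough to state in two lines. A minimal sketch using one commonly cited parameter pair (c = 0.034, γ = 1.375, for areas in km² and volumes in km³; the study varies these parameters precisely because results are so sensitive to them):

    ```python
    def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
        """Volume-area scaling V = c * A**gamma for a single glacier."""
        return c * area_km2 ** gamma

    # Regional totals are sums over the inventory, glacier by glacier --
    # the scaling holds per glacier, not for aggregated areas:
    # total = sum(glacier_volume_km3(a) for a in inventory_areas_km2)
    ```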

  9. Creating improved ASTER DEMs over glacierized terrain

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2006-12-01

    Digital elevation models (DEMs) produced from ASTER stereo imagery over glacierized terrain frequently contain data voids, which some software packages fill by interpolation. Even when interpolation is applied, the results are often not accurate enough for studies of glacier thickness changes. DEMs are created by automatic cross-correlation between the image pairs, and rely on spatial variability in the digital number (DN) values for this process. Voids occur in radiometrically homogeneous regions, such as glacier accumulation areas covered with uniform snow, due to lack of correlation. The same property that leads to lack of correlation makes possible the derivation of elevation information from photoclinometry, also known as shape-from-shading. We demonstrate a technique to produce improved DEMs from ASTER data by combining the results from conventional cross-correlation DEM-generation software with elevation information produced from shape-from-shading in the accumulation areas of glacierized terrain. The resulting DEMs incorporate more information from the imagery, and the filled voids more accurately represent the glacier surface. This will allow for more accurate determination of glacier hypsometry and thickness changes, leading to better predictions of response to climate change.
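
    The photoclinometry idea can be sketched in one dimension: for a Lambertian snow surface, pixel brightness fixes the local incidence angle, incidence fixes the along-sun slope, and slopes integrate to a height profile. A minimal illustration under those stated assumptions (all names are illustrative; real implementations work in 2-D and handle albedo and atmospheric terms):

    ```python
    import numpy as np

    def photoclinometry_profile(dn, sun_zenith_deg, dx, dn_flat):
        """1-D shape-from-shading along the sun azimuth for a Lambertian
        surface. `dn_flat` is the brightness a horizontal facet would
        have; returns len(dn)+1 relative heights along the profile."""
        sun_zen = np.radians(sun_zenith_deg)
        # cos(incidence) scaled from the flat-terrain brightness
        cos_i = np.clip(dn / dn_flat * np.cos(sun_zen), -1.0, 1.0)
        slope = sun_zen - np.arccos(cos_i)   # positive tilts toward the sun
        return np.concatenate([[0.0], np.cumsum(np.tan(slope) * dx)])
    ```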

  10. Morphological changes at Mt. Etna detected by TanDEM-X

    NASA Astrophysics Data System (ADS)

    Wegmuller, Urs; Bonforte, Alessandro; De Beni, Emanuela; Guglielmino, Francesco; Strozzi, Tazio

    2014-05-01

    We compared the 2012 TanDEM-X model with the 2000 SRTM DEM in order to evaluate the morphological changes that occurred on the volcano during the 12-year interval. The pixel size of the SRTM DEM is about 90 m, and we resampled the TanDEM-X model to fit this value. The results show that most of the variations occurred in the Valle del Bove and in the summit crater areas. In order to compare DEMs with the same pixel size, we performed a further comparison with a 5 m ground resolution optical DEM, produced in 2004 and covering only the summit area. The variations in topography were compared with ground mapping surveys, confirming a good correlation with the spatial extent of the lava flows and pyroclastic deposits emplaced on Mt. Etna in the last seven years. The comparison between the two DEMs (2004-2012) allows the amount of volcanics emitted to be calculated and the growth and development of the New South East Crater (NSEC) to be clearly monitored. TanDEM-X is a useful tool for monitoring volcanic areas characterized by quite frequent activity (a paroxysm every 5-10 days), such as Mt. Etna, especially when that activity is concentrated in areas that are not easily accessible.

  11. Hydrologic validation of a structure-from-motion DEM derived from low-altitude UAV imagery

    NASA Astrophysics Data System (ADS)

    Steiner, Florian; Marzolff, Irene; d'Oleire-Oltmanns, Sebastian

    2015-04-01

    The increasing ease of use of current Unmanned Aerial Vehicles (UAVs) and 3D image processing software has spurred the number of applications relying on high-resolution topographic datasets. Of particular significance in this field is "structure from motion" (SfM), a photogrammetric technique used to generate low-cost digital elevation models (DEMs) for erosion budgeting, measuring of glaciers and lava flows, archaeological applications and others. It was originally designed to generate 3D models of buildings based on unordered collections of images and has become increasingly common in geoscience applications during the last few years. Several studies on the accuracy of this technique already exist, in which the SfM data are mostly compared with LiDAR-generated terrain data. The results are mainly positive, indicating that the technique is suitable for such applications. This work aims at validating very high resolution SfM DEMs with a different approach: not the original elevation data, but terrain-related hydrological and geomorphometric parameters derived from the DEM. The study site chosen for this analysis is an abandoned agricultural field near the city of Taroudant, in the semi-arid southern part of Morocco. The site is characterized by aggressive rill and gully erosion and is - apart from sparsely scattered shrub cover - mainly featureless. An area of 5.7 ha, equipped with 30 high-precision ground control points (GCPs), was covered with an unmanned aerial vehicle (UAV) at two different heights (85 and 170 m). A selection of 160 images was used to generate several high-resolution DEMs (2 and 5 cm resolution) of the area using the fully automated SfM software AGISOFT PhotoScan. For comparison purposes, a conventional photogrammetry-based workflow using the Leica Photogrammetry Suite was used to generate a DEM with a resolution of 5 cm (LPS DEM). The evaluation is done by comparison of the SfM DEM with the derived orthoimages and the LPS DEM

  12. Development of a 'bare-earth' SRTM DEM product

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Paiva, Rodrigo; Durand, Michael; Alsdorf, Douglas; Bates, Paul

    2015-04-01

    We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hydraulic modelling, as DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce vegetation contamination and make it useful for hydrodynamic modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction based on information about canopy height and density. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third apply parameters regionalised by climatic zone or vegetation type, respectively. We also tested two canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculated the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compared their performance. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas: the overall mean bias is reduced by between 75 and 92%, from 4.94 m to as low as 0.40 m, and the overall standard deviation by between 29 and 33%, from 7.12 m to as low as 4.80 m. As expected, improvements are greater in areas with denser vegetation. The final 'bare-earth' SRTM dataset is available at 3 arc-second resolution with lower vertical height errors and less noise than the original SRTM product.
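
    A minimal sketch of a spatially varying vegetation correction in the spirit of the approach described; the penetration parameter k stands in for the paper's global or regionalised parameters, and all array names are assumptions of this illustration:

    ```python
    import numpy as np

    def bare_earth(srtm, canopy_height, canopy_density, k=0.5):
        """Remove vegetation bias by subtracting a fraction of canopy height,
        scaled by canopy density (k is an assumed, tunable penetration factor)."""
        return srtm - k * canopy_height * canopy_density
    ```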

  13. Efficient parallel CFD-DEM simulations using OpenMP

    NASA Astrophysics Data System (ADS)

    Amritkar, Amit; Deb, Surya; Tafti, Danesh

    2014-01-01

    The paper describes parallelization strategies for the Discrete Element Method (DEM) used for simulating dense particulate systems coupled to Computational Fluid Dynamics (CFD). While the field equations of CFD are best parallelized by spatial domain decomposition, the N-body particulate phase is best parallelized over the number of particles; when the two are coupled, both modes are needed for efficient parallelization. It is shown that under these requirements, OpenMP thread-based parallelization has advantages over MPI processes. Two representative examples, fairly typical of dense fluid-particulate systems, are investigated, including validation of the DEM-CFD and thermal-DEM implementations against experiments. Fluidized bed calculations are performed on beds with uniform particle loading, parallelized with MPI and OpenMP. It is shown that as the number of processing cores and the number of particles increase, the communication overhead of building ghost particle lists at processor boundaries dominates the time to solution, and OpenMP, which does not require this step, is about twice as fast as MPI. In rotary kiln heat transfer calculations, which are characterized by spatially non-uniform particle distributions, the low overhead of switching the parallelization mode in OpenMP eliminates the load imbalances but introduces increased overheads in fetching non-local data. In spite of this, it is shown that OpenMP is between 50 and 90% faster than MPI.

  14. Validation of DEM prediction for granular avalanches on irregular terrain

    NASA Astrophysics Data System (ADS)

    Mead, Stuart R.; Cleary, Paul W.

    2015-09-01

    Accurate numerical simulation can provide crucial information for a greater understanding of destructive granular mass movements such as rock avalanches, landslides and pyroclastic flows. It enables more informed and relatively low-cost investigation of significant risk factors, mitigation strategy effectiveness, and sensitivity to initial conditions, material or soil properties. In this paper, a granular avalanche experiment from the literature is reanalyzed and used as a basis to assess the accuracy of discrete element method (DEM) predictions of avalanche flow. Discrete granular approaches such as DEM simulate the motion and collisions of individual particles and are useful for identifying and investigating the controlling processes within an avalanche. Using a superquadric shape representation, DEM simulations were found to accurately reproduce transient and static features of the avalanche. The effect of material properties on the shape of the avalanche deposit was investigated: the simulated deposits were found to be sensitive to particle shape and friction, with particle shape modulating the sensitivity to friction. The importance of particle shape, coupled with its effect on the sensitivity to friction, highlights the need to quantify and include particle shape effects in numerical modeling of granular avalanches.

  15. Graphical methods for evaluating covering arrays

    DOE PAGESBeta

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2016-06-01

    Covering arrays relax the condition of orthogonal arrays by requiring only that all combinations of levels be covered, without requiring that the appearance of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage when comparing and selecting between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.
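
    To make the covering condition concrete, here is a small check (an illustration, not the paper's code) that a design covers every t-way combination of the levels present in each selected column:

    ```python
    from itertools import combinations, product

    def is_covering(rows, t=2):
        """True if every t-way combination of column levels occurs in some row."""
        n_cols = len(rows[0])
        for idx in combinations(range(n_cols), t):
            seen = {tuple(r[i] for i in idx) for r in rows}
            levels = [sorted({r[i] for r in rows}) for i in idx]
            if not set(product(*levels)) <= seen:
                return False
        return True
    ```

    Unlike an orthogonal array, a covering array only needs each combination to appear at least once, not an equal number of times.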

  16. Field evaluation of a VOST sampling method

    SciTech Connect

    Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.

    1994-12-31

    The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent) that is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes to avoid analytical difficulties associated with the analysis of the sequential-bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as the source of test compounds. Statistical tests of the comparability of the methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.

  17. Electromagnetic Imaging Methods for Nondestructive Evaluation Applications

    PubMed Central

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, in hardware and software dedicated to imaging and image processing, and in materials science have greatly expanded the application fields and increased the sophistication of system designs, making the potential of electromagnetic NDE imaging seem almost unlimited. This review provides a comprehensive summary of research on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693

  18. ArcGeomorphometry: A toolbox for geomorphometric characterisation of DEMs in the ArcGIS environment

    NASA Astrophysics Data System (ADS)

    Rigol-Sanchez, Juan P.; Stuart, Neil; Pulido-Bosch, Antonio

    2015-12-01

    A software tool is described for the extraction of geomorphometric land-surface variables and features from Digital Elevation Models (DEMs). The ArcGeomorphometry Toolbox consists of a series of Python/Numpy processing functions, presented through an easy-to-use graphical menu for the widely used ArcGIS package. Although many GIS packages provide some operations for analysing DEMs, the methods are often only partially implemented and can be difficult to find and to use effectively. Since the results of automated characterisation of landscapes from DEMs are influenced by the extent being considered, the resolution of the source DEM and the size of the kernel (analysis window) used for processing, we have developed a tool that allows GIS users to flexibly apply several multi-scale analysis methods to parameterise and classify a DEM into discrete land-surface units. Users can control the threshold values for land-surface classifications, and the size of the processing kernel can be used to identify land-surface features across a range of landscape scales. The pattern of land-surface units from each classification attempt is displayed immediately and can then be processed in the GIS alongside additional data to assist with visual assessment and comparison of a series of results. The functionality of the ArcGeomorphometry toolbox is described using an example DEM.
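
    As an illustration of how the size of the processing kernel controls the landscape scale of a derived variable (a sketch, not the toolbox's own code), local relief can be computed over moving windows of increasing size:

    ```python
    from scipy import ndimage

    def local_relief(dem, kernel=3):
        """Relief (max minus min elevation) within a square moving window;
        larger kernels characterise broader landscape scales."""
        return (ndimage.maximum_filter(dem, size=kernel)
                - ndimage.minimum_filter(dem, size=kernel))
    ```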

  19. Uncertainty of SWAT model at different DEM resolutions in a large mountainous watershed.

    PubMed

    Zhang, Peipei; Liu, Ruimin; Bao, Yimeng; Wang, Jiawei; Yu, Wenwen; Shen, Zhenyao

    2014-04-15

    The objective of this study was to enhance understanding of the sensitivity of the SWAT model to the resolution of Digital Elevation Models (DEMs) based on the analysis of multiple evaluation indicators. The Xiangxi River, a large tributary of the Three Gorges Reservoir in China, was selected as the study area. A range of 17 DEM spatial resolutions, from 30 to 1000 m, was examined, and the annual and monthly model outputs based on each resolution were compared. The following results were obtained: (i) sediment yield was greatly affected by DEM resolution; (ii) the prediction of dissolved oxygen load was significantly affected by DEM resolutions coarser than 500 m; (iii) Total Nitrogen (TN) load was not greatly affected by DEM resolution; (iv) Nitrate Nitrogen (NO₃-N) and Total Phosphorus (TP) loads were slightly affected by DEM resolution; and (v) flow and Ammonia Nitrogen (NH₄-N) load were essentially unaffected by DEM resolution. Flow and dissolved oxygen load decreased more significantly in the dry season than in the wet and normal seasons. Excluding flow and dissolved oxygen, the uncertainties of the other Hydrology/Non-point Source (H/NPS) pollution indicators were greater in the wet season than in the dry and normal seasons. Considering the temporal distribution of uncertainties, the optimal DEM resolution was 30-200 m for flow, 30-100 m for sediment and TP, 30-300 m for dissolved oxygen and NO₃-N, 30-70 m for NH₄-N, and 30-150 m for TN. PMID:24509347
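
    One simple way to emulate coarser DEM resolutions for this kind of sensitivity test (a sketch; the study itself resampled the source DEM with GIS tools) is block averaging by an integer factor:

    ```python
    import numpy as np

    def block_average(dem, factor):
        """Coarsen a DEM by an integer factor via block averaging."""
        ny = (dem.shape[0] // factor) * factor
        nx = (dem.shape[1] // factor) * factor
        blocks = dem[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
        return blocks.mean(axis=(1, 3))
    ```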

  20. Comparative evaluation of pelvic allograft selection methods.

    PubMed

    Bousleiman, Habib; Paul, Laurent; Nolte, Lutz-Peter; Reyes, Mauricio

    2013-05-01

    This paper presents a firsthand comparative evaluation of three existing methods for selecting a suitable allograft from a bone storage bank. The three examined methods are manual selection, automatic volume-based registration and automatic surface-based registration. Although the methods were originally published for different bones, they were adapted so that they could be applied systematically to the same data set of hemi-pelvises. A thorough experiment was designed and carried out to highlight the advantages and disadvantages of each method. The methods were applied to the whole pelvis and to smaller fragments, producing a realistic set of clinical scenarios. Clinically relevant criteria were used for the assessment, such as surface distances and the quality of the junctions between donor and recipient. The results showed that both automatic methods outperform their manual counterpart. Additional advantages of the surface-based method are its lower computational time requirements and the greater contact surfaces where the donor meets the recipient. PMID:23299829

  1. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials. PMID:22416879

  2. Evaluation of Alternate Surface Passivation Methods (U)

    SciTech Connect

    Clark, E

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of an initially 50% H₂ / 50% D₂ gas mixture over time using mass spectroscopy. Commercial passivation by electropolishing appears to produce surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may not produce surfaces free of catalytic activity for the isotope exchange reaction H₂ + D₂ → 2HD. These methods should not be considered as replacements for the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  3. Arthroscopic proficiency: methods in evaluating competency

    PubMed Central

    2013-01-01

    Background: The current paradigm of arthroscopic training lacks objective evaluation of technical ability, and its adequacy is concerning given the accelerating complexity of the field. To combat these insufficiencies, emphasis is shifting towards skill acquisition outside the operating room and sophisticated assessment tools. We reviewed (1) the validity of cadaver and surgical simulation in arthroscopic training, (2) the role of psychomotor analysis and arthroscopic technical ability, (3) the validated assessment tools available to evaluate technical competency, and (4) the quantification of arthroscopic proficiency. Methods: The Medline and Embase databases were searched for published articles in the English literature pertaining to arthroscopic competence, arthroscopic assessment and evaluation, and objective measures of arthroscopic technical skill. Abstracts were independently evaluated; exclusion criteria removed articles outside the scope of knee and shoulder arthroscopy as well as original articles about specific therapies, outcomes and diagnoses, leaving 52 articles cited in this review. Results: Simulated arthroscopic environments exhibit high levels of internal validity and consistency for simple arthroscopic tasks; however, the ability to transfer complex skills to the operating room has not yet been established. Instrument and force trajectory data can discriminate between levels of technical ability for basic arthroscopic parameters and may serve as useful adjuncts to more comprehensive techniques. There is a need for arthroscopic assessment tools for standardized evaluation and objective feedback on technical skills, yet few comprehensive instruments exist, especially for the shoulder. Opinion on the arthroscopic experience required to attain proficiency remains guarded, and few governing bodies specify absolute quantities. Conclusions: Further validation is required to demonstrate the transfer of complex arthroscopic skills from simulated environments to the

  4. Alternative haplotype construction methods for genomic evaluation.

    PubMed

    Jónás, Dávid; Ducrocq, Vincent; Fouilloux, Marie-Noëlle; Croiseau, Pascal

    2016-06-01

    Genomic evaluation methods today use single nucleotide polymorphisms (SNP) as genomic markers to trace quantitative trait loci (QTL), and most genomic prediction procedures use biallelic SNP markers. However, SNP can be combined into short, multiallelic haplotypes that can improve genomic prediction due to higher linkage disequilibrium between the haplotypes and the linked QTL. The aim of this study was to develop a method to identify haplotypes that can be expected to be superior in genomic evaluation compared with either SNP or other haplotypes of the same size. We first identified the SNP (termed QTL-SNP) from the bovine 50K SNP chip that had the largest effect on the analyzed trait. It was assumed that these SNP were not the causative mutations and merely indicated the approximate location of the QTL. Haplotypes of 3, 4 or 5 SNP were selected from short genomic windows surrounding these markers to capture the effect of the QTL. The two methods described in this paper aim at selecting the optimal haplotype for genomic evaluation; they assume that if an allele has a high frequency, its effect can be accurately predicted. These methods were tested in a classical validation study using a dairy cattle population of 2,235 bulls with genotypes from the bovine 50K SNP chip and daughter yield deviations (DYD) for 5 dairy cattle production traits. Combining the SNP into haplotypes was beneficial for all tested haplotype sizes, leading to an average increase of 2% in correlations between DYD and genomic breeding value estimates compared with analyses in which the same SNP were used individually. Compared with haplotypes built by merging the QTL-SNP with its flanking SNP, the haplotypes selected with the proposed criteria carried fewer under- and over-represented alleles: the proportion of alleles with frequencies <1% or >40% decreased, on average, by 17.4 and 43.4%, respectively. The correlations between DYD and genomic breeding value
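
    The frequency-based selection criterion can be sketched as follows (names and data layout are assumptions, not the paper's code): haplotypes are formed from a window of SNP indices and their frequencies tabulated, so candidate windows yielding fewer rare alleles can be preferred:

    ```python
    from collections import Counter

    def haplotype_frequencies(genotypes, window):
        """Frequency of each multi-SNP haplotype formed by the SNP indices
        in `window`; `genotypes` is a list of per-individual allele sequences."""
        haps = [tuple(g[i] for i in window) for g in genotypes]
        total = len(haps)
        return {h: count / total for h, count in Counter(haps).items()}
    ```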

  5. DEM interpolation based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jiao, Limin; Liu, Yaolin

    2005-10-01

    This paper proposes a systematic scheme for Digital Elevation Model (DEM) interpolation based on Artificial Neural Networks (ANNs). A BP network is employed to fit the terrain surface and then to detect and eliminate samples with gross errors. A Self-organizing Feature Map (SOFM) is used to cluster the elevation samples, dividing the study area into more homogeneous tiles, and a BP model is then employed to interpolate the DEM within each cluster. Because gross-error samples are eliminated and homogeneous clusters are built, the interpolation results improve. The case study indicates that the ANN interpolation scheme is feasible, and comparison with polynomial and spline interpolation shows that the ANN can achieve more accurate results. ANN interpolation does not require the interpolation function to be determined beforehand, so subjective influence is lessened and the interpolation is more automatic and intelligent. At the end of the paper, we propose the idea of constructing an ANN surface model, which can be used in multi-scale DEM visualization, DEM generalization, etc.
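
    A minimal modern stand-in for the BP-network surface fit (using scikit-learn's MLPRegressor rather than the authors' own network; the sample file layout is hypothetical):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    samples = np.loadtxt("elevation_samples.txt")   # hypothetical columns: x, y, z
    xy, z = samples[:, :2], samples[:, 2]

    # Backpropagation network fitted to the terrain surface
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(xy, z)

    # Interpolate elevations onto a regular grid
    gx, gy = np.meshgrid(np.linspace(xy[:, 0].min(), xy[:, 0].max(), 100),
                         np.linspace(xy[:, 1].min(), xy[:, 1].max(), 100))
    dem = net.predict(np.c_[gx.ravel(), gy.ravel()]).reshape(gx.shape)
    ```

    Gross-error screening and SOFM clustering would precede this fit in the scheme described above.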

  6. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
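
    The simulation idea can be sketched as perturbing a 'truth' layer with a spatial uncertainty model, here reduced to zero-mean Gaussian noise on polyline vertices (an illustration, not the authors' uncertainty model):

    ```python
    import numpy as np

    def perturb_polyline(vertices, sigma, seed=None):
        """Simulate a second feature layer by adding zero-mean Gaussian noise
        to each (x, y) vertex of a polyline."""
        rng = np.random.default_rng(seed)
        return vertices + rng.normal(0.0, sigma, size=vertices.shape)
    ```

    Matching performance can then be scored against the known correspondence between the original and perturbed features.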

  7. Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods, developed for fixed-wing unprotected-surface icing applications, apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, no single scaling method has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method, with the scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections; the reference model had a chord of 91.4 cm and the scale model a chord of 35.6 cm. Reference tests were conducted at velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 μm, and stagnation-point freezing fractions of 0.3 and 0.5, at angles of attack of 0° and 5°. It was shown that good ice-shape scaling was achieved for NACA 0012 airfoils at angles of attack up to 5°.
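
    Since the Weber number scales as We ∝ ρV²L/σ, holding it constant while shrinking the model fixes the scale velocity. A sketch, under the assumption that We is based on model chord and that density and surface tension are unchanged between tests:

    ```python
    import math

    def scale_velocity(v_ref, chord_ref, chord_scale):
        """Scale-model velocity preserving Weber number We = rho * V**2 * L / sigma."""
        return v_ref * math.sqrt(chord_ref / chord_scale)

    # 100 kt reference on a 91.4 cm chord -> about 160 kt on a 35.6 cm chord
    v_scale = scale_velocity(100.0, 91.4, 35.6)
    ```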

  8. An elementary calculus method for evaluating ?

    NASA Astrophysics Data System (ADS)

    Lee, Tuo Yeong; Xiong, Yuxuan

    2014-08-01

    We use freshman calculus to prove that for θ ∈ (0, π) and n = 0, 1, 2, … ; in particular, we obtain a simple unified method for evaluating the following infinite series:

  9. SHADED RELIEF, HILLSHADE, DIGITAL ELEVATION MODEL (DEM), NEVADA

    EPA Science Inventory

    Shaded relief of the state of Nevada developed from 1-degree US Geological Survey (USGS) Digital Elevation Models (DEMs). DEM is a terminology adopted by the USGS to describe terrain elevation data sets in a digital raster form.

  10. SHADED RELIEF, HILLSHADE, DIGITAL ELEVATION MODEL (DEM), ARIZONA

    EPA Science Inventory

    Shaded relief of the state of Arizona developed from 1-degree US Geological Survey (USGS) Digital Elevation Models (DEMs). DEM is a terminology adopted by the USGS to describe terrain elevation data sets in a digital raster form.

  11. 3D DEM analyses of the 1963 Vajont rock slide

    NASA Astrophysics Data System (ADS)

    Boon, Chia Weng; Houlsby, Guy; Utili, Stefano

    2013-04-01

    The 1963 Vajont rock slide has been modelled using the distinct element method (DEM). The open-source DEM code, YADE (Kozicki & Donzé, 2008), was used together with the contact detection algorithm proposed by Boon et al. (2012). The critical sliding friction angle at the slide surface was sought using a strength reduction approach. A shear-softening contact model was used to model the shear resistance of the clayey layer at the slide surface. The results suggest that the critical sliding friction angle can be conservative if stability analyses are calculated based on the peak friction angles. The water table was assumed to be horizontal and the pore pressure at the clay layer was assumed to be hydrostatic. The influence of reservoir filling was marginal, increasing the sliding friction angle by only 1.6°. The results of the DEM calculations were found to be sensitive to the orientations of the bedding planes and cross-joints. Finally, the failure mechanism was investigated and arching was found to be present at the bend of the chair-shaped slope. References Boon C.W., Houlsby G.T., Utili S. (2012). A new algorithm for contact detection between convex polygonal and polyhedral particles in the discrete element method. Computers and Geotechnics, vol. 44, 73-82, doi.org/10.1016/j.compgeo.2012.03.012. Kozicki, J., & Donzé, F. V. (2008). A new open-source software developed for numerical simulations using discrete modeling methods. Computer Methods in Applied Mechanics and Engineering, 197(49-50), 4429-4443.

  13. Image inpainting methods evaluation and improvement.

    PubMed

    Vreja, Raluca; Brad, Remus

    2014-01-01

    With the growth of digital image processing and film archiving, the need for assisted or unsupervised restoration has driven the development of a series of methods and techniques. Among them, image inpainting is perhaps the most impressive and useful. Building on partial differential equations and texture synthesis, many hybrid techniques have been proposed recently. The need for an analytical comparison, besides the visual one, urged us to perform the studies presented here. Starting with an overview of the domain, an evaluation of five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms are presented, categorizing them according to the structure of the restored image. Based on these experiments, we propose an adaptation of Oliveira's and Hadhoud's algorithms, which performs well on images with natural defects. PMID:25136700

  14. Economic methods for multipollutant analysis and evaluation

    SciTech Connect

    Baasel, W.D.

    1985-01-01

    Since 1572, when miners' lung problems were first linked to dust, man's industrial activity has been increasingly accused of causing disease in man and harm to the environment. Since that time, each compound or stream thought to be damaging has been examined independently: if a gas stream caused the problem, the offending compounds were reduced to an acceptable level and the problem was considered solved. What happened to substances after they were removed usually was not fully considered until the finding of an adverse effect required it. Until 1970, one usual way of disposing of many toxic wastes was to place them in landfills and forget about them. The discovery of sickness caused by substances escaping from the Love Canal landfill forced a total rethinking of that procedure. This and other incidents clearly showed that taking a substance out of one stream discharged to the environment and placing it in another may not be an adequate solution. What must be done is to look at all streams leaving an industrial plant and devise a way to reduce the potentially harmful emissions in those streams to an acceptable level, using inexpensive methods. To illustrate conceptually how the environmental assessment approach is a vast improvement over the current methods, an example evaluating effluents from a coal-fired 500 MW power plant is presented; initially, only one substance in one stream is evaluated, namely the sulfur oxides leaving in the flue gas.

  15. Sparse Representation and Multiscale Methods - Application to Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Stefanescu, R. E. R.; Patra, A. K.; Bursik, M. I.

    2014-12-01

    In general, a Digital Elevation Model (DEM) is produced either by digitizing existing maps and interpolating elevation values from the contours, or by collecting elevation information from stereo imagery on digital photogrammetric workstations. Both approaches produce a DEM to the required specification, but each involves a variety of possible production scenarios and results in DEM cells of quite different character. Common artifacts found in DEMs are missing values at various locations, which can influence the output of any application that uses the DEM. In this work we introduce a numerically stable multiscale scheme to evaluate the DEM's quantity of interest (elevation, slope, etc.) at missing-value locations. This method is very efficient when dealing with large, high-resolution DEMs covering large areas, with on the order of 10⁶-10¹⁰ data points. Our scheme relies on graph-based algorithms and low-rank approximations of the full adjacency matrix of the DEM's graph. When dealing with data sets as large as DEMs, the Laplacian or kernel matrix resulting from the interaction of the data points is enormous, and one needs to identify a subspace that captures most of the action of the kernel matrix. By applying a randomized projection to the graph affinity matrix, a well-conditioned basis is identified for its numerical range. This basis is later used for out-of-sample extension at missing-value locations. In many cases, this method beats its classical competitors in terms of accuracy, speed and robustness.
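
    The randomized-projection step can be sketched with a Halko-style range finder (a generic sketch, not the authors' implementation):

    ```python
    import numpy as np

    def randomized_range_finder(A, rank, oversample=10, seed=0):
        """Orthonormal basis Q whose span approximates the range of A."""
        rng = np.random.default_rng(seed)
        omega = rng.standard_normal((A.shape[1], rank + oversample))
        Y = A @ omega              # random sampling of the range of A
        Q, _ = np.linalg.qr(Y)     # well-conditioned basis via QR
        return Q
    ```

    The basis Q then stands in for the full kernel matrix in the out-of-sample extension at missing-value locations.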

  16. ALOS DEM quality assessment in a rugged topography, A Lebanese watershed as a case study

    NASA Astrophysics Data System (ADS)

    Abdallah, Chadi; El Hage, Mohamad; Termos, Samah; Abboud, Mohammad

    2014-05-01

    Deriving morphometric descriptors of the Earth's surface from satellite images is a continuing application in remote sensing, one that has been given a distinct push by the increasing availability of DEMs at different scales, specifically those derived from high to very high resolution stereoscopic and triscopic image data. The extraction of morphometric descriptors is affected by the errors of the DEM. This study presents a procedure for assessing the quality of the ALOS DEM in terms of position and morphometric indices. It involves evaluating the impact of the production parameters on the altimetric accuracy, by checking height differences between Ground Control Points (GCPs) and the corresponding DEM points; on the planimetric accuracy, by comparing extracted drainage lines with topographic maps; and on the morphometric indices, by comparing profiles extracted from the DEM with those measured in the field. A set of twenty triplet-stereo images from the PRISM instrument on the ALOS satellite was processed to produce a 5 m DEM covering the whole Lebanese territory. The Lebanese topography is characterized by its ruggedness, with two parallel mountain chains embedding a depression (the Bekaa Valley). The DEM was extracted via PCI Geomatica 2013; each of the images required 15 GCPs and around 50 tie points. Field measurements were carried out using differential GPS (Trimble GeoXH6000, ProXRT receiver and the LaserACE 1000 rangefinder) in the Al Awali watershed (482 km2, about 5% of the Lebanese terrain). 3545 GPS points were collected across the full range of elevations characterizing the diversity of Lebanese terrain, from cliffy through steep to gently undulating, along with narrow and wide flood plains, and including predetermined profiles. Moreover, definite points such as road intersections and river beds were also measured in order to assess the streams extracted from the DEM. ArcGIS 10.1 was used to extract the drainage network. Preliminary results

  17. Quality assessment of TanDEM-X DEMs using airborne LiDAR, photogrammetry and ICESat elevation data

    NASA Astrophysics Data System (ADS)

    Rao, Y. S.; Deo, R.; Nalini, J.; Pillai, A. M.; Muralikrishnan, S.; Dadhwal, V. K.

    2014-11-01

    The TanDEM-X mission has been acquiring InSAR data to produce a high-resolution global DEM with greater vertical accuracy since 2010. In this study, TanDEM-X CoSSC data were processed to produce DEMs at 6 m spatial resolution for two test areas in India. The generated DEMs were compared with DEMs available from airborne LiDAR, photogrammetry, SRTM and ICESat elevation point data. The first test site, in the Bihar state of India, has almost flat terrain with sparse vegetation cover; the second, around the Godavari river in Andhra Pradesh (A.P.), has flat to moderately hilly terrain. The quality of the DEMs at these two test sites is specified in terms of the most widely used accuracy measures, viz. mean, standard deviation, skew and RMSE. The TanDEM-X DEM over the Bihar test area gives an RMSE of 5.0 m with airborne LiDAR data taken as reference; with ICESat elevation data available at 9000 point locations, an RMSE of 5.9 m is obtained. Similarly, the TanDEM-X DEM for the Godavari area was compared with a high-resolution aerial photogrammetric DEM and the SRTM DEM, yielding RMSEs of 5.3 m and 7.5 m, respectively. When compared with ICESat elevation data at several point locations, and at the same point locations as the photogrammetric DEM and SRTM, the RMS errors are 4.1 m, 3.5 m and 4.3 m, respectively. The DEMs were also compared over an open-pit coal mining area where elevation ranges from -147 m to 189 m, and X- and Y-profiles of all DEMs were compared to examine their trends and differences.
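
    The accuracy measures quoted here reduce to simple statistics of DEM-minus-reference height differences; a minimal sketch:

    ```python
    import numpy as np
    from scipy.stats import skew

    def dem_error_stats(dem_heights, ref_heights):
        """Mean (bias), standard deviation, skew and RMSE of height errors
        at reference locations (e.g., LiDAR or ICESat elevations)."""
        err = np.asarray(dem_heights) - np.asarray(ref_heights)
        return {"mean": err.mean(),
                "std": err.std(ddof=1),
                "skew": skew(err),
                "rmse": float(np.sqrt(np.mean(err ** 2)))}
    ```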

  18. Integration of 2-D hydraulic model and high-resolution lidar-derived DEM for floodplain flow modeling

    NASA Astrophysics Data System (ADS)

    Shen, D.; Wang, J.; Cheng, X.; Rui, Y.; Ye, S.

    2015-08-01

    The rapid progress of lidar technology has made the acquisition and application of high-resolution digital elevation model (DEM) data increasingly popular, especially for the study of floodplain flow. However, high-resolution DEM data pose several disadvantages for floodplain modeling studies: the data sets contain many redundant interpolation points, large numbers of calculations are required to work with the data, and the data do not match the size of the computational mesh. Two-dimensional (2-D) hydraulic modeling, a popular method for analyzing floodplain flow, offers highly precise elevation parameterization of the computational mesh while ignoring much of the micro-topographic information contained in the DEM data itself. We offer a flood simulation method that integrates 2-D hydraulic model results and high-resolution DEM data, enabling the calculation of flood water levels in DEM grid cells through local inverse distance-weighted interpolation. To eliminate false inundation areas arising during interpolation, it employs run-length encoding to mark the inundated DEM grid cells and determines the real inundation areas through a run-length boundary tracing technique, which solves the complicated problem of connectivity between DEM grid cells. We constructed a 2-D hydraulic model for the Gongshuangcha detention basin, a flood storage area of Dongting Lake in China, and used our integrated method to simulate the floodplain flow. The results demonstrate that this method handles the DEM-related problems efficiently and simulates flooding processes with greater accuracy than simulations relying on the DEM alone.
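
    The interpolation step can be sketched as plain inverse distance weighting from hydraulic-model nodes to a DEM cell centre (a sketch only; the run-length marking and boundary tracing steps are omitted):

    ```python
    import numpy as np

    def idw_water_level(x, y, node_xy, node_levels, power=2.0):
        """Inverse distance-weighted water level at DEM cell centre (x, y)
        from nearby 2-D hydraulic model node results."""
        d = np.hypot(node_xy[:, 0] - x, node_xy[:, 1] - y)
        if d.min() < 1e-9:                     # coincident node: take its level
            return float(node_levels[d.argmin()])
        w = d ** -power
        return float(np.sum(w * node_levels) / np.sum(w))
    ```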

  19. CFD-DEM simulations of current-induced dune formation and morphological evolution

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-06-01

    Understanding the fundamental mechanisms of sediment transport, particularly those at work during the formation and evolution of bedforms, is of critical scientific importance and has engineering relevance. Traditional approaches to sediment transport simulation rely heavily on empirical models, which are not able to capture the physics-rich, regime-dependent behaviors of the process. With the increase in available computational resources over the past decade, CFD-DEM (computational fluid dynamics-discrete element method) has emerged as a viable high-fidelity method for the study of sediment transport. However, a comprehensive, quantitative study of the generation and migration of different sediment bed patterns using CFD-DEM is still lacking. In this work, current-induced sediment transport problems are simulated across a wide range of regimes, including 'flat bed in motion', 'small dune', 'vortex dune' and suspended transport. Simulations are performed using SediFoam, an open-source, massively parallel CFD-DEM solver developed by the authors: a general-purpose solver for particle-laden flows tailored to particle transport problems. Validation tests demonstrate the capability of CFD-DEM across the full range of sediment transport regimes, and comparison of simulation results with experimental and numerical benchmark data demonstrates the merits of the CFD-DEM approach. In addition, improvements of the present simulations over existing CFD-DEM studies are presented: the present solver gives more accurate predictions of sediment transport rate by properly accounting for the influence of particle volume fraction on the fluid flow. In summary, this work demonstrates that CFD-DEM is a promising particle-resolving approach for probing the physics of current-induced sediment transport.

  20. A Method for Missile Autopilot Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation is that the performance of an autopilot system can be evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. HWIL simulation, however, requires very expensive facilities, in which the target model generator is an indispensable subsystem. In this paper, an example of an HWIL simulation facility with a target model generator for RF seeker systems is first introduced. Like most other generators, this one has a functional limitation on the line-of-sight angle; a test method to overcome this limitation is therefore proposed.

  1. [Evaluation of autonomic dysfunction by novel methods].

    PubMed

    Ando, Yukio; Obayashi, Konen

    2004-07-01

    The autonomic nervous system innervates every organ in the body. Since autonomic disturbances affect patient survival, understanding and recognizing these disturbances is important. We adopted several new methods to evaluate autonomic function accurately. 123I-metaiodobenzylguanidine scintigraphy can assess cardiac autonomic function even in the presence of cardiac arrhythmia. Laser-Doppler flowmetry, ultrasonographic study of the vessels and near-infrared spectrophotoscopy serve as useful screening markers for dysfunction of vasomotor neurons and blood circulation. Electrogastrography and the circadian rhythm of protein C secretion can serve as markers of the visceromotor nerves in the abdomen; electrogastrography is a particularly useful tool for detecting functional changes in gastrointestinal motility. The evaluation of anemia can be a marker of autonomic dysfunction in the kidney and bone marrow in patients with familial amyloidotic polyneuropathy, pandysautonomia and multiple system atrophy: normocytic, normochromic anemia correlating with the severity of autonomic dysfunction was shown in these patients. We also evaluated dysfunction of the neuroendocrine system and sudomotor neurons using our new autonomic function tests. The glucose tolerance test could become one of the most useful clinical tools for detecting autonomic dysfunction in the endocrine system, and microhydrography and thermography could be useful tools for localizing the lesion site of dyshidrosis. Moreover, it is clinically important to check systemic circulation and autonomic function in patients treated with sildenafil citrate or organ transplantation in order to save their lives. Our new autonomic function tests, such as laser-Doppler flowmetry and 123I-metaiodobenzylguanidine scintigraphy, are crucial tools for providing the best symptomatic treatment to such patients. PMID:15344558

  2. Spaceborne radar interferometry for coastal DEM construction

    USGS Publications Warehouse

    Hong, S.-H.; Lee, C.-W.; Won, J.-S.; Kwoun, Oh-Ig; Lu, Zhiming

    2005-01-01

    Topographic features in coastal regions, including tidal flats, change more significantly than those on land and are characterized by extremely low slopes. High-precision DEMs are required to monitor dynamic changes in coastal topography. It is difficult to obtain coherent interferometric SAR pairs, especially over tidal flats, mainly because of variation in tidal conditions. Here we focus on i) the coherence of multi-pass ERS SAR interferometric pairs and ii) DEM construction from ERS-ENVISAT pairs. The coherence of multi-pass ERS interferograms was good enough to construct DEMs under favorable tidal conditions. Coherence in sand-dominated areas was generally higher than over muddy surfaces, so coarse-grained coastal areas are favorable for multi-pass interferometry. Utilization of ERS-ENVISAT interferometric pairs is attracting growing interest. We investigated a cross-interferometric pair with a normal baseline of about 1.3 km, a 30-minute temporal separation and a height sensitivity of about 6 meters. Preliminary results of ERS-ENVISAT interferometry were not successful, due to the baseline and unfavorable scattering conditions. © 2005 IEEE.

  3. On the Standardization of Vertical Accuracy Figures in Dems

    NASA Astrophysics Data System (ADS)

    Casella, V.; Padova, B.

    2013-01-01

    Digital Elevation Models (DEMs) play a key role in hydrological risk prevention and mitigation: hydraulic numerical simulations and slope and aspect maps all rely heavily on DEMs. Hydraulic numerical simulations require the DEM used to have a defined accuracy in order to produce reliable results. But are DEM accuracy figures clearly and uniquely defined? The paper focuses on some issues concerning the definition and assessment of DEM accuracy. Two definitions of DEM accuracy can be found in the literature: accuracy at the interpolated point and accuracy at the nodes. The former can be estimated by means of randomly distributed check points, the latter by means of check points coincident with the nodes. The two accuracy figures are often treated as equivalent, but they are not: given the same DEM, assessing it through one or the other approach gives different results. Our paper performs an in-depth characterization of the two figures and proposes standardization coefficients.
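
    The two figures differ only in where the DEM is sampled. A sketch of accuracy at the interpolated point, bilinearly sampling the DEM at randomly located check points (check points coincident with grid nodes would give the accuracy-at-the-nodes figure instead):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def rmse_at_check_points(dem, x0, y0, step, check_xyz):
        """RMSE of a gridded DEM bilinearly sampled at check points
        (columns of check_xyz: x, y, true z)."""
        ny, nx = dem.shape
        interp = RegularGridInterpolator(
            (y0 + step * np.arange(ny), x0 + step * np.arange(nx)), dem)
        z = interp(check_xyz[:, [1, 0]])       # interpolator expects (y, x)
        return float(np.sqrt(np.mean((z - check_xyz[:, 2]) ** 2)))
    ```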

  4. Digital image envelope: method and evaluation

    NASA Astrophysics Data System (ADS)

    Huang, H. K.; Cao, Fei; Zhou, Michael Z.; Mogel, Greg T.; Liu, Brent J.; Zhou, Xiaoqiang

    2003-05-01

    Health data security, characterized in terms of data privacy, authenticity and integrity, is a vital issue when digital images and other patient information are transmitted through public networks in telehealth applications such as teleradiology. Mandates for ensuring health data security have been extensively discussed (for example, the Health Insurance Portability and Accountability Act, HIPAA), and health informatics guidelines (such as the DICOM standard) continue to be published by organizing bodies in healthcare; however, no systematic method has been developed to ensure data security in medical imaging. Because data privacy and authenticity are often managed primarily with firewall and password protection, we have focused our research and development on data integrity. We have developed a systematic method of ensuring medical image data integrity across public networks using the concept of the digital envelope. When a medical image is generated, regardless of the modality, three processes are performed: the image signature is obtained, the DICOM image header is encrypted, and a digital envelope is formed by combining the signature and the encrypted header. The envelope is encrypted and embedded in the original image, which assures the security of both the image and the patient ID. The embedded image is encrypted again and transmitted across the network, and the reverse process is performed at the receiving site. The result is two digital signatures: one from the original image before transmission and a second from the image after transmission. If the signatures are identical, there has been no alteration of the image. This paper concentrates on the method and evaluation of the digital image envelope.
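
    A toy sketch of the envelope idea (hash-based image signature plus an encrypted header), using the third-party cryptography package; signing with a private key and embedding the envelope in the pixel data, as the actual method does, are omitted:

    ```python
    import hashlib
    import json
    from cryptography.fernet import Fernet

    def make_envelope(pixel_bytes, header, key):
        """Digital envelope: image digest plus encrypted header (toy version)."""
        signature = hashlib.sha256(pixel_bytes).hexdigest()
        encrypted_header = Fernet(key).encrypt(json.dumps(header).encode())
        return {"signature": signature, "header": encrypted_header}

    key = Fernet.generate_key()
    envelope = make_envelope(b"...pixel data...", {"PatientID": "ANON-001"}, key)
    ```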

  5. Multilevel summation method for electrostatic force evaluation.

    PubMed

    Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus

    2015-02-10

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support for periodic systems as does PME but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation. PMID:25691833

  7. Evaluation of methods to assess physical activity

    NASA Astrophysics Data System (ADS)

    Leenders, Nicole Y. J. M.

    Epidemiological evidence has accumulated demonstrating that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions must be shown to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D, TT), a uniaxial activity monitor (Computer Science and Applications Inc., CSA), a Yamax Digiwalker-500 (YX-stepcounter), heart rate monitoring (HR method) and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the criterion method of DLW over a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, HR method and TT, respectively. DLW-PAEE was underestimated by 21% with the 7-d PAR, compared to 47% and 67% for the TT and YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg⁻¹ is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg⁻¹ was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of the variance in DLW-PAEE·kg⁻¹ is explained by the number of steps taken per day, the Yamax stepcounter is useful in studies promoting daily walking because of its low cost and ease of use. Thus, studies involving the

  8. Volcanic Landform Classification of Iwate Volcano from DEM-Derived Thematic Maps

    NASA Astrophysics Data System (ADS)

    Prima, A. O.; Yoshida, T.

    2004-12-01

    Over the last three decades, digital elevation models (DEMs) have been developed as surface data, instead of contour lines, to allow numerical analysis and modeling of terrain by computer. DEMs have allowed the development of algorithms to rapidly derive the slope, relief, convexity, concavity and aspect of any point of a surface, and have also allowed the definition of a number of new morphometric measures, e.g. openness (Yokoyama et al., 2002). Openness is an angular measure of the relation between surface relief and horizontal distance, with two viewer perspectives: positive values, expressing openness above the surface, are high for convex forms, whereas negative values describe this attribute below the surface and are high for concave forms. The emphasis on terrain convexity and concavity in openness maps facilitates the interpretation of landforms on the Earth's surface. Prima et al. (2003) proposed automated landform classification using openness and slope with genetic factors; this method has been shown to produce good classifications for constructional (alluvial plains, alluvial fans and volcanoes) and erosional (hills and mountains) landforms. The capability of this method to classify landforms from DEMs with genetic factors is important because it allows landform evolution to be analyzed numerically. In this study, we adopted this method to classify the volcanic landforms of Iwate Volcano, Honshu, Japan, where volcanic landforms were categorized with reference to the geological map of Iwate Volcano (Doi, 2000). This process took three steps. First, the characteristics of each category were evaluated against the mean and standard deviation of slope, and of both positive and negative openness, in two-dimensional feature spaces. Second, the characteristics of each category were observed and the combinations of mean and standard deviation of slope and openness showing high separabilities were selected. We found that the standard deviation of slope, positive and negative

  9. Quality assessment of Digital Elevation Model (DEM) in view of the Altiplano hydrological modeling

    NASA Astrophysics Data System (ADS)

    Satgé, F.; Arsen, A.; Bonnet, M.; Timouk, F.; Calmant, S.; Pilco, R.; Molina, J.; Lavado, W.; Crétaux, J.; HASM

    2013-05-01

    Topography is a crucial data input for hydrological modeling, but in many regions of the world the only way to characterize topography is to use satellite-based Digital Elevation Models (DEMs). In some regions the quality of these DEMs remains poor and induces modeling errors that may or may not be compensated by tuning model parameters. In such regions, the evaluation of these data uncertainties is an important step in the modeling procedure. In this study, which focuses on the Altiplano region, we present an evaluation of the two freely available DEMs: the Shuttle Radar Topography Mission (SRTM) DEM, a product of the National Aeronautics and Space Administration (NASA), and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), provided by the Ministry of Economy, Trade and Industry of Japan (METI) in collaboration with NASA. Both are widely used; while the former has a resolution of 3 arc seconds (90 m), the latter is 1 arc second (30 m). In order to select the most reliable DEM, we compared DEM elevations with the elevations of high-quality control points. Because of its large spatial coverage (tracks spaced 30 km apart, with a measurement every 172 m) and its high vertical accuracy, better than 15 cm in good weather conditions, the Geoscience Laser Altimeter System (GLAS) on board NASA's Ice, Cloud and land Elevation Satellite (ICESat) represents the best means of establishing a high-quality elevation database. After a quality check, more than 150,000 ICESat/GLAS measurements are suitable in terms of accuracy for the Altiplano watershed. This database has been used to evaluate the vertical accuracy of each DEM. Owing to the full spatial coverage, the comparison has been made across all land cover types, altitude ranges and mean slopes.

  10. Volcanic geomorphology using TanDEM-X

    NASA Astrophysics Data System (ADS)

    Poland, Michael; Kubanek, Julia

    2016-04-01

    Topography is perhaps the most fundamental dataset for any volcano, yet is surprisingly difficult to collect, especially during the course of an eruption. For example, photogrammetry and lidar are time-intensive and often expensive, and they cannot be employed when the surface is obscured by clouds. Ground-based surveys can operate in poor weather but have poor spatial resolution and may expose personnel to hazardous conditions. Repeat passes of synthetic aperture radar (SAR) data provide excellent spatial resolution, but topography in areas of surface change (from vegetation swaying in the wind to physical changes in the landscape) between radar passes cannot be imaged. The German Aerospace Center's TanDEM-X satellite system, however, solves this issue by simultaneously acquiring SAR data of the surface using a pair of orbiting satellites, thereby removing temporal change as a complicating factor in SAR-based topographic mapping. TanDEM-X measurements have demonstrated exceptional value in mapping the topography of volcanic environments in as-yet limited applications. The data provide excellent resolution (down to ~3-m pixel size) and are useful for updating topographic data at volcanoes where surface change has occurred since the most recent topographic dataset was collected. Such data can be used for applications ranging from correcting radar interferograms for topography, to modeling flow pathways in support of hazards mitigation. The most valuable contributions, however, relate to calculating volume changes related to eruptive activity. For example, limited datasets have provided critical measurements of lava dome growth and collapse at volcanoes including Merapi (Indonesia), Colima (Mexico), and Soufriere Hills (Montserrat), and of basaltic lava flow emplacement at Tolbachik (Kamchatka), Etna (Italy), and Kīlauea (Hawai`i). With topographic data spanning an eruption, it is possible to calculate eruption rates - information that might not otherwise be available
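
    The volume-change calculation the abstract highlights reduces, for two co-registered DEMs on a common grid, to summing elevation differences over the pixel footprint. A minimal sketch, with an optional 1-sigma volume uncertainty under the simplifying assumption of uncorrelated per-pixel errors:

        import numpy as np

        def volume_change(dem_before, dem_after, pixel_area, mask=None, sigma_z=None):
            """Net volume change (m^3) between two co-registered DEMs,
            optionally restricted to a mask (e.g., a lava flow outline)."""
            dz = dem_after - dem_before
            if mask is not None:
                dz = np.where(mask, dz, np.nan)
            dv = float(np.nansum(dz) * pixel_area)
            if sigma_z is None:
                return dv, None
            n = int(np.isfinite(dz).sum())
            # uncorrelated-error propagation; real DEM errors are correlated,
            # so this is a lower bound on the uncertainty
            return dv, float(np.sqrt(n) * sigma_z * pixel_area)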

  11. The HELI-DEM model estimation

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Caldera, S.; Carcano, L.; Lucchese, A.; Negretti, M.; Sansò, F.; Triglione, D.; Visconti, M. G.

    2014-04-01

    Global DEMs are fundamental for global applications and are also necessary at the local scale in regions where local models are not available. Local DEMs are preferred when they are available and characterized by better accuracies and resolutions. In general, two problems arise. Firstly, a region of interest may be patched by several partly overlapping DEMs with similar accuracies and spatial resolutions: they should be merged into a unified model. Moreover, even when the region of interest is covered by one unified DEM, local DEMs with better accuracy may be available and should be used to improve it locally. All of these problems have been addressed within the HELI-DEM project. HELI-DEM (HELvetia-Italy Digital Elevation Model) is a project funded by the European Regional Development Fund (ERDF) within the Italy-Switzerland cooperation program. It started in 2010 and finished at the end of 2013. The institutions involved in the project were Fondazione Politecnico di Milano, Politecnico di Milano, Politecnico di Torino, Regione Lombardia, Regione Piemonte and Scuola Universitaria della Svizzera Italiana. One specific aim of the project was the creation and publication of a unified Digital Elevation Model for the part of the Alps between Italy and Switzerland. The area of interest is predominantly mountainous, with heights ranging from about 200 m to 4600 m. Three low-resolution DTMs (20, 25 and 50 m resolution) are available that partly overlap and together patch the whole project area; they are characterized by accuracies of some meters. High-resolution DTMs (1-5 m) are also available; they have accuracies of some decimeters but cover limited parts of the project area. The various models are given in different reference frames (the European ETRF89 and the Italian Roma40) and are gridded either in cartographic or geographic coordinates. Before merging them, a validation of the input data was performed in three steps: cross validation of LR DTMs
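
    As a toy illustration of the patching problem (the project's actual estimation is a joint adjustment described in the paper), overlapping models resampled to a common grid can be combined by priority, with optional feathering across the seam:

        import numpy as np

        def merge_dems(low_res, high_res, blend=None):
            """Patch a better local DEM into a unified model; both grids
            are aligned, with NaN where a model has no data. blend, if
            given, is a 0..1 weight grid that feathers the seam."""
            out = np.where(np.isfinite(high_res), high_res, low_res)
            if blend is not None:
                both = np.isfinite(high_res) & np.isfinite(low_res)
                out = np.where(both,
                               blend * high_res + (1.0 - blend) * low_res,
                               out)
            return out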

  12. International genomic evaluation methods for dairy cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background Genomic evaluations are rapidly replacing traditional evaluation systems used for dairy cattle selection. Economies of scale in genomics promote cooperation across country borders. Genomic information can be transferred across countries using simple conversion equations, by modifying mult...

  13. a Near-Global Bare-Earth dem from Srtm

    NASA Astrophysics Data System (ADS)

    Gallant, J. C.; Read, A. M.

    2016-06-01

    The near-global elevation product from NASA's Shuttle Radar Topographic Mission (SRTM) has been widely used since its release in 2005 at 3 arcsecond resolution, and the release of the 1 arcsecond version in late 2014 means that the full potential of the SRTM DEM can now be realised. However, the routine use of SRTM for analytical purposes such as catchment hydrology, flood inundation, habitat mapping and soil mapping is still seriously impeded by the presence of artefacts in the data, primarily offsets due to tree cover and random noise. This paper describes the algorithms being developed to remove those offsets, based on the methods developed to produce the Australian national elevation model from SRTM data. The offsets due to trees are estimated using the GlobeLand30 (National Geomatics Center of China) and Global Forest Change (University of Maryland) products derived from Landsat, along with ALOS PALSAR radar image data (JAXA) and the global forest canopy height map (NASA). The offsets are estimated using several processes and combined to produce a single continuous tree-offset layer that is subtracted from the SRTM data. The DEM products will be made freely available on completion of the first draft product, and the assessment of that product is expected to drive further improvements to the methods.
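
    As a cartoon of the tree-offset removal (the actual workflow combines several offset estimates from the products listed above), a single offset layer can be formed from a tree-cover fraction and a canopy height map; the penetration factor k and the cover threshold are placeholder assumptions, not values from the paper:

        import numpy as np

        def remove_tree_offsets(srtm, tree_cover, canopy_height,
                                k=0.5, cover_min=0.2):
            """Subtract an assumed fraction k of canopy height wherever
            the tree-cover fraction exceeds cover_min."""
            offset = np.where(tree_cover >= cover_min, k * canopy_height, 0.0)
            return srtm - offset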

  14. Integration of SAR and DEM data: Geometrical considerations

    NASA Technical Reports Server (NTRS)

    Kropatsch, Walter G.

    1991-01-01

    General principles for integrating data from different sources are derived from the experience of registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it also allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally needs the parameters of the flight path of the SAR sensor and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.
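
    A first-order, local-facet version of the layover and shadow criteria can be written directly from the DEM and the look angle. The sketch below assumes ground range increasing with column index and ignores occlusion by distant terrain, which the full range-projection model mentioned above would handle:

        import numpy as np

        def layover_shadow_masks(dem, cell_size, look_angle_deg):
            """Local-facet layover/shadow masks for a side-looking SAR;
            look angle is measured from vertical."""
            dzdx = np.gradient(dem.astype(float), cell_size, axis=1)
            slope_deg = np.degrees(np.arctan(dzdx))
            # foreslopes steeper than the look angle fold over (layover);
            # backslopes steeper than (90 - look angle) are hidden (shadow)
            layover = slope_deg >= look_angle_deg
            shadow = -slope_deg >= (90.0 - look_angle_deg)
            return layover, shadow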

  15. Multidisciplinary eHealth Survey Evaluation Methods

    ERIC Educational Resources Information Center

    Karras, Bryant T.; Tufano, James T.

    2006-01-01

    This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…

  16. Democratizing Evaluation: Meanings and Methods from Practice.

    ERIC Educational Resources Information Center

    Ryan, Katherine E.; Johnson, Trav D.

    2000-01-01

    Uses the results of an instrumental case study to identify issues connected to evaluation participation and its representation and the role of the internal evaluator in democratic, deliberative evaluation. Identified direct participation and participation by representation, sanctioned or unsanctioned representation, and extrinsic and intrinsic…

  17. Mapping hydrological environments in central Amazonia: ground validation and surface model based on SRTM DEM data corrected for deforestation

    NASA Astrophysics Data System (ADS)

    Moulatlet, G. M.; Rennó, C. D.; Costa, F. R. C.; Emilio, T.; Schietti, J.

    2015-03-01

    One of the most important freely available digital elevation models (DEMs) for Amazonia is the one obtained by the Shuttle Radar Topography Mission (SRTM). However, since SRTM tends to represent the vegetation surface instead of the ground surface, the broad use of the SRTM DEM as a framework for terrain description in Amazonia is hampered by the presence of deforested areas. We present here two data sets: (1) a deforestation-corrected SRTM DEM for the interfluve between the Purus and Madeira rivers, in central Amazonia, which underwent careful identification of different environments and had its deforestation features corrected by a new method that increases pixel values of the DEM (Rennó, 2009); and (2) a set of 18 hydrological-topographic descriptors based on the corrected SRTM DEM. Deforestation features are related to the opening of an 800 km road in the central part of the interfluve and the occupancy of its vicinity. We used topographic profiles from the pristine forest to the deforested features to evaluate the recovery of the original canopy coverage by minimizing canopy height variation (corrections ranged from 1 to 38 m). The hydrological-topographic description was obtained with the Height Above the Nearest Drainage (HAND) algorithm, which normalizes the terrain elevation (above sea level) by the elevation of the nearest hydrologically connected drainage. The validation of the HAND data set was done by in situ hydrological description along 110 km of walking trails, also available in this data set. The new SRTM DEM expands the applicability of SRTM data for landscape modelling; the data sets of hydrological features based on topographic modelling are undoubtedly appropriate for ecological modelling and an important contribution to the environmental mapping of Amazonia. The deforestation-corrected SRTM DEM is available at http://ppbio.inpa.gov.br/knb/metacat/naman.318.3/ppbio; the
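
    The HAND idea can be sketched compactly: follow each cell's flow path downstream until a drainage cell is reached and subtract that cell's elevation. The version below assumes a precomputed single-direction (D8-style) receiver grid and is illustrative, not the authors' implementation:

        import numpy as np

        def hand(dem, downstream, stream_mask):
            """Height Above the Nearest Drainage. downstream holds, for
            each flattened cell index, the index of its receiver (-1 at
            outlets); stream_mask flags the drainage cells."""
            z = dem.ravel().astype(float)
            stream = stream_mask.ravel()
            drain_z = np.full(z.size, np.nan)
            drain_z[stream] = z[stream]      # a stream cell drains to itself
            for i in range(z.size):
                path, j = [], i
                while j != -1 and np.isnan(drain_z[j]):
                    path.append(j)
                    j = downstream[j]
                base = drain_z[j] if j != -1 else np.nan
                for p in path:               # memoize the whole path
                    drain_z[p] = base
            return (z - drain_z).reshape(dem.shape)

    Stream cells get HAND = 0 by construction; cells whose flow path exits the grid without meeting a stream stay NaN.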

  18. Volume changes during the 2010 Merapi eruption calculated from TanDEM-X interferometry

    NASA Astrophysics Data System (ADS)

    Kubanek, Julia; Westerhaus, Malte; Heck, Bernhard

    2013-04-01

    NE-SE and NW-SW sectors of the edifice, respectively. We use the DEMs to estimate the volume change at the summit caused by the 2010 eruption. As the TanDEM-X mission is an innovative mission, the present study serves as a test of employing data from a new satellite mission in volcano research. An error analysis of the DEMs to evaluate the volume quantifications was therefore also conducted.

  19. Monitoring lava dome changes by means of differential DEMs from TanDEM-X interferometry: Examples from Merapi, Indonesia and Volcán de Colima, Mexico

    NASA Astrophysics Data System (ADS)

    Kubanek, J.; Westerhaus, M.; Heck, B.

    2013-12-01

    derived by TanDEM-X interferometry taken before and after the eruption. Our results reveal that the eruption led to a topographic change of up to 200 m in the summit area of Merapi. We further show the ability of the TanDEM-X data to observe much smaller topographic changes, using Volcán de Colima as a second test site. An explosion at the crater rim signaled the end of magma ascent in June 2011. The bistatic TanDEM-X data give important information on this explosion, as we can observe topographic changes of up to 20 m and less in the summit area when comparing datasets taken before and after the event. We further analyzed datasets from the beginning of 2013, when Colima became active again after a dormant period. Our results indicate that repeated DEMs with great detail and good accuracy are obtainable, enabling a quantitative estimation of volume changes in the summit area of the volcano. As the TanDEM-X mission is an innovative mission, the present study serves as a test of employing data from a new satellite mission in volcano research. An error analysis of the DEMs to evaluate the volume quantifications was therefore also conducted.

  20. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which the photogrammetric methods require multiple stereo images of an area. DEMs generated with these methods usually involve various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this is even more plausible for the lunar surface, whose albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the covered area with a known light source, while also estimating the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters of the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from the Lunar Reconnaissance Orbiter (LRO

  1. Co-seismic landslide topographic analysis based on multi-temporal DEM-A case study of the Wenchuan earthquake.

    PubMed

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2013-01-01

    Hillslope instability has been thought to be one of the most important factors in landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data and shaking-intensity analysis to evaluate the topographic characteristics of the landslide areas. Besides slope analysis, other geomorphologic measures such as roughness and slope aspect are equally useful. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness > 1.2, hillslope > 30°, and slope aspect between 90° and 270°. The intersection of the regions given by these three criteria is, however, more accurate than the region derived from any single topographic analysis method. The ground motion data indicate that the co-seismic landslides mainly occurred on the hanging-wall side of the Longmen Shan Thrust Belt, within the vertical and horizontal peak ground acceleration (PGA) contours of 150 and 200 gal, respectively. Comparisons of pre- and post-earthquake DEM data indicate that areas of medium roughness and slope increased, while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspects, however, hardly changed. Our results indicate that co-seismic landslides mainly occurred in regions of high roughness on steep, south-facing slopes under strong ground motion. Co-seismic landslides significantly modified the local topography, especially the hillslope and roughness: the roughest relief and steepest slopes were significantly smoothed, whereas areas of medium relief and slope became rougher and steeper, respectively. PMID:24171155
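
    The intersection of the three single-attribute criteria is a one-line raster operation; a sketch using the thresholds quoted above:

        import numpy as np

        def landslide_prone_mask(roughness, slope_deg, aspect_deg):
            """Intersect the three criteria reported in the abstract:
            roughness > 1.2, slope > 30 deg, aspect between 90 and 270 deg."""
            return ((roughness > 1.2) & (slope_deg > 30.0) &
                    (aspect_deg >= 90.0) & (aspect_deg <= 270.0))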

  2. DEM Simulated Results And Seismic Interpretation of the Red River Fault Displacements in Vietnam

    NASA Astrophysics Data System (ADS)

    Bui, H. T.; Yamada, Y.; Matsuoka, T.

    2005-12-01

    The Song Hong basin is the largest Tertiary sedimentary basin in Viet Nam. It formed approximately 32 Ma ago, when the left-lateral displacement of the Red River Fault commenced. Research on the structure, formation and tectonic evolution of the Song Hong basin has been carried out for a long time, but several problems remain under discussion, such as the magnitude of the displacements, the magnitude of movement along the faults, and the timing of tectonic inversion and right-lateral displacement. In particular, the mechanism of Song Hong basin formation due to the activation of the Red River Fault is still controversial, with many competing hypotheses. In this paper, PFC2D, based on the Distinct Element Method (DEM), was used to simulate the development of the Red River fault system that controlled the development of the Song Hong basin from the onshore area to its elongated offshore portion. The numerical results reveal the different parts of the stress field, such as the compressional, stress-free and pull-apart regimes of the dynamic mechanism along the Red River Fault in the onshore area. Its propagation to the offshore area is partitioned into two main branch faults, corresponding to the Song Chay and Song Lo fault systems, which are said to bound the east and west flanks of the Song Hong basin. The simulation of the Red River motion also reproduced the left-lateral displacement since its onset. Although this is the first time the DEM has been applied to study the deformation and geodynamic evolution of the Song Hong basin, the results proved reliable for evaluating its structural configuration.

  3. DEM Modelling of Non-linear Viscoelastic Stress Waves

    NASA Astrophysics Data System (ADS)

    Wang, Wenqiang; Tang, Zhiping; Horie, Yasuyuki

    2001-06-01

    A DEM (Discrete Element Method) simulation of nonlinear viscoelastic stress wave problems is carried out. The interaction forces among elements are described using a model in which neighboring elements are linked by a nonlinear spring and a certain number of Maxwell components in parallel. By making use of exponential relaxation moduli, it is shown that numerical computation of the convolution integral does not require storing and repeatedly processing the strain history, which reduces the computational cost dramatically. To validate the viscoelastic DM2 code, stress wave propagation in a Maxwell rod with one end subjected to a constant stress loading is simulated. The results fit those from a method-of-characteristics calculation excellently. Satisfactory results are also obtained in the simulation of a one-dimensional plane wave in a plastic-bonded explosive. The code is then used to investigate the problem of meso-scale damage in this explosive under shock loading. The results not only show "compression damage", but also reveal a complex damage evolution. They demonstrate a unique capability of DEM in modeling heterogeneous materials.
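
    The cost saving from exponential relaxation moduli comes from a standard recurrence: each Maxwell branch carries one internal variable that is decayed and incremented every step, so the convolution integral never touches the stored strain history. A generic sketch of that update for the linear case (the textbook midpoint-decay form, not necessarily the paper's exact discretization, and without the nonlinear spring):

        import numpy as np

        def prony_stress_step(h, eps_total, d_eps, dt, g_inf, g_k, tau_k):
            """One step for a Prony-series relaxation modulus
            G(t) = g_inf + sum_k g_k * exp(-t / tau_k).
            h: per-branch internal variables (replace stored history)."""
            decay = np.exp(-dt / tau_k)
            h = decay * h + g_k * np.exp(-dt / (2.0 * tau_k)) * d_eps
            stress = g_inf * eps_total + h.sum()
            return stress, h

    The work per step is proportional to the number of branches, not to the number of past steps, which is the dramatic cost reduction the abstract refers to.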

  4. DEM, tide and velocity over sulzberger ice shelf, West Antarctica

    USGS Publications Warehouse

    Baek, S.; Shum, C.K.; Lee, H.; Yi, Y.; Kwoun, Oh-Ig; Lu, Zhiming; Braun, Andreas

    2005-01-01

    Arctic and Antarctic ice sheets hold more than 77% of the global fresh water and could raise global sea level by several meters if completely melted. Ocean tides near and under ice shelves shift the grounding line position significantly and are one of the current limitations in studying glacier dynamics and mass balance. The Sulzberger ice shelf is an area of ice mass flux change in West Antarctica and has not yet been well studied. In this study, we use repeat-pass synthetic aperture radar (SAR) interferometry data from the ERS-1 and ERS-2 tandem missions to generate a high-resolution (60-m) Digital Elevation Model (DEM), including tidal deformation detection and ice stream velocity, of the Sulzberger Ice Shelf. Other satellite data, such as laser altimeter measurements with fine footprints (70-m) from NASA's ICESat, are used for validation and analyses. The resulting DEM has an accuracy of -0.57 ± 5.88 m and is demonstrated to be useful for grounding line detection and ice mass balance studies. The deformation observed by InSAR is found to be primarily due to ocean tides and atmospheric pressure. The 2-D ice stream velocities computed agree qualitatively with previous methods for part of the Ice Shelf based on remote-sensing data (i.e., LANDSAT). © 2005 IEEE.

  5. Simulation of triaxial response of granular materials by modified DEM

    NASA Astrophysics Data System (ADS)

    Wang, XiaoLiang; Li, JiaChun

    2014-12-01

    In this study, a modified discrete element method (DEM) that takes rolling resistance into consideration is developed to examine the macroscopic behavior of granular materials. Dimensional analysis is first performed to establish the relationship between macroscopic mechanical behavior, mesoscale contact parameters at the particle level and the external loading rate. It is found that only four dimensionless parameters govern the macroscopic mechanical behavior in bulk. A numerical triaxial apparatus was used to study their influence on the mechanical behavior of granular materials. The parametric study indicates that Poisson's ratio varies only with the stiffness ratio, while Young's modulus is proportional to the contact modulus and grows with the stiffness ratio, both of which agree with the micromechanical model. The peak friction angle depends on both the inter-particle friction angle and the rolling resistance. The dilatancy angle depends on the inter-particle friction angle if the rolling stiffness coefficient is sufficiently large. Finally, we recommend a calibration procedure for cohesionless soil, which was then applied to the simulation of Chende sand using a series of triaxial compression tests. The responses of the DEM model are in quantitative agreement with experiments. In addition, the stress-strain response in triaxial extension was also obtained by numerical triaxial extension tests.

  6. GPS-Based Precision Baseline Reconstruction for the TanDEM-X SAR-Formation

    NASA Technical Reports Server (NTRS)

    Montenbruck, O.; vanBarneveld, P. W. L.; Yoon, Y.; Visser, P. N. A. M.

    2007-01-01

    The TanDEM-X formation employs two separate spacecraft to collect interferometric Synthetic Aperture Radar (SAR) measurements over baselines of about 1 km. These will allow the generation of a global Digital Elevation Model (DEM) with a relative vertical accuracy of 2-4 m and a 10 m ground resolution. As part of the ground processing, the separation of the SAR antennas at the time of each data take must be reconstructed with 1 mm accuracy using measurements from two geodetic-grade GPS receivers. The paper discusses the TanDEM-X mission as well as the methods employed for determining the interferometric baseline with utmost precision. Measurements collected during the close fly-by of the two GRACE satellites serve as a reference case to illustrate the processing concept, expected accuracy and quality-control strategies.

  7. DEM analyses of shear behaviour of rock joints by a novel bond contact model

    NASA Astrophysics Data System (ADS)

    Jiang, M. J.; Liu, J.; Sun, C.; Chen, H.

    2015-09-01

    The failure of rock joints is one of the potential causes of local and general rock instability, which may trigger devastating geohazards such as landslides. In this paper, the Distinct Element Method (DEM), featuring a novel bond contact model, was utilized to simulate the shear behaviour of centre/non-coplanar rock joints. The DEM results show that the complete shear behaviour of jointed rock includes four stages: an elastic shearing phase, crack propagation, the failure of rock bridges, and a through-going discontinuity. The peak shear strength of the centre joint increases as the joint connectivity rate decreases. For intermittent non-coplanar rock joints, as the inclination of the rock joints increases, the shear capacity decreases when the inclination angle is negative and increases when it is positive. Comparison with experimental results proves the capability of this DEM model in capturing the mechanical properties of jointed rocks.

  8. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  9. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  10. Sensitivity Analysis of Uav-Photogrammetry for Creating Digital Elevation Models (dem)

    NASA Astrophysics Data System (ADS)

    Rock, G.; Ries, J. B.; Udelhoven, T.

    2011-09-01

    This study evaluates the potential of photogrammetric processing of aerial images captured by unmanned aerial vehicles. UAV systems have attracted increasing attention in recent years. Miniaturization of electronic components often results in a reduction of quality; in particular, the accuracy of the GPS/IMU navigation unit and of the camera is of the utmost importance for the photogrammetric evaluation of aerial images. To determine the accuracy of digital elevation models (DEMs), an experimental setup was chosen similar to the situation of data acquisition during a field campaign. A quarry was chosen for the experiment because of the presence of different geomorphologic units, such as vertical walls, piles of debris, vegetation and level areas. In the experimental test field, 1042 ground control points (GCPs) were placed, used as input data for the photogrammetric processing and as high-accuracy reference data for evaluating the DEMs. Further, an airborne LiDAR dataset covering the whole quarry and an additional 2000 reference points, measured by total station, were used as ground-truth data. The aerial images were taken using a MAVinci Sirius I UAV equipped with a Canon 300D as the imaging system. The influence of the number of GCPs on the accuracy of the indirect sensor orientation, and the dependency of the absolute deviations on different parameters of the modelled DEMs, were the subjects of the investigation. Nevertheless, the only significant factor concerning the DEMs' accuracy that could be isolated was the flying height of the UAV.

  11. Method for evaluation of laboratory craters using crater detection algorithm for digital topography data

    NASA Astrophysics Data System (ADS)

    Salamunićcar, Goran; Vinković, Dejan; Lončarić, Sven; Vučina, Damir; Pehnec, Igor; Vojković, Marin; Gomerčić, Mladen; Hercigonja, Tomislav

    In our previous work the following has been done: (1) the crater detection algorithm (CDA) based on digital elevation model (DEM) data was developed and the GT-115225 catalog assembled [GRS, 48 (5), in press, doi:10.1109/TGRS.2009.2037750]; and (2) the results of a comparison between explosion-induced laboratory craters in stone powder surfaces and GT-115225 were presented using depth/diameter measurements [41stLPSC, Abstract #1428]. The next step achievable using the available technology is to create 3D scans of such laboratory craters, in order to compare different properties with simple Martian craters. In this work, we propose a formal method for the evaluation of laboratory craters, in order to provide an objective, measurable and reproducible estimate of the level of similarity achieved between these laboratory craters and real impact craters. In the first step, a section of MOLA data for Mars (or SELENE LALT for the Moon) is replaced with one or several 3D scans of laboratory craters. Once embedded, the CDA can be used to find out whether the laboratory crater is similar enough to real craters to be recognized as a crater by the CDA. The CDA evaluation using the ROC' curve represents how the true detection rate (TDR=TP/(TP+FN)=TP/GT) depends on the false detection rate (FDR=FP/(TP+FP)). Using this curve, it is now possible to define the measure of similarity between laboratory and real impact craters as a TDR or FDR value, or as a distance from the bottom-right origin of the ROC' curve. With such an approach, a reproducible (formally described) method for the evaluation of laboratory craters is provided.
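
    The quoted rates and the distance-based similarity measure are simple to compute; in the sketch below the ideal corner of the ROC' curve is taken as (FDR = 0, TDR = 1), which is our assumption about the convention:

        import numpy as np

        def roc_prime_point(tp, fp, fn):
            """TDR = TP/(TP+FN), FDR = FP/(TP+FP), plus similarity as the
            Euclidean distance to the ideal corner (smaller is better)."""
            tdr = tp / (tp + fn)
            fdr = fp / (tp + fp)
            return tdr, fdr, float(np.hypot(fdr, 1.0 - tdr))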

  12. An Investigation of Transgressive Deposits in Late Pleistocene Lake Bonneville using GPR and UAV-produced DEMs.

    NASA Astrophysics Data System (ADS)

    Schide, K.; Jewell, P. W.; Oviatt, C. G.; Jol, H. M.

    2015-12-01

    Lake Bonneville was the largest of the Pleistocene pluvial lakes that once filled the Great Basin of the interior western United States. Its two most prominent shorelines, Bonneville and Provo, are well documented but many of the lake's intermediate shoreline features have yet to be studied. These transgressive barriers and embankments mark short-term changes in the regional water budget and thus represent a proxy for local climate change. The internal and external structures of these features are analyzed using the following methods: ground penetrating radar, 5 meter auto-correlated DEMs, 1-meter DEMs generated from LiDAR, high-accuracy handheld GPS, and 3D imagery collected with an unmanned aerial vehicle. These methods in mapping, surveying, and imaging provide a quantitative analysis of regional sediment availability, transportation, and deposition as well as changes in wave and wind energy. These controls help define climate thresholds and rates of landscape evolution in the Great Basin during the Pleistocene that are then evaluated in the context of global climate change.

  13. Precise Global DEM Generation by ALOS PRISM

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Ishida, H.; Oda, F.; Naito, S.; Minakawa, K.; Iwamoto, H.

    2014-04-01

    The Japan Aerospace Exploration Agency (JAXA) generated a global digital elevation/surface model (DEM/DSM) and orthorectified images (ORI) using the archived data of the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) onboard the Advanced Land Observing Satellite (ALOS, nicknamed "Daichi"), which was operated from 2006 to 2011. PRISM consisted of three panchromatic radiometers that acquired along-track stereo images. It had a spatial resolution of 2.5 m in the nadir-looking radiometer and achieved global coverage, making it a suitable candidate for precise global DSM and ORI generation. Over the past 10 years or so, JAXA has conducted calibration of the system-corrected standard products of PRISM in order to improve absolute accuracies, as well as to validate high-level products such as the DSM and ORI. In this paper, we give an overview of the global DEM/DSM dataset generation project, including a summary of ALOS and PRISM, in addition to the global data archive status. It is also necessary to consider data-processing strategies, since the processing capabilities for the level 1 standard product and the high-level products must be developed in terms of both hardware and software to achieve the project aims. The automatic DSM/ORI processing software and its test processing results are also described.

  14. Evaluation of temperament scoring methods for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to evaluate methods of temperament scoring. Crossbred (n=228) calves were evaluated for temperament by an individual evaluator at weaning by two methods of scoring: 1) pen score (1 to 5 scale, with higher scores indicating increasing degree of nervousness, aggressiven...

  15. A Ranking Method for Evaluating Constructed Responses

    ERIC Educational Resources Information Center

    Attali, Yigal

    2014-01-01

    This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…

  16. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  17. Multi-Method Evaluation of College Teaching

    ERIC Educational Resources Information Center

    Algozzine, Bob; Beattie, John; Bray, Marty; Flowers, Claudia; Gretes, John; Mohanty, Ganesh; Spooner, Fred

    2010-01-01

    Student evaluation of instruction in college and university courses has been a routine and mandatory part of undergraduate and graduate education for some time. A major shortcoming of the process is that it relies exclusively on the opinions or qualitative judgments of students rather than on assessing the learning or transfer of knowledge that…

  18. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key problematic aspects of the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat-bottomed cylindrical container onto a plate. In this case study, particle properties such as Young's modulus, friction parameters and the coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both the final angle of repose and the FR, followed by the inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and can then potentially facilitate their selection or calibration. It is concluded that shortening the process of input parameter selection and calibration can help in the implementation of DEM.

  19. Dem Extraction from CHANG'E-1 Lam Data by Surface Skinning Technology

    NASA Astrophysics Data System (ADS)

    Zhang, X.-B.; Zhang, W.-M.

    2011-08-01

    A DEM is a digital model or 3-D representation of a terrain's surface, created from terrain elevation data. The main models for DEM extraction based on lidar or laser altimeter data currently rely on the idea that the point cloud is scattered; examples are the regular grid model, the TIN model and the contour model. Essentially, in these methods the discrete points are interpolated into regular or irregular grid data. In fact, a point cloud generated by a laser altimeter is not totally scattered, but has some regularity. In this paper, to utilize this regularity, the proposed method adopts surface skinning technology to generate a DEM from Chang'E-1 Laser Altimeter data. Surface skinning technology is widely used in the field of mechanical engineering. Surface skinning is the process of passing a smooth surface through a set of curves called sectional curves, which, in general, may not be compatible. When generating the sectional curves, a curvature-based method is needed to obtain a set of characteristic points, and these feature points are used to subdivide the segments; the next step is to generate several curves at key places. These curves describe the shape of the curved surface. The last step is to generate a curved surface that passes through these curves. The results show that this idea is feasible and useful, and that it provides a novel way to generate an accurate DEM.

  20. DEM time series of an agricultural watershed

    NASA Astrophysics Data System (ADS)

    Pineux, Nathalie; Lisein, Jonathan; Swerts, Gilles; Degré, Aurore

    2014-05-01

    In agricultural landscapes, the soil surface evolves notably due to erosion and deposition. Even though most field data come from plot-scale studies, the watershed scale seems more appropriate for understanding these processes. Currently, small unmanned aircraft systems and image-processing techniques are improving rapidly, allowing 3D models to be built from multiple overlapping shots. Where techniques for large areas would be too expensive for a watershed-level study and techniques for small areas too time-consuming, unmanned aerial systems are a promising solution for quantifying erosion and deposition patterns. Ongoing technical improvements in this growing field provide very good data quality and very high spatial resolution with high vertical accuracy. In the center of Belgium, we equipped an agricultural watershed of 124 ha. For three years (2011-2013), we have been monitoring weather (including rainfall erosivity using a spectropluviograph), discharge at three different locations, sediment in runoff water, and watershed microtopography through unmanned airborne imagery (Gatewing X100). We also collected all available historical data to try to capture the "long-term" changes in watershed morphology during the last decades: old topographic maps, historical soil descriptions, etc. An erosion model (LANDSOIL) is also used to assess the evolution of the relief. Short-term evolution of the surface is now observed through flights at 200 m height. The pictures are taken with a side overlap of 80%. To precisely georeference the DEM produced, ground control points are placed on the study site and surveyed using a Leica GPS1200 (accuracy of 1 cm for the x and y coordinates and 1.5 cm for the z coordinate). Flights are made each year in December to have as bare a ground surface as possible. Specific treatments are being developed to counteract the effect of vegetation, which is known as a key source of error in DEMs produced by small unmanned aircraft

  1. An evaluation of two VTO methods.

    PubMed

    Sample, L B; Sadowsky, P L; Bradley, E

    1998-10-01

    A sample of 34 growing Class II patients was used to assess the reliability of manual and computer-generated visual treatment objectives (VTOs) when compared with the actual treatment results. Skeletal, dental, and soft tissue measurements were performed on the VTO and on the posttreatment tracings. Using paired t-tests and Pearson correlation coefficients, comparisons were made between the VTO and posttreatment tracings. Both the manual and computer VTO methods were accurate when predicting skeletal changes that occurred during treatment. However, both methods were only moderately successful in forecasting dental and soft tissue alterations during treatment. Only slight differences were seen between the manual and computer VTO methods, with the computer being slightly more accurate with the soft tissue prediction. However, the differences between the two methods were not judged to be clinically significant. Overall, the prediction tracings were accurate to only a moderate degree, with marked individual variation evident throughout the sample. PMID:9770097

  2. Novel methods to evaluate fracture risk models

    PubMed Central

    Donaldson, M.G.; Cawthon, P. M.; Schousboe, J.T.; Ensrud, K.E.; Lui, L.Y.; Cauley, J.A.; Hillier, T.A.; Taylor, B.C.; Hochberg, M.C.; Bauer, D.C.; Cummings, S.R.

    2013-01-01

    Fracture prediction models help identify individuals at high risk who may benefit from treatment. Area Under the Curve (AUC) is used to compare prediction models. However, the AUC has limitations and may miss important differences between models. Novel reclassification methods quantify how accurately models classify patients who benefit from treatment and the proportion of patients above/below treatment thresholds. We applied two reclassification methods, using the NOF treatment thresholds, to compare two risk models: femoral neck BMD and age (“simple model”) and FRAX (“FRAX model”). The Pepe method classifies based on case/non-case status and examines the proportion of each above and below thresholds. The Cook method examines fracture rates above and below thresholds. We applied these to the Study of Osteoporotic Fractures. There were 6036 (1037 fractures) and 6232 (389 fractures) participants with complete data for major osteoporotic and hip fracture, respectively. Both models for major osteoporotic fracture (0.68 vs. 0.69) and hip fracture (0.75 vs. 0.76) had similar AUCs. In contrast, using reclassification methods, each model classified a substantial number of women differently. Using the Pepe method, the FRAX model (vs. simple model) missed treating 70 (7%) cases of major osteoporotic fracture but avoided treating 285 (6%) non-cases. For hip fracture, the FRAX model missed treating 31 (8%) cases but avoided treating 1026 (18%) non-cases. The Cook method (both models, both fracture outcomes) had similar fracture rates above/below the treatment thresholds. Compared with the AUC, new methods provide more detailed information about how models classify patients. PMID:21351143
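
    A Pepe-style tabulation against a single treatment threshold can be sketched as follows; the counting and the key names are illustrative, not the paper's exact tabulation:

        import numpy as np

        def pepe_reclassification(risk_a, risk_b, is_case, threshold):
            """Count cases and non-cases that switch sides of the treatment
            threshold when moving from model A to model B."""
            a, b = risk_a >= threshold, risk_b >= threshold
            case = is_case.astype(bool)
            return {
                "cases_missed_by_b": int(np.sum(case & a & ~b)),
                "cases_gained_by_b": int(np.sum(case & ~a & b)),
                "noncases_spared_by_b": int(np.sum(~case & a & ~b)),
                "noncases_added_by_b": int(np.sum(~case & ~a & b)),
            }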

  3. [Evaluation of preventive measures using scientific methods].

    PubMed

    Göhlen, Britta; Bossmann, Hildegard

    2010-10-01

    The evaluation of preventive and health activities is, due to ethical and financial considerations, gaining increasing importance. But how can it be assured at a high scientific level? A tool that suits this purpose is Health Technology Assessment (HTA). Provided that the appropriate methodology is selected and the scientific literature is evaluated, HTA can help to appraise the outcomes of preventive activities. The German Institute of Medical Documentation and Information publishes HTA reports on behalf of the Federal Health Ministry; among other topics, these deal with prevention. Examples from 2009 are reports on vaccination against human papillomavirus and on non-drug secondary prevention of coronary heart disease. PMID:20981592

  4. The Study on Educational Technology Abilities Evaluation Method

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Traditional evaluation methods rely on tests that do not really measure what we want to measure, so the test results cannot serve as a sound basis for evaluation. The system described here makes full use of educational technology and is grounded in educational and psychological theory. Taking the educational technology abilities of primary and secondary school teachers as the object and goal of evaluation, it uses a variety of evaluation methods and tools to establish, from various angles, an informal evaluation system.

  5. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... where there is a human intrusion as specified by 10 CFR 63.322. DOE will model the performance of the... 10 Energy 4 2010-01-01 2010-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will...

  6. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... where there is a human intrusion as specified by 10 CFR 63.322. DOE will model the performance of the... 10 Energy 4 2011-01-01 2011-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will...

  7. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... where there is a human intrusion as specified by 10 CFR 63.322. DOE will model the performance of the... 10 Energy 4 2014-01-01 2014-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will...

  8. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children.

    PubMed

    Xie, Yachun; Shi, Chunmei; Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the "vertical scores" were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time

  9. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children

    PubMed Central

    Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the “vertical scores” were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical

  10. Zusatz- und Weiterqualifikation nach dem Studium

    NASA Astrophysics Data System (ADS)

    Domnick, Ivonne

    Once the Bachelor's degree is completed, the question of further qualification arises. Besides entering working life directly, a Master's programme may add further decisive bonus points to one's CV. With additional qualifications from outside one's own field, such as business administration or marketing, it is easier for scientists to get started in working life. Many employers particularly like to see a doctorate in scientists; here it should be weighed carefully whether it can be completed within a certain time span. Even after starting a job, the doctorate can, under certain circumstances, still be obtained later. Further training alongside the job, part-time or by distance learning, is also possible. In addition, many private providers offer courses lasting several weeks or months in which basic knowledge of business administration can be acquired.

  11. Test methods for evaluating reformulated fuels

    SciTech Connect

    Croudace, M.C.

    1994-12-31

    The US Environmental Protection Agency (EPA) introduced regulations in the 1989 Clean Air Act Amendment governing the reformulation of gasoline and diesel fuels to improve air quality. These statutes drove the need for a fast and accurate method for analyzing product composition, especially aromatic and oxygenate content. The current method, gas chromatography, is slow, expensive, non-portable, and requires a trained chemist to perform the analysis. The new mid-infrared spectroscopic method uses light to identify and quantify the different components in fuels. Each individual fuel component absorbs a specific wavelength of light depending on the molecule's unique chemical structure. The quantity of light absorbed is proportional to the concentration of that fuel component in the mixture. The mid-infrared instrument has significant advantages; it is easy to use, rugged, portable, fully automated and cost-effective. It can be used to measure multiple oxygenate or aromatic components in unknown fuel mixtures. Regulatory agencies have begun using this method in field compliance testing; petroleum refiners and marketers use it to monitor compliance, product quality and blending accuracy.

  12. EVALUATION OF POHC AND PIC SCREENING METHODS

    EPA Science Inventory

    A recurring theme in environmental work is the need to characterize emissions to the maximum extent at the minimum cost. Unfortunately, many projects have been carried out in the past with little thought or planning concerning the optimum application of analytical methods availab...

  13. Animal Methods for Evaluating Forage Quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Numerous methods are available that employ animals in the assessment of forage quality. Some of these procedures provide information needed to address very specific goals (e.g., monitoring protein adequacy), some serve as useful contributors to the efforts to accurately predict nutritive value, wher...

  14. Evaluation of Electrochemical Methods for Electrolyte Characterization

    NASA Technical Reports Server (NTRS)

    Heidersbach, Robert H.

    2001-01-01

    This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.

  15. The Evaluation of Flammability Properties Regarding Testing Methods

    NASA Astrophysics Data System (ADS)

    Osvaldová, Linda Makovická; Gašpercová, Stanislava

    2015-12-01

    In this paper, we compare historical and current methods for assessing the flammability characteristics of materials, especially wood, wood components and wooden buildings. Nowadays the European Union is harmonizing evaluation standards across its member countries, aiming at a single concept for evaluating flammability properties, so that each country uses one standard level when evaluating materials for flammability. We focus mainly on improving evaluation methods for the flammability characteristics of materials used in the building industry. We present examples of different assessment methods and their associated test methods from the standpoint of fire prevention, comparing older material fire tests under the STN, BS and DIN methods with new methods of evaluating flammability properties under EU standards, before and after flashover.

  16. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  17. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  18. Extraction of Hydrological Proximity Measures from DEMs using Parallel Processing

    SciTech Connect

    Tesfa, Teklu K.; Tarboton, David G.; Watson, Daniel W.; Schreuders, Kimberly A.; Baker, Matthew M.; Wallace, Robert M.

    2011-12-01

    Land surface topography is one of the most important terrain properties impacting hydrological, geomorphological, and ecological processes active on a landscape. In our previous efforts to develop a soil depth model based upon topographic and land cover variables, we extracted a set of hydrological proximity measures (HPMs) from a Digital Elevation Model (DEM) as potential explanatory variables for soil depth. These HPMs may also have other, more general modeling applicability in hydrology, geomorphology and ecology, and so are described here from a general perspective. The HPMs we derived are variations of the distance up to ridge points (cells with no incoming flow) and variations of the distance down to stream points (cells with a contributing area greater than a threshold), following the flow path. These HPMs were computed using the D-infinity flow model that apportions flow between adjacent neighbors based on the direction of steepest downward slope on the eight triangular facets constructed in a 3 x 3 grid cell window using the center cell and each pair of adjacent neighboring grid cells in turn. The D-infinity model typically results in multiple flow paths between two points on the topography, with the result that distances may be computed as the minimum, maximum or average of the individual flow paths. In addition, each of the HPMs is calculated vertically, horizontally, and along the land surface. Previously, these HPMs were calculated using recursive serial algorithms which suffered from stack overflow problems when used to process large datasets, limiting the size of DEMs that could be analyzed using that method to approximately 7000 x 7000 cells. To overcome this limitation, we developed a message passing interface (MPI) parallel approach for calculating these HPMs. The parallel algorithms of the HPMs spatially partition the input grid into stripes which are each assigned to separate processes for computation. Each of those processes then uses a
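
    A minimal sketch of how one such HPM can be computed serially, assuming a D8 simplification in place of the authors' D-infinity model (and omitting the MPI striping): the vertical drop down to the stream network is resolved with memoization so that each cell's flow path is walked only once. All names are illustrative.

      import numpy as np

      # D8 neighbour offsets (E, NE, N, NW, W, SW, S, SE); the paper's
      # D-infinity model splits flow between two neighbours instead.
      OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                 (0, -1), (1, -1), (1, 0), (1, 1)]

      def vertical_drop_to_stream(dem, flow_dir, is_stream):
          """Vertical distance down to the stream network along the flow path.

          dem       : 2-D elevation grid (assumed pit-filled, so flow is acyclic)
          flow_dir  : index 0-7 into OFFSETS per cell, or -1 for no direction
          is_stream : boolean grid marking stream cells (contributing area
                      above a threshold)
          """
          rows, cols = dem.shape
          drop = np.full((rows, cols), np.nan)
          drop[is_stream] = 0.0

          for r0 in range(rows):
              for c0 in range(cols):
                  # Walk downslope until a cell with a known drop is reached.
                  path, r, c, ok = [], r0, c0, True
                  while np.isnan(drop[r, c]):
                      d = flow_dir[r, c]
                      if d < 0:
                          ok = False      # path never reaches a stream: leave NaN
                          break
                      nr, nc = r + OFFSETS[d][0], c + OFFSETS[d][1]
                      if not (0 <= nr < rows and 0 <= nc < cols):
                          ok = False      # path exits the grid
                          break
                      path.append((r, c))
                      r, c = nr, nc
                  if ok:
                      # Unwind with memoization: each cell is resolved once.
                      for pr, pc in reversed(path):
                          d = flow_dir[pr, pc]
                          nr, nc = pr + OFFSETS[d][0], pc + OFFSETS[d][1]
                          drop[pr, pc] = drop[nr, nc] + dem[pr, pc] - dem[nr, nc]
          return drop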

  19. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  20. Synergy of Image and Digital Elevation Models (DEMs) Information for Virtual Reality

    NASA Astrophysics Data System (ADS)

    Maire, C.; Datcu, M.

    2004-09-01

    In the framework of 3D visualization and real-time rendering of large remote sensing image databases, several signal processing techniques are presented and evaluated to filter/enhance SAR Digital Elevation Models (DEMs). Through the SRTM DEM, the interest of InSAR data for such applications is illustrated. A non-stationary Bayesian filter is presented to remove noise and small artefacts which pervade the SAR DEM while preserving structures and information content. The results obtained are very good; nevertheless, large artefacts cannot be filtered and some artefacts remain. Therefore, image information has to be inserted to produce more realistic views. This second step is done by using a segmentation algorithm on the image data. By a topology analysis, the extracted objects are classified/stored in a tree structure to describe the topologic relations between the objects and reflect their interdependencies. An interactive learning procedure is carried out through a Graphical User Interface to link the signal classes to the semantic ones, i.e. to include human knowledge in the system. The selected information in the form of objects is merged/fused into the DEM by assigning regularisation constraints.

  1. Evaluations of Three Methods for Remote Training

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Chmielewski, C.; Pandya, A.; Adolf, J.; Whitmore, M.; Berman, A.; Maida, J.

    1999-01-01

    Long duration space missions require a change in training methods and technologies. For Shuttle missions, crew members could train for all the planned procedures, and carry documentation of planned procedures for a variety of contingencies. As International Space Station (ISS) missions of three months or longer are carried out, many more tasks will need to be performed for which little or no training was received prior to launch. Eventually, exploration missions will last several years, and communications with Earth will have long time delays or be impossible at times. This series of three studies was performed to identify the advantages and disadvantages of three types of training for self-instruction: video-conferencing; multimedia; and virtual reality. These studies each compared two types of training methods, on two different types of tasks. In two of the studies, the subjects were in an isolated, confined environment analogous to space flight; the third study was performed in a laboratory.

  2. Evaluation of toothbrush disinfection via different methods.

    PubMed

    Basman, Adil; Peker, Ilkay; Akca, Gulcin; Alkurt, Meryem Toraman; Sarikir, Cigdem; Celik, Irem

    2016-01-01

    The aim of this study was to compare the efficacy of using a dishwasher or different chemical agents, including 0.12% chlorhexidine gluconate, 2% sodium hypochlorite (NaOCl), a mouthrinse containing essential oils and alcohol, and 50% white vinegar, for toothbrush disinfection. Sixty volunteers were divided into five experimental groups and one control group (n = 10). Participants brushed their teeth using toothbrushes with standard bristles, and they disinfected the toothbrushes according to instructed methods. Bacterial contamination of the toothbrushes was compared between the experimental groups and the control group. Data were analyzed by Kruskal-Wallis and Duncan's multiple range tests, with 95% confidence intervals for multiple comparisons. Bacterial contamination of toothbrushes from individuals in the experimental groups differed from those in the control group (p < 0.05). The most effective method for elimination of all tested bacterial species was 50% white vinegar, followed in order by 2% NaOCl, mouthrinse containing essential oils and alcohol, 0.12% chlorhexidine gluconate, dishwasher use, and tap water (control). The results of this study show that the most effective method for disinfecting toothbrushes was submersion in 50% white vinegar, which is cost-effective, easy to access, and appropriate for household use. PMID:26676193

  3. Evaluation criteria and test methods for electrochromic windows

    SciTech Connect

    Czanderna, A.W. ); Lampert, C.M. )

    1990-07-01

    This report summarizes the test methods used for evaluating electrochromic (EC) windows, summarizes what is known about degradation of their performance, and recommends methods and procedures for advancing EC windows for buildings applications. 77 refs., 13 figs., 6 tabs.

  4. Method of evaluating subsurface fracturing operations

    SciTech Connect

    Soliman, M.Y.

    1989-06-06

    This patent describes a method of determining parameters of a subsurface operation fracturing an earth formation, comprising: fracturing the formation with a fracturing fluid; determining a first pressure decline value representative of the observed pressure decline of the fractured formation over a time interval, the first pressure decline value being functionally related to the properties of the fracturing fluid during the fracturing of the formation; determining a second pressure decline value representative of the pressure decline which should have been observed if the fracturing fluid were incompressible; and determining the parameters of the fracturing operation in response to the pressure decline values.

  5. Organic ion exchange resin separation methods evaluation

    SciTech Connect

    Witwer, K.S.

    1998-05-27

    This document describes testing to find effective methods to separate Organic Ion Exchange Resin (OIER) from a sludge simulant. This task supports a comprehensive strategy for treatment and processing of K-Basin sludge. The simulant to be used resembles sludge that has accumulated in the 105KE and 105KW Basins in the 100K Area of the Hanford Site. The sludge is an accumulation of fuel element corrosion products, organic and inorganic ion exchange materials, canister gasket materials, iron and aluminum corrosion products, sand, dirt, and other minor amounts of organic matter.

  6. Lava emplacements at Shiveluch volcano (Kamchatka) from June 2011 to September 2014 observed by TanDEM-X SAR-Interferometry

    NASA Astrophysics Data System (ADS)

    Heck, Alexandra; Kubanek, Julia; Westerhaus, Malte; Gottschämmer, Ellen; Heck, Bernhard; Wenzel, Friedemann

    2016-04-01

    As part of the Ring of Fire, Shiveluch volcano is one of the largest and most active volcanoes on the Kamchatka Peninsula. During the Holocene, only the southern part of the Shiveluch massif was active. Since the last Plinian eruption in 1964, the activity of Shiveluch has been characterized by periods of dome growth and explosive eruptions. The recent active phase began in 1999 and continues until today. Due to the special conditions at active volcanoes, such as smoke development, danger of explosions or lava flows, as well as poor weather conditions and inaccessible terrain, it is difficult to observe the interaction between dome growth, dome destruction, and explosive eruptions at regular intervals. Consequently, a reconstruction of the eruption processes is hardly possible, though important for a better understanding of the eruption mechanism as well as for hazard forecasting and risk assessment. A new approach is provided by the bistatic radar data acquired by the TanDEM-X satellite mission. This mission is composed of two nearly identical satellites, TerraSAR-X and TanDEM-X, flying in a close helix formation. On the one hand, the radar signals penetrate clouds and, partially, vegetation and snow, given the average wavelength of about 3.1 cm. On the other hand, in comparison with conventional InSAR methods, the bistatic radar mode has the advantage that there are no difficulties due to temporal decorrelation. By interferometric evaluation of the simultaneously recorded SAR images, it is possible to calculate high-resolution digital elevation models (DEMs) of Shiveluch volcano and its surroundings. Furthermore, the short recurrence interval of 11 days allows the generation of time series of DEMs, from which volumetric changes of the dome and of lava flows can be determined, as well as lava effusion rates. Here, this method is applied to Shiveluch volcano based on data acquired between June 2011 and September 2014. Although Shiveluch has a fissured topography with steep slopes
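
    Once two such DEMs are co-registered on a common grid, the volumetric change reduces to summing elevation differences over the cell area; a minimal sketch under that assumption (function and argument names are ours, not the authors'):

      import numpy as np

      def volume_change(dem_t0, dem_t1, cell_size, mask=None):
          """Net volumetric change between two co-registered DEMs.

          dem_t0, dem_t1 : 2-D elevation arrays (m) on the same grid
          cell_size      : grid spacing (m); each cell covers cell_size**2 m^2
          mask           : optional boolean array restricting the computation
                           (e.g. to the dome or a lava flow)
          """
          dh = dem_t1 - dem_t0
          if mask is not None:
              dh = np.where(mask, dh, 0.0)
          return np.nansum(dh) * cell_size ** 2   # m^3

      # Mean effusion rate between two acquisitions 11 days apart:
      # rate = volume_change(dem_a, dem_b, 12.0) / (11 * 86400)   # m^3/s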

  7. Mixing equilibrium in two-density fluidized beds by DEM

    NASA Astrophysics Data System (ADS)

    Di Renzo, A.; Di Maio, F. P.

    2010-05-01

    Interaction of fluid and granular flows in dense two-phase systems is responsible for the significantly different behavior of units used in the chemical industry, such as fluidized beds. The momentum exchange phenomena involved during gas fluidization of a binary mixture of solids differing in density are such that the continuous mixing action of the fluid flowing upwards counteracts the natural tendency of the two (fluidized) solids to segregate, with the heavier component fully settling at the bottom of the bed. In the present work the complex hydrodynamics of two-density gas-fluidized beds is studied by means of a DEM-CFD computational approach, combining the discrete element method (DEM) and a solution of the locally averaged equations of motion (CFD). The model is first validated against experimental data and then used to investigate the role of gas velocity versus the density ratio of the two components in determining the distribution of the components in the system. It is shown first that a unique equilibrium composition profile is reached, independent of the initial arrangements of the solids. Then, numerical simulations are used to find the equilibrium conditions of mixing/segregation as a function of the gas velocity in excess of the minimum fluidization velocity of the heavier component and as a function of the density ratio of the two solid species. A mixing map on the gas velocity-density ratio plane is finally reconstructed by plotting iso-mixing lines; it shows quantitatively how conditions ranging from fully mixed to fully segregated components are obtained.
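
    The abstract does not name the mixing measure behind the iso-mixing lines; one common quantification for binary beds is the Lacey index, sketched here under that assumption:

      import numpy as np

      def lacey_index(cell_fractions, n_per_cell):
          """Lacey mixing index for a binary mixture:
          0 = fully segregated, 1 = randomly (perfectly) mixed.

          cell_fractions : heavy-component number fraction in each sampling cell
          n_per_cell     : number of particles per sampling cell
          """
          p = np.mean(cell_fractions)            # overall heavy fraction
          s2 = np.var(cell_fractions)            # observed variance
          s2_seg = p * (1.0 - p)                 # fully segregated limit
          s2_rand = p * (1.0 - p) / n_per_cell   # random-mixture limit
          return (s2_seg - s2) / (s2_seg - s2_rand)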

  8. Explosive materials equivalency, test methods and evaluation

    NASA Technical Reports Server (NTRS)

    Koger, D. M.; Mcintyre, F. L.

    1980-01-01

    Attention is given to concepts of explosive equivalency of energetic materials based on specific airblast parameters. A description is provided of a wide bandwidth high accuracy instrumentation system which has been used extensively in obtaining pressure time profiles of energetic materials. The object of the considered test method is to determine the maximum output from the detonation of explosive materials in terms of airblast overpressure and positive impulse. The measured pressure and impulse values are compared with known characteristics of hemispherical TNT data to determine the equivalency of the test material in relation to TNT. An investigation shows that meaningful comparisons between various explosives and a standard reference material such as TNT should be based upon the same parameters. The tests should be conducted under the same conditions.

  9. Household batteries: Evaluation of collection methods

    SciTech Connect

    Seeberger, D.A.

    1992-12-31

    While it is difficult to prove that a specific material is causing contamination in a landfill, tests have been conducted at waste-to-energy facilities that indicate that household batteries contribute significant amounts of heavy metals to both air emissions and ash residue. Hennepin County, MN, used a dual approach for developing and implementing a special household battery collection. Alternative collection methods were examined; test collections were conducted. The second phase examined operating and disposal policy issues. This report describes the results of the grant project, moving from a broad examination of the construction and content of batteries, to a description of the pilot collection programs, and ending with a discussion of variables affecting the cost and operation of a comprehensive battery collection program. Three out-of-state companies (PA, NY) were found that accept spent batteries; difficulties in reclaiming household batteries are discussed.

  11. Mixed Methods and Credibility of Evidence in Evaluation

    ERIC Educational Resources Information Center

    Mertens, Donna M.; Hesse-Biber, Sharlene

    2013-01-01

    We argue for a view of credible evidence that is multidimensional in philosophical and methodological terms. We advocate for the importance of deepening the meaning of credible evaluation practice and findings by bringing multiple philosophical and theoretical lenses to the evaluation process as a basis for the use of mixed methods in evaluation,…

  12. Integrating Qualitative and Quantitative Evaluation Methods in Substance Abuse Research.

    ERIC Educational Resources Information Center

    Dennis, Michael L.; And Others

    1994-01-01

    Some specific opportunities and techniques are described for combining and integrating qualitative and quantitative methods from the design stage of a substance abuse program evaluation through implementation and reporting. The multiple problems and requirements of such an evaluation make integrated methods essential. (SLD)

  13. EPA METHODS FOR EVALUATING WETLAND CONDITION, WETLANDS CLASSIFICATION

    EPA Science Inventory

    In 1999, the U.S. Environmental Protection Agency (EPA) began work on this series of reports entitled Methods for Evaluating Wetland Condition. The purpose of these reports is to help States and Tribes develop methods to evaluate 1) the overall ecological condition of wetlands us...

  14. Using Developmental Evaluation Methods with Communities of Practice

    ERIC Educational Resources Information Center

    van Winkelen, Christine

    2016-01-01

    Purpose: This paper aims to explore the use of developmental evaluation methods with community of practice programmes experiencing change or transition to better understand how to target support resources. Design/methodology/approach: The practical use of a number of developmental evaluation methods was explored in three organizations over a…

  15. Qualitative Methods in Family Evaluation: Creative Assessment Techniques.

    ERIC Educational Resources Information Center

    Deacon, Sharon A.; Piercy, Fred P.

    2001-01-01

    Discusses the role that experiential therapy methods can play in qualitative family assessment. It is believed that these methods can be quite helpful in engaging families in a collaborative evaluation process. The advantages of qualitative assessment are presented as a complement to more quantitative family evaluation measures. (BF)

  16. A quick algorithm of counting flow accumulation matrix for deriving drainage networks from a DEM

    NASA Astrophysics Data System (ADS)

    Wang, Yanping; Liu, Yonghe; Xie, Hongbo; Xiang, ZhongLin

    2011-06-01

    Computerized auto-extraction of drainage networks from Digital Elevation Models (DEMs) has been widely used in hydrological modeling and related studies. Several essential procedures need to be implemented in the eight-directional (D8) watershed delineation method; a problem still to be resolved is the lack of a high-efficiency algorithm for quick and accurate computation of the flow accumulation matrix involved in river network delineation. (The problem of depression filling has been resolved by the algorithm presented by Oliver Planchon.) This study aimed to develop a simple and quick algorithm for flow accumulation matrix computations. For this purpose, a simple, high-efficiency algorithm with time complexity O(n), compared to the commonly used code with time complexity O(n^2) or O(n log n), has been developed. Performance tests of this newly developed algorithm were conducted for DEMs of different sizes, and the results suggest that the algorithm has linear time complexity with increasing DEM size. The computational efficiency of this newly developed algorithm is many times higher than that of commonly used code: for a DEM of size 1000 x 1000, the flow accumulation matrix can be computed within only several seconds, compared with the few minutes needed by commonly used algorithms.
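
    The paper's code is not reproduced in the abstract; one standard way to reach O(n) is to process cells in topological order of the D8 flow directions, so that each cell is visited exactly once. A sketch of that idea (ours, not necessarily the authors' implementation):

      import numpy as np
      from collections import deque

      # D8 neighbour offsets (E, NE, N, NW, W, SW, S, SE)
      OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                 (0, -1), (1, -1), (1, 0), (1, 1)]

      def flow_accumulation(flow_dir):
          """O(n) flow accumulation on a D8 flow-direction grid.

          flow_dir : index 0-7 into OFFSETS per cell (-1 for outlets).
          Returns the number of cells draining through each cell,
          counting the cell itself.
          """
          rows, cols = flow_dir.shape
          acc = np.ones((rows, cols), dtype=np.int64)
          indeg = np.zeros((rows, cols), dtype=np.int32)

          def downstream(r, c):
              d = flow_dir[r, c]
              if d < 0:
                  return None
              nr, nc = r + OFFSETS[d][0], c + OFFSETS[d][1]
              return (nr, nc) if 0 <= nr < rows and 0 <= nc < cols else None

          # Count incoming flow links for every cell.
          for r in range(rows):
              for c in range(cols):
                  nxt = downstream(r, c)
                  if nxt:
                      indeg[nxt] += 1

          # Topological sweep from ridge cells (no inflow) downstream;
          # each cell is enqueued and dequeued exactly once.
          queue = deque((r, c) for r in range(rows) for c in range(cols)
                        if indeg[r, c] == 0)
          while queue:
              r, c = queue.popleft()
              nxt = downstream(r, c)
              if nxt:
                  acc[nxt] += acc[r, c]
                  indeg[nxt] -= 1
                  if indeg[nxt] == 0:
                      queue.append(nxt)
          return acc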

  17. A simplified DEM-CFD approach for pebble bed reactor simulations

    SciTech Connect

    Li, Y.; Ji, W.

    2012-07-01

    In pebble bed reactors (PBRs), the pebble flow and the coolant flow are coupled with each other through coolant-pebble interactions. Approaches with different fidelities have been proposed to simulate such phenomena. Coupled Discrete Element Method-Computational Fluid Dynamics (DEM-CFD) approaches are widely studied and applied to these problems due to their good balance between efficiency and accuracy. In this work, based on the symmetry of the PBR geometry, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without significant loss of accuracy. Pebble flow is simulated by a full 3-D DEM, while the coolant flow field is calculated with a 2-D CFD simulation by averaging variables along the annular direction in the cylindrical geometry. Results show that this simplification can greatly enhance efficiency for a cylindrical core, which enables the further inclusion of other physics, such as thermal and neutronic effects, in multi-physics simulations for PBRs. (authors)

  18. Extracting DEM from airborne X-band data based on PolInSAR

    NASA Astrophysics Data System (ADS)

    Hou, X. X.; Huang, G. M.; Zhao, Z.

    2015-06-01

    Polarimetric Interferometric Synthetic Aperture Radar (PolInSAR) is a new trend in SAR remote sensing technology that combines polarized multichannel information with interferometric information. It is of great significance for extracting DEMs in regions where DEM precision is low, such as vegetation-covered areas and areas with concentrated buildings. In this paper we describe our experiments with high-resolution X-band fully polarimetric SAR data acquired by a dual-baseline interferometric airborne SAR system over an area of Danling in southern China. The Pauli algorithm is used to generate the dual-polarimetric interferometry data; Singular Value Decomposition (SVD), Numerical Radius (NR), and Phase Diversity (PD) methods are used to generate the fully polarimetric interferometry data. The polarimetric interferometric information is then used to extract DEMs through pre-filtering, image registration, image resampling, coherence optimization, multilook processing, flat-earth removal, interferogram filtering, phase unwrapping, parameter calibration, height derivation, and geo-coding. The processing system, named SARPlore, has been developed in VC++ under the lead of the Chinese Academy of Surveying and Mapping. Finally, comparing the optimization results with single-polarimetric interferometry, it has been observed that the optimization methods can reduce interferometric noise and phase-unwrapping residuals, and improve the precision of the DEM. The result of fully polarimetric interferometry is better than that of dual-polarimetric interferometry. Meanwhile, over different terrain, the result of fully polarimetric interferometry shows different degrees of improvement.

  19. Systems and methods for circuit lifetime evaluation

    NASA Technical Reports Server (NTRS)

    Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)

    2013-01-01

    Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes, using a computer, to determine the lifetime at which the worst case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error, given the environment experienced by the system during its lifetime. Performing WCA on a system with respect to a specific system lifetime includes: identifying subcircuits within the system; performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime; when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime; when a subcircuit fails EVA, performing at least one additional WCA process that provides a tighter bound than EVA to determine whether the subcircuit fails WCA for the specified system lifetime; determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA; and determining that the system fails WCA when at least one subcircuit fails WCA.
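
    The iteration over candidate lifetimes can be organized as a simple search; the sketch below assumes, beyond the patent text, that WCA pass/fail is monotone in lifetime, so bisection converges on the zero-margin lifetime. passes_wca is a hypothetical callback wrapping the EVA/WCA cascade.

      def estimate_lifetime(passes_wca, lo_years=0.0, hi_years=30.0, tol=0.05):
          """Bisect for the longest lifetime that still passes Worst Case Analysis.

          passes_wca : hypothetical predicate running the EVA/WCA cascade for a
                       candidate lifetime; assumed True below some threshold and
                       False above it (monotone degradation).
          """
          while hi_years - lo_years > tol:
              mid = 0.5 * (lo_years + hi_years)
              if passes_wca(mid):
                  lo_years = mid      # still passes: threshold lies above
              else:
                  hi_years = mid      # fails: threshold lies below
          return lo_years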

  20. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadow maps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the largely unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete mathematically grounded treatment of this optical phenomenon has been presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
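
    For the integration step, a classic Fourier-domain choice is the Frankot-Chellappa least-squares scheme; the abstract does not say which variant the authors use, so the sketch below is illustrative only.

      import numpy as np

      def integrate_gradients(gx, gy):
          """Least-squares integration of a gradient field (Frankot-Chellappa).

          gx, gy : partial derivatives dz/dx and dz/dy on a regular grid.
          Returns the surface z up to an unknown additive constant.
          """
          rows, cols = gx.shape
          wx = 2.0 * np.pi * np.fft.fftfreq(cols)
          wy = 2.0 * np.pi * np.fft.fftfreq(rows)
          u, v = np.meshgrid(wx, wy)

          gx_f, gy_f = np.fft.fft2(gx), np.fft.fft2(gy)
          denom = u ** 2 + v ** 2
          denom[0, 0] = 1.0                 # avoid division by zero at DC
          z_f = (-1j * u * gx_f - 1j * v * gy_f) / denom
          z_f[0, 0] = 0.0                   # mean height is unobservable
          return np.real(np.fft.ifft2(z_f))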

  1. Validation of DEMs Derived from High Resolution SAR Data: a Case Study on Barcelona

    NASA Astrophysics Data System (ADS)

    Sefercik, U. G.; Schunert, A.; Soergel, U.; Watanabe, K.

    2012-07-01

    In recent years, Synthetic Aperture Radar (SAR) data have been widely used for scientific applications and several SAR missions have been realized. The active sensor principle and the signal wavelength on the order of centimeters provide all-day and all-weather capabilities, respectively. The modern German TerraSAR-X (TSX) satellite provides high spatial resolution down to one meter. Based on such data, SAR interferometry may yield high quality digital surface models (DSMs), which include points located on 3D objects such as vegetation, forest, and elevated man-made structures. By removing these points, a digital elevation model (DEM) representing the bare ground of the Earth is obtained. The primary objective of this paper is the validation of DEMs obtained from TSX SAR data covering the Barcelona area, Spain, in the framework of a scientific project conducted by ISPRS Working Group VII/2 "SAR Interferometry", which aims at the evaluation of DEMs derived from data of modern SAR satellite sensors. Towards this purpose, a DSM was generated with 10 m grid spacing using TSX StripMap mode SAR data and converted to a DEM by filtering. The accuracy results are presented with reference to a comparison against a more accurate (10 cm-1 m) digital terrain model (DTM) derived from large scale photogrammetry. The results showed that the TSX DEM is quite coherent with the topography and the accuracy is between ±8 and ±10 m. As another application, persistent scatterer interferometry (PSI) was conducted using TSX data and the outcomes were compared with a 3D city model available in Google Earth, which is known to be very precise because it is based on LIDAR data. The results showed that the PSI outcomes are quite coherent with the reference data and the RMSZ of the differences is around 2.5 m.

  2. Direct toxicity assessment - Methods, evaluation, interpretation.

    PubMed

    Gruiz, Katalin; Fekete-Kertész, Ildikó; Kunglné-Nagy, Zsuzsanna; Hajdu, Csilla; Feigl, Viktória; Vaszita, Emese; Molnár, Mónika

    2016-09-01

    Direct toxicity assessment (DTA) results provide the scale of the actual adverse effect of contaminated environmental samples. DTA results are used in environmental risk management of contaminated water, soil and waste without explicitly translating the results into chemical concentrations. The end points are the same as in environmental toxicology in general, i.e. inhibition rate, decrease in growth rate or yield, and the 'no effect' or 'lowest effect' measurement points of the sample dilution-response curve. The measurement unit cannot be a concentration, since the contaminants and their content in the sample are unknown. Thus toxicity is expressed as the sample proportion causing a certain scale of inhibition, or no inhibition. Another option for characterizing the scale of toxicity of an environmental sample is equivalencing. Toxicity equivalencing is an interpretation tool which enables the toxicity of unknown mixtures of chemicals to be converted into the concentration of an equivalently toxic reference substance. Toxicity equivalencing, i.e. expressing the toxicity of unknown contaminants as the concentration of the reference, makes DTA results more understandable for non-ecotoxicologists and other professionals educated in and thinking in terms of the chemical model. This paper describes and discusses the role, the principles, the methodology and the interpretation of direct toxicity assessment (DTA) with the aim of contributing to the understanding of the necessity to integrate DTA results into the environmental management of contaminated soil and water. The paper also introduces the benefits of the toxicity equivalency method. The use of DTA is illustrated through two case studies. The first case study focuses on DTA of treated wastewater with the aim of characterizing the treatment efficacy of a biological wastewater treatment plant by frequent bioassaying. The second case study applied DTA to investigate the cover layers of two bauxite residue (red mud

  3. Stream Morphologic Measurements from Airborne Laser Swath Mapping: Comparisons with Field Surveys, Traditional DEMs, and Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Snyder, N. P.; Schultz, L. L.

    2005-12-01

    Precise measurement of stream morphology over entire watersheds is one of the great research opportunities provided by airborne laser swath mapping (ALSM). ALSM surveys allow for rapid quantification of factors, such as channel width and gradient, that control stream hydraulic and ecologic properties. We compare measurements from digital elevation models (DEMs) derived from ALSM data collected by the National Center for Airborne Laser Mapping (NCALM) to field surveys, traditional DEMs (rasterized from topographic maps), and aerial photographs. The field site is in the northern Black Mountains in arid Death Valley National Park (California). The area is unvegetated, and therefore is excellent for testing DEM analysis methods because the ALSM data required minimal filtering, and the resulting DEM contains relatively few unphysical sinks. Algorithms contained in geographic information systems (GIS) software used to extract stream networks from DEMs yield best results where streams are steep enough for resolvable pixel-to-pixel elevation change, and channel width is on the order of pixel resolution. This presents a new challenge with ALSM-derived DEMs because the pixel size (1 m) is often an order of magnitude or more smaller than channel width. We find the longitudinal profile of Gower Gulch in the northern Black Mountains (~4 km total length) extracted using the ALSM DEM and a flow accumulation algorithm is 14% longer than a traditional 10-m DEM, and 13% longer than a field survey. These differences in length (and therefore gradient) are due to the computed channel path following small-scale topographic variations within the channel bottom that are not relevant during high flows. However, visual analysis of shaded-relief images created from high-resolution ALSM data is an excellent method for digitizing channel banks and thalweg paths. We used these lines to measure distance, elevation, and width. In Gower Gulch, the algorithm-derived profile is 10% longer than that

  4. Deep Dive: Evaluation Methods for Electronic Health Records.

    PubMed

    Collins, Sarah

    2016-01-01

    Clinicians currently use electronic health records (EHR) which have often not been designed with the user in mind. Participatory design requires a thorough evaluation of the system using mixed methods. When different methods yield conflicting results, synthesis is challenging. This panel will present four cases of triangulation approaches to evaluate EHR usability and usage in multiple institutions. The audience will have a better idea how to triangulate results from multiple innovative methods such as the use of eye-tracking techniques and mixed methods approaches to evaluation. PMID:27332332

  5. Aspects of DEM Generation from UAS Imagery

    NASA Astrophysics Data System (ADS)

    Greiwe, A.; Gehrke, R.; Spreckels, V.; Schlienkamp, A.

    2013-08-01

    For a few years now, micro UAS (unmanned aerial systems) with vertical take-off and landing capabilities, such as quadro- or octocopters, have been used as sensor platforms for aerophotogrammetry. Given the restricted payload of micro UAS with a total weight of up to 5 kg (payload only up to 1.5 kg), these systems are often equipped with small-format cameras. These cameras can be classified as amateur cameras, and it is often the case that they do not meet the requirements of a geometrically stable camera for photogrammetric measurement purposes. However, once equipped with a suitable camera system, a UAS is an interesting alternative to expensive manned flights for small areas. The operating flight height of the UAS described above is about 50 up to 150 meters above ground level. On the one hand, this low flight height leads to a very high spatial resolution of the aerial imagery. Depending on the camera's focal length and the sensor's pixel size, the ground sampling distance (GSD) is usually about 1 up to 5 cm. This high resolution is especially useful for the automatic generation of homologous tie-points, which are a precondition for image alignment (bundle block adjustment). On the other hand, the image scale depends on the object's height and the UAS operating height. Objects like mine heaps or construction sites show large variations in object height. As a result, operating the UAS at a constant flying height will lead to large variations in the image scale, which causes problems for some processing approaches, e.g. automatic tie-point generation in stereo image pairs. As a precondition for all DEM generation approaches, a geometrically stable camera and sharp images are essential. Well-known calibration parameters are necessary for the bundle adjustment to control the exterior orientations. It can be shown that a simultaneous on-site camera calibration may lead to misaligned aerial images. Also, the success rate of an automatic tie-point generation
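
    The quoted GSD range follows directly from the imaging geometry; a one-function check with illustrative numbers (not taken from the paper):

      def ground_sampling_distance(pixel_size_um, focal_length_mm, height_m):
          """Nadir-view GSD in metres per pixel: pixel pitch scaled by the
          ratio of flying height to focal length."""
          return pixel_size_um * 1e-6 * height_m / (focal_length_mm * 1e-3)

      # e.g. a 4.4 um pixel behind a 15 mm lens flown at 100 m above ground:
      # ground_sampling_distance(4.4, 15, 100) -> ~0.029 m, i.e. ~3 cm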

  6. Evaluating Methods for Evaluating Instruction: The Case of Higher Education. NBER Working Paper No. 12844

    ERIC Educational Resources Information Center

    Weinberg, Bruce A.; Fleisher, Belton M.; Hashimoto, Masanori

    2007-01-01

    This paper studies methods for evaluating instruction in higher education. We explore student evaluations of instruction and a variety of alternatives. We develop a simple model to illustrate the biases inherent in student evaluations. Measuring learning using grades in future courses, we show that student evaluations are positively related to…

  7. EarthEnv-DEM90: A nearly-global, void-free, multi-scale smoothed, 90m digital elevation model from fused ASTER and SRTM data

    NASA Astrophysics Data System (ADS)

    Robinson, Natalie; Regetz, James; Guralnick, Robert P.

    2014-01-01

    A variety of DEM products are available to the public at no cost, though all are characterized by trade-offs in spatial coverage, data resolution, and quality. The absence of a high-resolution, high-quality, well-described and vetted, free, global consensus product was the impetus for the creation of a new DEM product described here, 'EarthEnv-DEM90'. This new DEM is a compilation dataset constructed via rigorous techniques by which ASTER GDEM2 and CGIAR-CSI v4.1 products were fused into a quality-enhanced, consistent grid of elevation estimates that spans ∼91% of the globe. EarthEnv-DEM90 was assembled using methods for seamlessly merging input datasets, thoroughly filling voids, and smoothing data irregularities (e.g. those caused by DEM noise) from the approximated surface. The result is a DEM product in which elevational artifacts are strongly mitigated from the input data fusion zone, substantial voids are filled in the northern-most regions of the globe, and the entire DEM exhibits reduced terrain noise. As important as the final product is a well defined methodology, along with new processing techniques and careful attention to final outputs, that extends the value and usability of the work beyond just this single product. Finally, we outline EarthEnv-DEM90 acquisition instructions and metadata availability, so that researchers can obtain this high-resolution, high-quality, nearly-global new DEM product for the study of wide-ranging global phenomena.

  8. Reanalysis of the DEMS Nested Case-Control Study of Lung Cancer and Diesel Exhaust: Suitability for Quantitative Risk Assessment

    PubMed Central

    Crump, Kenny S; Van Landingham, Cynthia; Moolgavkar, Suresh H; McClellan, Roger

    2015-01-01

    The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to “carcinogenic to humans.” The Diesel Exhaust in Miners Study (DEMS) cohort and nested case-control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC’s determination. We conducted a reanalysis of the DEMS case-control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE. PMID:25857246

  9. Handbook of test methods for evaluating chemical deicers

    SciTech Connect

    Chappelow, C.C.; McElroy, A.D.; Blackburn, R.R.; Darwin, D.; de Noyelles, F.G.

    1992-11-01

    The handbook contains a structured selection of specific test methods for complete characterization of deicing chemicals. Sixty-two specific test methods are defined for the evaluation of chemical deicers in eight principal property performance areas: (1) physicochemical characteristics; (2) deicing performance; (3) compatibility with bare and coated metals; (4) compatibility with metals in concrete; (5) compatibility with concrete and nonmetals; (6) engineering parameters; (7) ecological effects; and (8) health and safety aspects. The 62 specific chemical deicer test methods are composed of 12 primary and 50 supplementary test methods. The primary test methods, which were developed for conducting the more important evaluations, are identified in the report.

  10. A Preliminary Rubric Design to Evaluate Mixed Methods Research

    ERIC Educational Resources Information Center

    Burrows, Timothy J.

    2013-01-01

    With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…

  11. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    NASA Astrophysics Data System (ADS)

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-01

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models; the application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
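
    To make the path-sampling idea concrete, the following sketch estimates the log marginal likelihood of a toy conjugate-normal model, where the power posterior can be sampled exactly and the true value is known in closed form; a real application would replace the exact sampler with Markov chain Monte Carlo at each power coefficient. All modeling choices here are ours for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy conjugate model: x_i ~ N(theta, sigma2), prior theta ~ N(mu0, tau0_2).
      sigma2, mu0, tau0_2 = 1.0, 0.0, 4.0
      x = rng.normal(1.5, np.sqrt(sigma2), size=50)
      n, xbar = x.size, x.mean()
      S = np.sum((x - xbar) ** 2)

      def mean_log_lik(beta, draws=5000):
          """E[log L] under the power posterior p_beta ~ prior * L^beta
          (exactly normal for this conjugate model, so no MCMC is needed)."""
          prec = 1.0 / tau0_2 + beta * n / sigma2
          mean = (mu0 / tau0_2 + beta * n * xbar / sigma2) / prec
          theta = rng.normal(mean, np.sqrt(1.0 / prec), size=draws)
          ll = (-0.5 * n * np.log(2 * np.pi * sigma2)
                - 0.5 * ((x[:, None] - theta[None, :]) ** 2).sum(axis=0) / sigma2)
          return ll.mean()

      # Thermodynamic integration: log Z = integral_0^1 E_beta[log L] d(beta),
      # evaluated by the trapezoidal rule on a ladder denser near beta = 0.
      betas = np.linspace(0.0, 1.0, 21) ** 3
      m = np.array([mean_log_lik(b) for b in betas])
      log_z_ti = np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(betas))

      # Closed-form log marginal likelihood for comparison.
      v = sigma2 / n + tau0_2
      log_z_exact = (-0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * S / sigma2
                     + 0.5 * np.log(2 * np.pi * sigma2 / n)
                     - 0.5 * np.log(2 * np.pi * v) - 0.5 * (xbar - mu0) ** 2 / v)

      print(f"TI estimate: {log_z_ti:.3f}, exact: {log_z_exact:.3f}")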

  12. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then derived from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
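
    On the AHP side, criterion weights are conventionally taken from the principal eigenvector of the pairwise-comparison matrix, with a consistency index to flag contradictory judgements; the authors' matrices are not given in the abstract, so this sketch is generic.

      import numpy as np

      def ahp_weights(pairwise):
          """Priority weights from an AHP pairwise-comparison matrix.

          pairwise : reciprocal matrix, pairwise[i, j] = importance of
                     criterion i relative to criterion j.
          Returns (weights, consistency_index).
          """
          vals, vecs = np.linalg.eig(pairwise)
          k = np.argmax(vals.real)
          w = np.abs(vecs[:, k].real)
          w /= w.sum()
          n = pairwise.shape[0]
          ci = (vals[k].real - n) / (n - 1)   # 0 for perfectly consistent input
          return w, ci

      # Example: three criteria where the first dominates the other two.
      w, ci = ahp_weights(np.array([[1.0, 3.0, 5.0],
                                    [1/3, 1.0, 2.0],
                                    [1/5, 1/2, 1.0]]))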

  13. Fusion of space-borne multi-baseline and multi-frequency interferometric results based on extended Kalman filter to generate high quality DEMs

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojie; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa

    2016-01-01

    Repeat-pass Interferometric Synthetic Aperture Radar (InSAR) is a technique that can be used to generate DEMs. But the accuracy of InSAR is greatly limited by geometrical distortions, atmospheric effects, and decorrelation, particularly in mountainous areas such as western China, where no high quality DEM has so far been produced. Since each of the InSAR DEMs generated using data of different frequencies and baselines has its own advantages and disadvantages, it is very promising to overcome some of the limitations of InSAR by fusing Multi-baseline and Multi-frequency Interferometric Results (MMIRs). This paper proposes a fusion method based on the Extended Kalman Filter (EKF), which takes the InSAR-derived DEMs as states in the prediction step and the flattened interferograms as observations in the update step to generate the final fused DEM. Before the fusion, detection of layover and shadow regions, low-coherence regions, and regions with large height errors is carried out, because MMIRs in these regions are believed to be unreliable and are therefore excluded. The whole processing flow is tested with TerraSAR-X and Envisat ASAR datasets. Finally, the fused DEM is validated against the ASTER GDEM and the national standard DEM of China. The results demonstrate that the proposed method is effective even in low coherence areas.
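
    As a much-reduced illustration of the underlying update (not the authors' EKF, which also carries the interferogram observation model), the sketch below fuses co-registered DEMs pixelwise by treating each DEM as a noisy observation of a static height state, which collapses to inverse-variance weighting:

      import numpy as np

      def fuse_dems(dems, variances):
          """Per-pixel fusion of co-registered DEMs weighted by height-error
          variance: a scalar Kalman update applied pixelwise."""
          h = dems[0].astype(float)            # state estimate (heights)
          p = np.full(h.shape, float(variances[0]))   # state variance
          for z, r in zip(dems[1:], variances[1:]):
              k = p / (p + r)                  # Kalman gain
              h = h + k * (z - h)              # update with the innovation
              p = (1.0 - k) * p                # reduced posterior variance
          return h, p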

  14. Entrepreneur environment management behavior evaluation method derived from environmental economy.

    PubMed

    Zhang, Lili; Hou, Xilin; Xi, Fengru

    2013-12-01

    An evaluation system can encourage and guide entrepreneurs and impel them to perform well in environmental management. An evaluation method based on advantage structure is established and used to analyze entrepreneurs' environmental management behavior in China. An index system for evaluating entrepreneurs' environmental management behavior is constructed on the basis of empirical research. An evaluation method for entrepreneurs is put forward from the perspective of objective programming theory, which takes the minimized objective function as the comprehensive evaluation result and identifies the disadvantage structure pattern, alerting the entrepreneurs concerned to pay attention to it. Application research shows that the overall environmental management behavior of Chinese entrepreneurs is good; specifically, environmental strategic behavior ranks best, environmental management behavior second, and cultural behavior last. The application results show the efficiency and feasibility of this method. PMID:25078816

  15. Evaluation of two gas-dilution methods for instrument calibration

    NASA Technical Reports Server (NTRS)

    Evans, A., Jr.

    1977-01-01

    Two gas dilution methods were evaluated for use in the calibration of analytical instruments used in air pollution studies. A dual isotope fluorescence carbon monoxide analyzer was used as the transfer standard. The methods are not new but some modifications are described. The rotary injection gas dilution method was found to be more accurate than the closed loop method. Results by the two methods differed by 5 percent. This could not be accounted for by the random errors in the measurements. The methods avoid the problems associated with pressurized cylinders. Both methods have merit and have found a place in instrument calibration work.

  16. High-resolution DEM Effects on Geophysical Flow Models

    NASA Astrophysics Data System (ADS)

    Williams, M. R.; Bursik, M. I.; Stefanescu, R. E. R.; Patra, A. K.

    2014-12-01

    Geophysical mass flow models are numerical models that approximate pyroclastic flow events and can be used to assess the volcanic hazards certain areas may face. One such model, TITAN2D, approximates granular-flow physics based on a depth-averaged analytical model, using inputs of basal and internal friction, material volume at a coordinate point, and a GIS in the form of a digital elevation model (DEM). The volume of modeled material propagates over the DEM in a way that is governed by the slope and curvature of the DEM surface and the basal and internal friction angles. Results from TITAN2D are highly dependent upon the inputs to the model. Here we focus on a single input: the DEM, which can vary in resolution. High-resolution DEMs are advantageous in that they contain more surface detail than lower-resolution models, presumably allowing modeled flows to propagate in a way more true to the real surface. However, very high resolution DEMs can create undesirable artifacts in the slope and curvature that corrupt flow calculations. With high-resolution DEMs becoming more widely available and preferable for use, it becomes important to determine the point at which high-resolution data become less advantageous than lower-resolution data. We find that for high-resolution, integer-valued DEMs, very high resolution is detrimental to good model outputs when moderate-to-low (<10-15°) slope angles are involved. At these slope angles, multiple adjacent DEM cells hold equal elevation values, because the DEM must approximate the low slope with a limited set of integer elevations. The first derivative of the elevation surface thus becomes zero. In these cases, flow propagation is inhibited by these spurious zero-slope conditions. Here we present evidence for this "terracing effect" from 1) a mathematically defined simulated elevation model, to demonstrate the terracing effects of integer-valued data, and 2) a real-world DEM where terracing must be
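
    The terracing effect is easy to reproduce: quantizing a gentle ramp to whole-meter elevations forces most cell-to-cell slopes to zero, with isolated one-meter steps. A tiny synthetic check (ours, not the authors' simulated elevation model):

      import numpy as np

      # A smooth 5-degree ramp sampled at 1 m spacing...
      x = np.arange(200.0)
      z_true = np.tan(np.radians(5.0)) * x

      # ...stored as integer elevations, as in many distributed DEM products.
      z_int = np.round(z_true)

      slope_true = np.diff(z_true)   # constant ~0.087 m/m everywhere
      slope_int = np.diff(z_int)     # mostly 0.0, with isolated 1.0 m steps

      print(f"zero-slope cells: {np.mean(slope_int == 0):.0%}")   # ~91%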

  17. Development of high-resolution coastal DEMs: Seamlessly integrating bathymetric and topographic data to support coastal inundation modeling

    NASA Astrophysics Data System (ADS)

    Eakins, B. W.; Taylor, L. A.; Warnken, R. R.; Carignan, K. S.; Sharman, G. F.

    2006-12-01

    The National Geophysical Data Center (NGDC), an office of the National Oceanic and Atmospheric Administration (NOAA), is cooperating with the NOAA Pacific Marine Environmental Laboratory (PMEL), Center for Tsunami Research to develop high-resolution digital elevation models (DEMs) of combined bathymetry and topography. The coastal DEMs will be used as input for the Method of Splitting Tsunami (MOST) model developed by PMEL to simulate tsunami generation, propagation and inundation. The DEMs will also be useful in studies of coastal inundation caused by hurricane storm surge and rainfall flooding, resulting in valuable information for local planners involved in disaster preparedness. We present our methodology for creating the high-resolution coastal DEMs, typically at 1/3 arc-second (10 meters) cell size, from diverse digital datasets collected by numerous methods, in different terrestrial environments, and at various scales and resolutions; one important step is establishing the relationships between various tidal and geodetic vertical datums, which may vary over a gridding region. We also discuss problems encountered and lessons learned, using the Myrtle Beach, South Carolina DEM as an example.
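
    A central step in such compilations is shifting the bathymetry from its tidal datum onto the geodetic datum of the land topography before merging; a schematic sketch, with names and datums chosen for illustration:

      import numpy as np

      def merge_bathy_topo(topo, bathy, datum_shift):
          """Combine topographic and bathymetric grids on a common datum.

          topo        : land elevations (e.g. NAVD 88), NaN over water
          bathy       : elevations on a tidal datum (e.g. MLLW)
          datum_shift : spatially varying offset from the tidal to the
                        geodetic datum (it may vary over a gridding region)
          """
          bathy_geodetic = bathy + datum_shift
          return np.where(np.isnan(topo), bathy_geodetic, topo)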

  18. Precise Baseline Determination for the TanDEM-X Mission

    NASA Astrophysics Data System (ADS)

    Moon, Y.; Koenig, R.; Wermuth, M.; Montenbruck, O.; Jaeggi, A.

    2011-12-01

    The principal goal of the TanDEM-X mission is the generation of a global Digital Elevation Model (DEM) with 2 meters relative vertical accuracy. To achieve this requirement, the relative trajectory between TerraSAR-X and TanDEM-X, called the baseline, should be determined with an accuracy of 1 millimeter. For this purpose, the German Research Centre for Geosciences (GFZ) has provided the Tracking, Occultation and Ranging (TOR) payload for both TerraSAR-X and TanDEM-X. Using the geodetic-grade GPS data from the TOR instruments installed on both satellites, GFZ has been providing TanDEM-X baseline products operationally since the launch of TanDEM-X in June 2010. In this contribution, an overview of the TanDEM-X project, the role of the baseline and its operational provision from three different software solutions within the ground segment, and future prospects are given. The quality of the different baseline products is assessed using one year of operationally generated baseline products from GFZ and DLR. Two baseline solutions from the EPOS and BERNESE software packages by GFZ and one solution from the GHOST/FRNS software package by DLR are compared in terms of the standard deviation and mean of the differences. The long-term series puts a focus on the bias between the baseline solutions. Then the topic of calibrating the baseline biases via SAR data taken over test areas is discussed. In a final step, the different baseline solutions are corrected for their biases and merged, for noise reduction, into an optimal baseline serving as input to the operational DEM production.

  19. DEM Uncertainty propagation in second derivatives geomorphometrical maps

    NASA Astrophysics Data System (ADS)

    Cosmin Sandric, Ionut; Ursaru, Petre

    2013-04-01

    In order to model the uncertainty in a DEM, a special model was created and implemented as a Python script in ArcGIS Desktop using the ArcPy SDK provided by ESRI. The model is based on Monte Carlo simulation for generating noise and on Map Algebra for adding the noise to the DEM. It can be used as an independent script or combined with any other models. The inputs of the model are a DEM and an estimate of the DEM accuracy, expressed as the mean and standard deviation of the errors. The mean and standard deviation may be obtained from a cross-validation/validation operation if the model is obtained with geostatistics, or by a simple validation with ground control points if the model is obtained by other means. The DEM uncertainty propagation model assumes that the errors are normally distributed and thus that the noise is normally distributed. This version of the model requires a Spatial Analyst extension, but future versions may be usable with or without it. The main difficulty with adding noise to DEMs to account for uncertainty is that second derivatives become almost impossible to extract; this drawback was overcome by using an interpolated noise surface in the uncertainty propagation model. Statistical analyses are performed on the rasters obtained in each Monte Carlo simulation: for each realization of the model, the mean, minimum, maximum, range, and standard deviation are extracted and saved in ESRI GRID format. When the model finishes, the specialist has a picture of the uncertainties that might be contained in the DEM and at the same time a collection of DEMs that can be used to generate first- and second-order derivatives.
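
    In outline, the simulation loop described above might look like the following compact re-expression (not the authors' ArcPy script; the noise-surface interpolation that keeps second derivatives extractable is only noted in a comment):

      import numpy as np

      def dem_uncertainty_stats(dem, err_mean, err_std, n_runs=100, seed=0):
          """Monte Carlo propagation of DEM error: add normally distributed
          noise in each realization and accumulate the per-cell statistics
          the model saves (mean, min, max, range, standard deviation)."""
          rng = np.random.default_rng(seed)
          stack = np.empty((n_runs,) + dem.shape)
          for i in range(n_runs):
              noise = rng.normal(err_mean, err_std, size=dem.shape)
              # The authors interpolate the noise surface before adding it,
              # so that second derivatives (e.g. curvature) remain
              # extractable; plain per-cell addition is shown for brevity.
              stack[i] = dem + noise
          return {"mean": stack.mean(axis=0),
                  "min": stack.min(axis=0),
                  "max": stack.max(axis=0),
                  "range": np.ptp(stack, axis=0),
                  "std": stack.std(axis=0)}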

  20. EVALUATING AND OPTIMIZING ELECTRON MICROSCOPE METHODS FOR CHARACTERIZING AIRBORNE ASBESTOS

    EPA Science Inventory

    Evaluation of EM methods for measuring airborne asbestos fiber concentrations and size distributions was carried out by studying a large number of variables and subprocedures in a five-phase program using elaborate statistically designed experiments. Observations were analyzed by...

  1. FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining pentachlorophenol (PCP) contamination in soil and wa...

  2. FIELD ANALYTICAL SCREENING PROGRAM PCB METHOD: INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...

  3. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD

    EPA Science Inventory

    Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...

  4. ASTM test methods for composite characterization and evaluation

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1994-01-01

    A discussion of the American Society for Testing and Materials is given. Under the topic of composite materials characterization and evaluation, general industry practice and test methods for textile composites are presented.

  5. Critiquing as a method of evaluation in the classroom.

    PubMed

    Goldenberg, D

    1994-01-01

    The criticism model of evaluation in a humanistic educative paradigm is a shared teacher/student evaluative activity based on a trusting relationship in which students become connoisseur critics. Among the standards for critiquing are criteria for student/teacher interactions and criteria for learning activities. The author provides examples of student/teacher critiquing activities employed in a graduate nursing course and suggests ways for faculty to develop skills in the critique method of evaluation. PMID:7862291

  6. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).

  7. Methods to include foreign information in national evaluations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic evaluations (GEBV) with higher reliability often result from including genotypes and phenotypes from foreign bulls in the reference population. Multi-step methods evaluate domestic phenotypes first using only pedigree relationships (EBV), then add foreign data available from multi-trait acro...

  8. Evaluation of methods of temperament scoring for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Temperament can negatively affect various production traits, including live weight, ADG, DMI, conception rates and carcass weight. The objective of this research study was to evaluate temperament scoring methods in beef cattle. Crossbred (n = 228) calves were evaluated for temperament at weaning by ...

  9. Differences among methods to validate genomic evaluations for dairy cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two methods of testing predictions from genomic evaluations were investigated. Data used were from the April 2010 and August 2006 official USDA genetic evaluations of dairy cattle. The training data set consisted of both cows and bulls that were proven (had own or daughter information) as of Augus...

  10. A formative multi-method approach to evaluating training.

    PubMed

    Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham

    2016-10-01

    This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-pronged framework (inquiry, observation, and reflection). These methods included: comprehensive survey, rapid feedback form, learning wall, observational form, team debrief, social network analysis and critical moments reflection. The seven distinct methods allowed for both real-time quality improvement during the CHILA and long-term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training. PMID:27454882

  11. SPOT 5 HRS geometric performances: Using block adjustment as a key issue to improve quality of DEM generation

    NASA Astrophysics Data System (ADS)

    Bouillon, Aurélie; Bernard, Marc; Gigord, Patrick; Orsoni, Alain; Rudowski, Véronique; Baudoin, Alain

    The HRS instrument on board SPOT 5 allows the acquisition of stereoscopic pairs in a single pass. Its ultimate purpose is to provide a worldwide database of Digital Terrain Models and orthoimages, called Reference3D®, with no use of ground control points and a horizontal location accuracy specified as 16 m for 90% of the points. The challenge is not only to fulfil HRS's location specification of 50 m root mean square, but to obtain the best possible location performance and to verify that the instrument can support the production of the Reference3D® database. For this reason, HRS has received specific attention even after the end of the SPOT 5 in-flight commissioning phase. This comprehensive paper summarizes the steps jointly taken by CNES, IGN, Spot Image and HRS end-users to calibrate, monitor and validate the HRS image accuracy as well as the accuracy of the final DEM products. First, an overview of HRS on SPOT 5 is given, with specific attention paid to the instrument geometry and the rigorous sensor model available through the ancillary data delivered with the images. Then, the methods and means used by the French Space Agency (CNES) and the French Mapping Agency (IGN) to reach these objectives are described, and the final accuracies reached are given. These concern: the calibration and monitoring of the HRS geometric parameters, together with several on-board improvements made to the satellite by CNES to improve location accuracy at the single-pair level (i.e. with neither tie points nor ground control points used); and the HRS block adjustment model calibration and the evaluation performed by IGN at the stereo-pair level (i.e. with tie points used within the pair) and at the block level (with tie points used between different HRS strips). Finally, the article presents several comparisons of HRS DEMs against available DEMs (e.g. SRTM DTED Level 2) and reviews various evaluations of the HRS and Reference3D® products made by independent users against ground truth issued from field

  12. Development of an automatic evaluation method for patient positioning error.

    PubMed

    Kubota, Yoshiki; Tashiro, Mutsumi; Shinohara, Ayaka; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2015-01-01

    Highly accurate radiotherapy needs highly accurate patient positioning. At our facility, patient positioning is performed manually by radiology technicians. After positioning, the positioning error is measured by manually comparing positions on a digital radiography image (DR) with the corresponding positions on a digitally reconstructed radiography image (DRR). This procedure is prone to error and can be time-consuming because of its manual nature. We therefore propose an automated method for measuring positioning error, to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best-matched position using the block-matching method. The zero-mean normalized cross correlation was used as the evaluation function, and a Gaussian weight function was used to increase the importance of pixels closer to the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and its effectiveness was evaluated on images of prostate cancer patients taken before positioning, by comparison with the radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23 ± 0.05 mm. The correlation coefficients between the calculation method and the technicians' measurements were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total evaluation results of positioning for prostate cancer patients using the calculation method was 0.32 ± 0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method were evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using the calculation method. Consequently, we expect an improvement in treatment throughput for these other sites
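    The matching criterion is straightforward to illustrate. Below is a minimal sketch of weighted zero-mean normalized cross correlation inside a brute-force block search; the Gaussian weight here is centered on the patch (standing in for the paper's isocenter weighting), and all names and window sizes are assumptions rather than the authors' implementation:

    ```python
    import numpy as np

    def zncc(a, b, w):
        """Weighted zero-mean normalized cross correlation of two patches."""
        a = a - a.mean()
        b = b - b.mean()
        den = np.sqrt(np.sum(w * a * a) * np.sum(w * b * b))
        return np.sum(w * a * b) / den if den > 0 else 0.0

    def best_shift(dr, drr, center, half=16, search=10):
        """Brute-force block matching: find the shift maximizing weighted ZNCC."""
        y0, x0 = center
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        w = np.exp(-(yy**2 + xx**2) / (2.0 * (half / 2.0)**2))  # Gaussian weight
        ref = dr[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
        best = (-np.inf, 0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = drr[y0 + dy - half:y0 + dy + half + 1,
                           x0 + dx - half:x0 + dx + half + 1]
                score = zncc(ref, cand, w)
                if score > best[0]:
                    best = (score, dy, dx)
        return best  # (score, dy, dx)
    ```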

  13. DEM modelling of the penetration process of the HP3 Mole

    NASA Astrophysics Data System (ADS)

    Poganski, J.; Kargl, G.; Schweiger, H.; Kömle, N.

    2015-10-01

    The NASA InSight mission will be launched in March 2016 and will reach the surface of Mars roughly nine months later in the Elysium region. One of the instruments on board is the HP³ Mole, designed to measure the planetary heat flow. For this purpose it needs to penetrate five meters deep into the surface of Mars, which also offers the possibility of analysing the soil properties. Numerical simulations are used to reconstruct the soil behaviour and to predict the Mole's performance and maximum reachable depth in advance. Simulating the soil during the hammering process of the HP³ Mole requires a substantial numerical effort because of the locally high dynamics and large soil deformations that occur. After comparing the capabilities of various simulation methods (FEM, MPM and DEM), a discrete element method (DEM) was chosen.

  14. Mapping from ASTER stereo image data: DEM validation and accuracy assessment

    NASA Astrophysics Data System (ADS)

    Hirano, Akira; Welch, Roy; Lang, Harold

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on board the National Aeronautics and Space Administration's (NASA's) Terra spacecraft provides along-track digital stereo image data at 15-m resolution. As part of ASTER digital elevation model (DEM) accuracy evaluation efforts by the US/Japan ASTER Science Team, stereo image data for four study sites around the world have been employed to validate prelaunch estimates of heighting accuracy. Automated stereocorrelation procedures were implemented using the Desktop Mapping System (DMS) software on a personal computer to derive DEMs with 30- to 150-m postings. Results indicate that a root-mean-square error (RMSE) in elevation between ±7 and ±15 m can be achieved with ASTER stereo image data of good quality. An evaluation of an ASTER DEM data product produced at the US Geological Survey (USGS) EROS Data Center (EDC) yielded an RMSE of ±8.6 m. Overall, the ability to extract elevations from ASTER stereopairs using stereocorrelation techniques meets expectations.
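    The accuracy figures quoted above are root-mean-square errors in elevation against check points. For reference, a minimal sketch of that measure (array names are illustrative):

    ```python
    import numpy as np

    def dem_rmse(dem_elev, checkpoint_elev):
        """RMSE between DEM elevations and check-point elevations (same units)."""
        diff = np.asarray(dem_elev, float) - np.asarray(checkpoint_elev, float)
        return np.sqrt(np.mean(diff ** 2))
    ```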

  15. Assessing and Evaluating Multidisciplinary Translational Teams: A Mixed Methods Approach

    PubMed Central

    Wooten, Kevin C.; Rose, Robert M.; Ostir, Glenn V.; Calhoun, William J.; Ameredes, Bill T.; Brasier, Allan R.

    2014-01-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team type taxonomy. Based on team maturation and scientific progress, teams were designated as: a) early in development, b) traditional, c) process focused, or d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored. PMID:24064432

  16. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in the frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin, resolution and landscape, and are shown to be effective. The resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. The sensitivity of the arctangent transformation is tested, showing that transformations targeting Gaussian kurtosis are also acceptable in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our automated Box-Cox and curvature transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
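    The two transformations are simple to reproduce outside ArcGIS. A standalone sketch follows; note that SciPy's `boxcox` chooses lambda by maximum likelihood rather than the paper's minimum-skewness criterion, and the kurtosis search assumes a sign change exists in the bracketing interval, so this is an approximation of the published tool:

    ```python
    import numpy as np
    from scipy import optimize, stats

    def transform_slope(slope_deg):
        """Box-Cox transform of slope gradient (requires positive values)."""
        s = np.asarray(slope_deg, float) + 1e-6       # Box-Cox needs x > 0
        transformed, lam = stats.boxcox(s)            # SciPy picks lambda by MLE
        return transformed, lam

    def transform_curvature(curv, target_kurtosis=3.0):
        """Arctangent transform of curvature, scale tuned toward Gaussian kurtosis."""
        curv = np.asarray(curv, float)

        def excess(k):
            return stats.kurtosis(np.arctan(k * curv), fisher=False) - target_kurtosis

        k = optimize.brentq(excess, 1e-4, 1e4)  # assumes excess changes sign here
        return np.arctan(k * curv), k
    ```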

  17. Development of an unresolved CFD-DEM model for the flow of viscous suspensions and its application to solid-liquid mixing

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Lassaigne, Manon; Goniva, Christoph; Fradette, Louis; Bertrand, François

    2016-08-01

    Although viscous solid-liquid mixing plays a key role in industry, the vast majority of the literature on the mixing of suspensions is centered around the turbulent regime of operation. However, the laminar and transitional regimes face considerable challenges. In particular, it is important to know the minimum impeller speed (Njs) that guarantees the suspension of all particles. In addition, local information on the flow patterns is necessary to evaluate the quality of mixing and identify the presence of dead zones. Multiphase computational fluid dynamics (CFD) is a powerful tool that can be used to gain insight into local and macroscopic properties of mixing processes. Among the variety of numerical models available in the literature, which are reviewed in this work, unresolved CFD-DEM, which combines CFD for the fluid phase with the discrete element method (DEM) for the solid particles, is an interesting approach due to its accurate prediction of the granular dynamics and its capability to simulate large numbers of particles. In this work, the unresolved CFD-DEM method is extended to viscous solid-liquid flows. Different solid-liquid momentum coupling strategies, along with their stability criteria, are investigated and their accuracies are compared. Furthermore, it is shown that an additional sub-grid viscosity model is necessary to ensure the correct rheology of the suspensions. The proposed model is used to study solid-liquid mixing in a stirred tank equipped with a pitched blade turbine. It is validated qualitatively by comparing the particle distribution against experimental observations, and quantitatively by comparing the fraction of suspended solids with results obtained via the pressure gauge technique.

  18. Approach to evaluating leak detection methods in underground storage tanks

    NASA Astrophysics Data System (ADS)

    Starr, J.; Broscious, J.; Niaki, S.

    1986-10-01

    The detection and evaluation of leaks in underground storage tanks require a detailed knowledge of conditions both within the tank and in the nearby surroundings. The test apparatus, as constructed, enables data regarding these environmental conditions to be readily obtained and incorporated in a carefully structured test program that minimizes the amount of costly full-scale testing that would otherwise be required to evaluate volumetric leak detection methods for underground storage tanks. In addition, sufficient flexibility has been designed into the apparatus to enable additional evaluations of non-volumetric test methods to be conducted, and different types of tanks and products to be tested in a cost-effective manner.

  19. Evaluation of Stress Loaded Steel Samples Using Selected Electromagnetic Methods

    SciTech Connect

    Chady, T.

    2004-02-26

    In this paper the magnetic leakage flux and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and used to evaluate the level of applied stress. A strong correspondence between the amount of applied stress and the maximum amplitude of the signal derivative was confirmed.

  20. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young alike. Many KARAOKE machines have a singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper, a KARAOKE scoring method strongly correlated with human evaluation is proposed. The method evaluates songs based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that the correlation coefficients between the scores given by the proposed system and human evaluation are -0.76∼-0.89.
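    The pitch-to-scale distance at the heart of the proposed score is easy to sketch. The snippet below measures, per voiced frame, how far the sung fundamental frequency lies from the nearest equal-tempered semitone; the paper's vibrato extraction and its mapping to a final score are omitted, and all names are illustrative:

    ```python
    import numpy as np

    def mean_pitch_deviation(f0_hz, ref_a4=440.0):
        """Mean distance (in semitones) from each sung pitch to the nearest note."""
        f0 = np.asarray(f0_hz, float)
        f0 = f0[f0 > 0]                              # drop unvoiced frames
        semitones = 12.0 * np.log2(f0 / ref_a4)      # continuous pitch axis
        deviation = np.abs(semitones - np.round(semitones))
        return deviation.mean()                      # lower means better intonation
    ```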

  1. Automatic Delineation of Sea-Cliff Limits Using Lidar-Derived High-Resolution DEMs in Southern California

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Danielson, J.; Foxgrover, A. C.; Barnard, P.; Thatcher, C.; Brock, J. C.

    2014-12-01

    Sea-cliff erosion is a serious hazard with implications for coastal management, and cliff retreat is often estimated from successive hand-digitized cliff tops or bases (toes). Traditionally, the recession of the cliff top or cliff base is obtained from aerial photographs, topographic maps, or in situ surveys. Irrespective of how or what is measured to characterize cliff erosion, the positions of the cliff top and cliff base are important. Habitually, the cliff top and base are hand digitized even when using high-resolution lidar-derived DEMs. Even when efforts are made to standardize the digitizing and to eliminate as much subjectivity as possible, the delineation of cliffs is time consuming and depends on the analyst's interpretation. We propose an automatic procedure to delineate the cliff top and base from high-resolution bare-earth DEMs. The method requires a bare-earth high-resolution DEM, a generalized coastal shoreline, and approximate measurements of the distance between the shoreline and the cliff top. It generates orthogonal transects and profiles with a minimum spacing equal to the DEM resolution and extracts, for each profile, the xyz coordinates of the cliff top and toe, as well as the second major positive and negative inflections (second top and toe) along the profile. The difference between the automated and digitized top and toe, respectively, is smaller than the DEM error margin for over 82% of the top points and 86% of the toe points along a stretch of coast in Del Mar, CA. The larger errors were due either to the failure to remove all vegetation from the bare-earth DEM or to errors of interpretation during hand digitizing. The automatic method was further applied between Point Conception and Los Angeles Harbor, CA. This automatic method is repeatable, takes full advantage of high-resolution bare-earth DEMs, and is more efficient.
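    A simplified sketch of the per-profile step described above: locate the cliff top and toe as the strongest negative and positive curvature extremes of a smoothed shore-normal elevation profile. This illustrates the idea only and is not the authors' algorithm; names and the smoothing parameter are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def cliff_top_toe(distance, elevation, smooth_sigma=3.0):
        """Return indices of cliff top and toe along one sea-to-land profile."""
        z = gaussian_filter1d(np.asarray(elevation, float), smooth_sigma)
        curvature = np.gradient(np.gradient(z, distance), distance)
        top = int(np.argmin(curvature))   # sharpest convex break: cliff top
        toe = int(np.argmax(curvature))   # sharpest concave break: cliff toe
        return top, toe
    ```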

  2. Best Estimate Method vs Evaluation Method: a comparison of two techniques in evaluating seismic analysis and design

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-05-01

    The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  3. Infrared image quality evaluation method without reference image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Ren, Tingting; Wang, Chengsheng; Lei, Bo; Zhang, Zhijie

    2013-09-01

    Since infrared image quality depends on many factors, such as the optical performance and electrical noise of the thermal imager, image quality evaluation is an important issue that can benefit both subsequent image processing and the improvement of thermal imager capability. There are two ways of evaluating infrared image quality: with or without a reference image. For real-time thermal images, methods without a reference image are preferred, because a standard image is difficult to obtain. Although various evaluation methods exist, there is no general metric for image quality evaluation. This paper introduces a novel method to evaluate infrared images without a reference image from five aspects: noise, clarity, information volume and levels, information in the frequency domain, and the capability for automatic target recognition. Generally, the basic image quality is obtained from the first four aspects, and the quality of the target is acquired from the last aspect. The proposed method is tested on several infrared images captured by different thermal imagers. The indicators are calculated and compared with human visual assessments. The evaluation shows that this method successfully describes the characteristics of infrared images and that the results are consistent with the human vision system.
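    Three of the five aspects can be illustrated with simple no-reference proxies. The sketch below is a rough stand-in; the paper's exact formulations, the frequency-domain measure, and the recognition-based measure are not reproduced:

    ```python
    import numpy as np
    from scipy.ndimage import laplace, sobel

    def ir_quality_indicators(img):
        """Crude no-reference proxies for noise, clarity and information volume."""
        img = np.asarray(img, float)
        noise = np.median(np.abs(laplace(img)))      # high-frequency noise proxy
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        clarity = np.mean(np.hypot(gx, gy))          # mean gradient magnitude
        hist, _ = np.histogram(img, bins=256)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))            # grey-level information
        return {"noise": noise, "clarity": clarity, "entropy": entropy}
    ```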

  4. Comparison of elevation derived from insar data with dem from topography map in Son Dong, Bac Giang, Viet Nam

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy

    2012-07-01

    Digital Elevation Models (DEMs) are used in many earth-science applications, such as topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, and seismic source modeling. In recent years, a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques have evolved. It has been shown that InSAR is an established technique for generating high-quality DEMs from spaceborne and airborne data, and that it has advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data, and compares a satellite SAR estimate of surface elevation with a DEM from a topographic map. The operational steps are performed in three major stages: data search, data processing, and product validation. The data processing stage is further divided into five steps: data pre-processing, co-registration, interferogram generation, phase unwrapping, and geocoding. The data processing steps have been tested with ERS-1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results of applying the described processing steps to a real data set are presented.

  5. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. A prerequisite for systematizing the knowledge management process for future computer-aided DR applications is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: the design rationale structure scale, association knowledge and reasoning ability, the degree of design justification support, and the degree of knowledge representation conciseness. A comprehensive value of the DR knowledge is also computed by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that provides an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  6. Methods for the evaluation of alternative disaster warning systems

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.

    1977-01-01

    For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.

  7. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  8. What Can Mixed Methods Designs Offer Professional Development Program Evaluators?

    ERIC Educational Resources Information Center

    Giordano, Victoria; Nevin, Ann

    2007-01-01

    In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…

  9. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  10. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  11. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  12. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  13. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  14. Study on evaluation methods for Rayleigh wave dispersion characteristic

    USGS Publications Warehouse

    Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.

    2005-01-01

    The evaluation of the Rayleigh wave dispersion characteristic is the key step in detecting S-wave velocity structure. By comparing dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results of the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for testing the Rayleigh wave dispersion characteristic and has great application potential for detecting site S-wave velocity structure.

  15. DOE methods for evaluating environmental and waste management samples.

    SciTech Connect

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  16. A quantitative method for evaluating alternatives. [aid to decision making

    NASA Technical Reports Server (NTRS)

    Forthofer, M. J.

    1981-01-01

    When faced with choosing between alternatives, people tend to use a number of criteria (often subjective rather than objective) to decide which alternative is best for them in their unique situation. The subjectivity inherent in the decision-making process can be reduced by defining and using a quantitative method for evaluating alternatives. Such a method can help decision makers achieve a degree of uniformity and completeness in the evaluation process, as well as an increased sensitivity to the factors involved. Additional benefits are better documentation and visibility of the rationale behind the resulting decisions. General guidelines for defining a quantitative method are presented, and a particular method (called the 'hierarchical weighted average') is defined and applied to the evaluation of design alternatives for a hypothetical computer system capability.
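    A hierarchical weighted average is easy to state in code. The sketch below is a generic illustration of the scheme named above, not the report's exact procedure; the criteria groups, weights and ratings are hypothetical:

    ```python
    def hierarchical_weighted_average(groups, ratings):
        """groups: {name: (group_weight, {criterion: weight})};
        ratings: {criterion: score} for one alternative."""
        total, weight_sum = 0.0, 0.0
        for g_weight, criteria in groups.values():
            w_sum = sum(criteria.values())
            # Score of this criteria group, normalized by its internal weights.
            g_score = sum(w * ratings[c] for c, w in criteria.items()) / w_sum
            total += g_weight * g_score
            weight_sum += g_weight
        return total / weight_sum

    # Hypothetical example: two criteria groups, one design alternative.
    groups = {"cost": (0.4, {"hardware": 0.7, "maintenance": 0.3}),
              "performance": (0.6, {"throughput": 0.5, "reliability": 0.5})}
    print(hierarchical_weighted_average(groups, {"hardware": 7, "maintenance": 9,
                                                 "throughput": 6, "reliability": 8}))
    ```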

  17. System and method for evaluating a wire conductor

    DOEpatents

    Panozzo, Edward; Parish, Harold

    2013-10-22

    A method of evaluating an electrically conductive wire segment having an insulated intermediate portion and non-insulated ends includes passing the insulated portion of the wire segment through an electrically conductive brush. According to the method, an electrical potential is established on the brush by a power source. The method also includes determining a value of electrical current that is conducted through the wire segment by the brush when the potential is established on the brush. The method additionally includes comparing the value of electrical current conducted through the wire segment with a predetermined current value to thereby evaluate the wire segment. A system for evaluating an electrically conductive wire segment is also disclosed.

  18. Evaluation of verification methods for input-accountability measurements

    SciTech Connect

    Maeck, W. J.

    1980-01-01

    As part of TASTEX-related programs, two independent methods have been evaluated for verifying the amount of Pu charged to the head-end of a nuclear fuel processing plant. The first is the Pu/U (gravimetric) method, TASTEX Task-L, and the second is the tracer method, designated Task-M. Summaries of the basic technology, results of various studies under actual plant conditions, and future requirements are given for each of the tasks.

  19. Effect of particle breakage on cyclic densification of ballast: A DEM approach

    NASA Astrophysics Data System (ADS)

    Thakur, P. K.; Vinod, J. S.; Indraratna, B.

    2010-06-01

    In this paper, an attempt has been made to investigate the effect of particle breakage on the densification behaviour of ballast under cyclic loading using the Discrete Element Method (DEM). Numerical simulations using PFC2D have been carried out on an assembly of angular particles with and without the incorporation of particle breakage. Two-dimensional projections of angular ballast particles were simulated using clusters of bonded circular particles. Degradation of the bonds within a cluster was taken to represent particle breakage, while clump logic was used to make clusters of particles unbreakable. The DEM simulation results highlight that particle breakage has a profound influence on the cyclic densification behaviour of ballast. The deformation behaviour exhibited by the assembly with breakage is in good agreement with laboratory experiments. In addition, the evolution of particle displacement vectors clearly explains the breakage mechanism and the associated deformations during cyclic loading.

  20. A global vegetation corrected SRTM DEM for use in hazard modelling

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; O'Loughlin, F.; Neal, J. C.; Durand, M. T.; Alsdorf, D. E.; Paiva, R. C. D.

    2015-12-01

    We present the methodology and results for the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hazard modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-global (60°N to 60°S) DEM available, it requires adjustments to reduce the vegetation contamination and make it useful for hazard modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction based on information about canopy height and density. Our new 'bare-earth' SRTM DEM combines multiple remote sensing datasets, including ICESat GLA14 ground elevations, the vegetation continuous field dataset as a proxy for the penetration depth of SRTM, and a global vegetation height map, to remove the vegetation artefacts present in the original SRTM DEM. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third apply parameters regionalised by climatic zone or vegetation type, respectively. We also tested two canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare their performance. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas, with the overall mean bias reduced by between 75 and 92%, from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33%, from 7.12 m to 4.80 m. As
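    Conceptually, the correction removes a density-weighted fraction of canopy height from each SRTM cell. A highly simplified sketch follows, with an assumed penetration coefficient standing in for the paper's regionally calibrated parameters:

    ```python
    import numpy as np

    def bare_earth_srtm(srtm, canopy_height, canopy_density, penetration=0.6):
        """Subtract a vegetation bias from SRTM; inputs are co-registered rasters.

        canopy_density lies in [0, 1]; penetration is an assumed global
        coefficient, not the paper's calibrated, regionalised value.
        """
        vegetation_bias = (penetration * np.asarray(canopy_density)
                           * np.asarray(canopy_height))
        return np.asarray(srtm) - vegetation_bias
    ```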

  1. A new method for evaluating wax inhibitors and drag reducers

    SciTech Connect

    Hsu, J.J.C.; Brubaker, J.P.

    1995-12-01

    Conventional wax inhibitor evaluation methods, such as the cold finger and laminar flow loop, are not adequate or accurate for evaluating wax inhibitors intended for use over a wide range of operating temperatures and flow regimes, such as North Sea subsea transport pipelines. A new method has been developed to simultaneously measure changes in fluid rheology and wax inhibition, and to evaluate wax inhibitors or drag reducers at field operating conditions. Selection criteria have been defined to guide the search for an effective wax inhibitor; the criteria ensure the chemical selected is the most effective one for the specific oil and flow conditions. The operating cost savings from this accurate method are significant. Nine chemical companies joined the project of finding a wax inhibitor for a North Sea prospect. More than twenty wax inhibitors have been tested and evaluated with this new method for several waxy oil fields. The new method provides data on fluid rheology, wax deposition rates and wax inhibition over the operating temperature range, overall average wax inhibition, and the degree of fluid flow improvement. These data are important in evaluating a wax inhibitor or drag reducer. Most of the wax inhibitors tested show good wax inhibition at high temperatures, but not many chemicals work well at low temperatures. A chemical may improve fluid flow behavior at low temperature yet not reduce wax deposition. The drag reducers tested did not work well at North Sea seabed temperatures.

  2. DEM Simulation of Particle Clogging in Fiber Filtration

    NASA Astrophysics Data System (ADS)

    Tao, Ran; Yang, Mengmeng; Li, Shuiqing

    2015-11-01

    The formation of porous particle deposits plays a crucial role in determining the efficiency of the filtration process. In this work, an adhesive discrete element method (DEM), in combination with CFD, is developed to dynamically describe these porous deposit structures and the modified flow field between two parallel fibers under periodic boundary conditions. For the first time, it is clarified that the structures of the clogged particles depend on both the adhesion parameter (defined as the ratio of interparticle adhesion to particle inertia) and the Stokes number (an index of impaction efficiency). The relationship between the pressure-drop gradient and the coordination number over the filtration time is explored; it can be used to quantitatively distinguish the different filtration regimes, i.e., the clean filter stage, the clogging stage and the cake filtration stage. Finally, we investigate the influence of the fiber separation distance on the particle clogging behavior, which significantly affects the collecting efficiency of the fibers. The results suggest that changing the arrangement of fibers can improve filter performance. This work has been funded by the National Key Basic Research and Development Program (2013CB228506).
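    For orientation, the two dimensionless groups named above can be written in one common form; definitions vary across the literature, so these expressions are illustrative rather than necessarily the ones used in the paper:

    ```python
    def stokes_number(rho_p, d_p, u0, mu, d_fiber):
        """Particle inertia relative to viscous drag around a fiber (one common form)."""
        return rho_p * d_p ** 2 * u0 / (18.0 * mu * d_fiber)

    def adhesion_parameter(gamma, rho_p, u0, d_p):
        """Interparticle adhesion (surface energy gamma) relative to particle inertia."""
        return gamma / (rho_p * u0 ** 2 * d_p)
    ```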

  3. User Experience Evaluation Methods in Product Development (UXEM'09)

    NASA Astrophysics Data System (ADS)

    Roto, Virpi; Väänänen-Vainio-Mattila, Kaisa; Law, Effie; Vermeeren, Arnold

    High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets [1]. Although the term UX originated from industry and is a widely used term also in academia, the tools for managing UX in product development are still inadequate. A prerequisite for designing delightful UX in an industrial setting is to understand both the requirements tied to the pragmatic level of functionality and interaction and the requirements pertaining to the hedonic level of personal human needs, which motivate product use [2]. Understanding these requirements helps managers set UX targets for product development. The next phase in a good user-centered design process is to iteratively design and evaluate prototypes [3]. Evaluation is critical for systematically improving UX. In many approaches to UX, evaluation basically needs to be postponed until the product is fully or at least almost fully functional. However, in an industrial setting, it is very expensive to find the UX failures only at this phase of product development. Thus, product development managers and developers have a strong need to conduct UX evaluation as early as possible, well before all the parts affecting the holistic experience are available. Different types of products require evaluation on different granularity and maturity levels of a prototype. For example, due to its multi-user characteristic, a community service or an enterprise resource planning system requires a broader scope of UX evaluation than a microwave oven or a word processor that is meant for a single user at a time. Before systematic UX evaluation can be taken into practice, practical, lightweight UX evaluation methods suitable for different types of products and different phases of product readiness are needed. A considerable amount of UX research is still about the conceptual frameworks and models for user experience [4]. Besides, applying existing usability evaluation methods (UEMs) without

  4. [An Introduction to Methods for Evaluating Health Care Technology].

    PubMed

    Lee, Ting-Ting

    2015-06-01

    The rapid and continual advance of healthcare technology makes ensuring that this technology is used effectively to achieve its original goals a critical issue. This paper presents three methods that may be applied by healthcare professionals in the evaluation of healthcare technology. These methods include: the perception/experiences of users, user work-pattern changes, and chart review or data mining. The first method includes two categories: using interviews to explore the user experience and using theory-based questionnaire surveys. The second method applies work sampling to observe the work pattern changes of users. The last method conducts chart reviews or data mining to analyze the designated variables. In conclusion, while evaluative feedback may be used to improve the design and development of healthcare technology applications, the informatics competency and informatics literacy of users may be further explored in future research. PMID:26073952

  5. DEM simulation for landslide process and barrier dam formation on the mountainous highway

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Kai; Lee, Ching-Fang; Wei, Lun-Wei; Chou, Hsien-Ter; Chu, Sheng-Shin

    2013-04-01

    A landslide-induced barrier dam formed in Hanyuan, Sichuan, China on August 6th, 2009. A sliding mass of approximately 9×10⁶ m³ dumped rapidly into the Dadu River and buried the new highway S306. After the major landslide, the large-scale debris mass caused a secondary shallow avalanche on the opposite bank and formed a barrier dam, 100 m long and 40 m high, across the Dadu River. The corresponding backwater effect submerged villages upstream along a reach of more than 10 km. This study adopts DEM simulation to examine the dynamic landslide process and understand the triggering mechanism of the barrier dam. Based on the numerical investigation, the sliding behavior can be classified into three stages: an initial stage with high potential energy, a primary sliding stage with high velocity, and a final stage of impact on the river channel. In addition, the energy balance principle for the dynamic landslide is verified with the DEM simulation. With respect to hazard management, it is hoped that the results can assist engineers in evaluating potentially dangerous regions and planning protective works in steep mountainous areas. Keywords: Landslide, barrier dam, DEM, dynamic process, backwater.

  6. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper presents a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy; this study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision model is flexible enough to reflect the interdependencies among decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation with normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods. PMID:26360215
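    One building block shared by ANP and the simpler AHP is the priority vector of a pairwise-comparison matrix. A minimal sketch via power iteration follows; the full ANP supermatrix, which captures the interdependencies mentioned above, is not reproduced, and the numbers are invented:

    ```python
    import numpy as np

    def priority_vector(pairwise, iters=100):
        """pairwise: reciprocal matrix, pairwise[i][j] = preference of i over j."""
        a = np.asarray(pairwise, dtype=float)
        w = np.ones(a.shape[0]) / a.shape[0]
        for _ in range(iters):
            w = a @ w
            w /= w.sum()             # keep priorities normalized
        return w

    # Example: three text entry methods compared pairwise (illustrative numbers).
    m = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
    print(priority_vector(m))
    ```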

  7. Aster Global dem Version 3, and New Aster Water Body Dataset

    NASA Astrophysics Data System (ADS)

    Abrams, M.

    2016-06-01

    In 2016, the US/Japan ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) project released Version 3 of the Global DEM (GDEM). This 30 m DEM covers the earth's surface from 82°N to 82°S, and improves on the two earlier versions by correcting some artefacts and by filling in areas of missing data with newly acquired scenes. The GDEM was produced by stereocorrelation of 2 million ASTER scenes, processed on a pixel-by-pixel basis: cloud screening, stacking data from overlapping scenes, removing outlier values, and averaging elevation values. As previously, the GDEM is packaged in ~23,000 1° × 1° tiles. Each tile has a DEM file and a NUM file that reports the number of scenes used for each pixel and identifies the source of fill-in data (where persistent clouds prevented computation of an elevation value). An additional dataset was concurrently produced and released: the ASTER Water Body Dataset (AWBD). This is a 30 m raster product that encodes every water pixel as lake, river, or ocean, thus providing a global inland and shoreline water body mask. Water was identified through spectral analysis algorithms and manual editing. The product was evaluated against the Shuttle Water Body Dataset (SWBD) and the Landsat-based Global Inland Water (GIW) product. The SWBD only covers the earth between about 60 degrees north and south, so it is not a global product. The GIW only delineates inland water bodies and does not deal with ocean coastlines. All products are at 30 m postings.

  8. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free-surface vortices is very important for establishing an economically superior design of the Japanese sodium-cooled fast reactor (JSFR). However, due to the non-linearity and/or locality of GE phenomena, it is not easy to evaluate their occurrence accurately. In other words, the onset condition of GE in the JSFR cannot easily be predicted from scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and GE-related parameters, e.g. the gas core (GC) length, are then calculated using the Burgers vortex model. This procedure is an efficient way to evaluate GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length because some physical mechanisms are not considered. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. The improved GE evaluation method with the turbulent viscosity model is then validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths computed by the improved method are shorter than those from the original method and agree better with the experimental data.
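    For reference, the Burgers vortex on which the evaluation method is built has a closed-form azimuthal velocity profile; the CFD step that supplies the circulation and the criterion converting this profile into a GC length are not reproduced here:

    ```python
    import numpy as np

    def burgers_u_theta(r, circulation, strain_a, nu):
        """Azimuthal velocity of a Burgers vortex at radius r > 0.

        circulation: vortex circulation; strain_a: axial strain rate;
        nu: kinematic viscosity (all in consistent SI units).
        """
        r = np.asarray(r, float)
        core = 1.0 - np.exp(-strain_a * r ** 2 / (2.0 * nu))
        return circulation / (2.0 * np.pi * r) * core
    ```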

  9. [Evaluation of the 360-degree assessment method in a hospital].

    PubMed

    Møller, Lars Bo Krag; Ejlskov, Morten Wolff

    2008-09-15

    The present study examines the acceptability of the 360-degree assessment method as a means for evaluating the management and leadership competencies of the clinical staff of a university hospital. Twenty-eight consultants and registered nurses underwent evaluation. One group had debriefing with management consultants, the other with the head of the clinical department. Two months later, the applicability of the method was assessed. The strengths and weaknesses of the leaders were exposed, and areas for improvement were made visible, and acceptance of the method was widespread. Anonymity was required. The group coached by management consultants tended to benefit the most from the evaluation. Using a web-based solution to collect the data was unproblematic. PMID:18808752

  10. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classifying. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method that is based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  11. A Method for Evaluating Dynamical Friction in Linear Ball Bearings

    PubMed Central

    Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P.; Mitatha, Somsak

    2010-01-01

    A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer. PMID:22163457

  12. Mixed Methods: A Paradigm for Holistic Evaluation of Health IT.

    PubMed

    Scott, Philip J

    2016-01-01

    This contribution offers an overview of the 'third research paradigm', its historical roots and its relevance for health informatics. Using illustrative studies, we explore the concepts of triangulation and integration of quantitative and qualitative data and refute common philosophical objections to mixing different types of knowledge. We consider how the mixed method paradigm relates to two programme design and evaluation frameworks that are important for health informatics: realist evaluation and Theory of Change. We discuss how to manage practical challenges to this approach and explain how mixed method studies support an evidence-based approach to real world policy, planning and investment decisions. PMID:27198096

  13. Evaluating variable selection methods for diagnosis of myocardial infarction.

    PubMed

    Dreiseitl, S; Ohno-Machado, L; Vinterbo, S

    1999-01-01

    This paper evaluates the variable selection performed by several machine-learning techniques on a myocardial infarction data set. The focus of this work is to determine which of 43 input variables are considered relevant for prediction of myocardial infarction. The algorithms investigated were logistic regression (with stepwise, forward, and backward selection), backpropagation for multilayer perceptrons (input relevance determination), Bayesian neural networks (automatic relevance determination), and rough sets. An independent method (self-organizing maps) was then used to evaluate and visualize the different subsets of predictor variables. Results show good agreement on some predictors, but also variability among different methods; only one variable was selected by all models. PMID:10566358

  14. Evaluation of a rapid method of determination of plasma fibrinogen.

    PubMed

    Thomson, G W; McSherry, B J; Valli, V E

    1974-07-01

    An evaluation was made of a rapid semiautomated method of determining fibrinogen levels in bovine plasma. This method, the fibrometer method of Morse, Panek and Menga (8), is based on the principle that when thrombin is added to suitably diluted plasma the time of clotting is linearly related to the fibrinogen concentration. A standard curve prepared using bovine plasma had an r value of .9987 and analysis of variance showed there was no significant deviation from regression. A comparison of the fibrometer method and the biuret method of Ware, Guest and Seegers done on 158 bovine plasma samples showed good correlation between the two methods. It was concluded that the fibrometer method does measure bovine fibrinogen and has considerable merit for use in clinical diseases of cattle. PMID:4277474
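
    As a minimal sketch of the calibration arithmetic behind such a standard curve (the abstract states that clotting time is linearly related to fibrinogen concentration; the standards, times, and the unknown below are invented):

        import numpy as np

        # Fit a linear standard curve, then invert it to read off the
        # fibrinogen concentration of an unknown from its clotting time.
        conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])             # standards [g/L]
        clot_time = np.array([30.1, 24.8, 19.2, 15.0, 11.1])   # measured times [s]

        slope, intercept = np.polyfit(conc, clot_time, 1)      # standard curve
        r = np.corrcoef(conc, clot_time)[0, 1]
        print(f"r = {r:.4f}")

        unknown_time = 17.5                                    # unknown sample [s]
        unknown_conc = (unknown_time - intercept) / slope      # inverse prediction
        print(f"estimated fibrinogen: {unknown_conc:.2f} g/L")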

  15. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    PubMed Central

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  16. Comprehensive evaluation of compressor oils by laboratory methods

    SciTech Connect

    Ershova, A.N.; Smertyak, Yu.L.; Lukashenko, E.V.; Kozhekin, A.V.; Bauman, V.N.

    1987-01-01

    The present work has been aimed at a comprehensive investigation of the carbon deposition, antioxidant, anticorrosion, and antiwear properties of a number of compressor oils, and at establishing a correlation between the results obtained in evaluations of high-temperature properties by different methods. The basic physicochemical properties and group compositions of the test oils are shown, which includes oils for reciprocating air compressors and other oils. Results are presented which were obtained in determinations of the carbon deposition tendency of compressor oils by two methods, the test-stand method and the laboratory method. Results are also shown from evaluations of the lubricity of the compressor oils and their anticorrosion properties at high temperature. The greatest viscosity increase is observed for the oil from medium-sulfur crudes with a high content of aromatic hydrocarbons, the smallest viscosity increase for oils with a naphthenic-paraffinic base.

  17. Contemporary ice-elevation changes on central Chilean glaciers using SRTM1 and high-resolution DEMs

    NASA Astrophysics Data System (ADS)

    Vivero, Sebastian; MacDonell, Shelley

    2016-04-01

    Glaciers located in central Chile have undergone significant retreat in recent decades. Whilst studies have evaluated area loss of several glaciers, there are no detailed studies of volume losses. This lack of information not only restricts estimations of current and future contributions to sea level rise, but has also limited the evaluation of freshwater resource availability in the region. Recently, the Chilean Water Directorate has supported the collection of field and remotely sensed data in the region which has enabled glacier changes to be evaluated in greater detail. This study aims to compare high-resolution laser scanning DEMs acquired by the Chilean Water Directorate in April 2015 with the recently released SRTM 1 arc-second DEM (~30 m) acquired in February 2000 to calculate geodetic mass balance changes for three glaciers in a catchment in central Chile over a 15-year period. Detailed analysis of the SRTM and laser scanning DEMs, together with the glacier outlines, enables the quantification of elevation and volume changes. Glacier outlines from February 2000 were obtained using the multispectral analysis of a Landsat TM image, whereas outlines from April 2015 were digitised from high resolution glacier orthophotomosaics. Additionally, we accounted for radar penetration into snow and/or ice by evaluating elevation differences between the SRTM C- and X-bands, as well as mis-registration between the SRTM DEM and the high-resolution DEMs. Over the period all glaciers show similar ice wastage on the order of 0.03 km³ for the debris-covered and non-covered glaciers. However, whilst on the non-covered glaciers mass loss is largely related to elevation and the addition of surface sediment, on the debris-covered glacier losses are related to the development of thermokarst features. By analysing the DEM in conjunction with Landsat images, we have detected changes in the sediment cover of the non-covered glaciers, which is likely to change the behaviour of the surface mass

  18. Environment-sensitive fracture: Evaluation and comparison of test methods

    SciTech Connect

    Dean, S.W.; Pugh, E.N.; Ugiansky, G.M.

    1984-01-01

    These proceedings collect papers on metal fracture mechanics. Titles include: A Fracture Mechanics Model for Iodine Stress Corrosion Crack Propagation in Zircaloy Tubing; Evaluation of SCC Test Methods for Inconel 600 in Low-Temperature Aqueous Solutions; Automated Corrosion Fatigue Crack Growth Testing in Pressurized Water Environments; and Use of a Constant ΔK Test Method in the Investigation of Fatigue Crack Growth in 288 °C Water Environments.

  19. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
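
    A minimal sketch of the sparsification step using PyWavelets ('db3' is the PyWavelets name for the 6-tap Daubechies filter, i.e. DAUB6; the random grid stands in for real elevation data, and details such as protecting the approximation band or water bodies are omitted):

        import numpy as np
        import pywt  # PyWavelets

        # Transform a DEM with db3, zero the smallest 95% of coefficients
        # by magnitude, and reconstruct.
        dem = np.random.rand(512, 512).astype(np.float64)  # stand-in DEM [m]

        coeffs = pywt.wavedec2(dem, 'db3', level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)

        threshold = np.quantile(np.abs(arr), 0.95)         # keep the largest 5%
        arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

        coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices,
                                             output_format='wavedec2')
        dem_rec = pywt.waverec2(coeffs_sparse, 'db3')[:512, :512]

        rmse = np.sqrt(np.mean((dem - dem_rec) ** 2))
        print(f"elevation RMSE after 95% sparsification: {rmse:.4f}")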

  20. Novel application of DEM to modelling comminution processes

    NASA Astrophysics Data System (ADS)

    Delaney, Gary W.; Cleary, Paul W.; Sinnott, Matt D.; Morrison, Rob D.

    2010-06-01

    Comminution processes in which grains are broken down into smaller and smaller sizes represent a critical component in many industries including mineral processing, cement production, food processing and pharmaceuticals. We present a novel DEM implementation capable of realistically modelling such comminution processes. This extends on a previous implementation of DEM particle breakage that utilized spherical particles. Our new extension uses super-quadric particles, where daughter fragments with realistic size and shape distributions are packed inside a bounding parent super-quadric. We demonstrate the flexibility of our approach in different particle breakage scenarios and examine the effect of the chosen minimum resolved particle size. This incorporation of the effect of particle shape in the breakage process allows for more realistic DEM simulations to be performed, that can provide additional fundamental insights into comminution processes and into the behaviour of individual pieces of industrial machinery.
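
    As an illustration of the kind of geometric predicate such a model needs when packing daughter fragments inside a bounding parent particle, the sketch below tests points against an axis-aligned super-quadric (the symmetric-exponent form and all values are ours; the paper's shape model may differ):

        import numpy as np

        def inside_superquadric(p, half_axes, exponent):
            """True if p lies inside |x/a|^n + |y/b|^n + |z/c|^n <= 1."""
            a, b, c = half_axes
            x, y, z = p
            return (abs(x / a) ** exponent +
                    abs(y / b) ** exponent +
                    abs(z / c) ** exponent) <= 1.0

        # Rejection-sample candidate fragment centres inside a parent particle.
        rng = np.random.default_rng(0)
        parent = (2.0, 1.5, 1.0)                 # parent half-axes [mm]
        pts = rng.uniform(-2.0, 2.0, size=(10000, 3))
        centres = [p for p in pts if inside_superquadric(p, parent, 4.0)]
        print(f"{len(centres)} candidate daughter centres inside the parent")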

  1. DOE methods for evaluating environmental and waste management samples

    SciTech Connect

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  2. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and shortcomings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
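
    The abstract does not spell out the DO estimator used; the sketch below simulates the simpler independent-observer (Lincoln-Petersen) variant purely to illustrate the idea, with invented numbers, and may differ from the dependent-observer protocol evaluated in the paper:

        import numpy as np

        # Simulate two observers independently detecting birds, then
        # estimate abundance from the overlap of their detections.
        rng = np.random.default_rng(42)
        N_true = 500          # true number of birds
        p1, p2 = 0.4, 0.5     # detection probabilities of the two observers

        seen1 = rng.random(N_true) < p1
        seen2 = rng.random(N_true) < p2

        n1, n2 = seen1.sum(), seen2.sum()
        both = (seen1 & seen2).sum()

        N_hat = n1 * n2 / both            # Lincoln-Petersen estimator
        print(f"true N = {N_true}, estimated N = {N_hat:.1f}")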

  3. Force Evaluation in the Lattice Boltzmann Method Involving Curved Geometry

    NASA Technical Reports Server (NTRS)

    Mei, Renwei; Yu, Dazhi; Shyy, Wei; Luo, Li-Shi; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The present work investigates two approaches for force evaluation in the lattice Boltzmann equation: the momentum-exchange method and the stress-integration method on the surface of a body. The boundary condition for the particle distribution functions on curved geometries is handled with second order accuracy based on our recent works. The stress-integration method is computationally laborious for two-dimensional flows and in general difficult to implement for three-dimensional flows, while the momentum-exchange method is reliable, accurate, and easy to implement for both two-dimensional and three-dimensional flows. Several test cases are selected to evaluate the present methods, including: (i) two-dimensional pressure-driven channel flow; (ii) two-dimensional uniform flow past a column of cylinders; (iii) two-dimensional flow past a cylinder asymmetrically placed in a channel (with vortex shedding); (iv) three-dimensional pressure-driven flow in a circular pipe; and (v) three-dimensional flow past a sphere. The drag evaluated by using the momentum-exchange method agrees well with the exact or other published results.

  4. Force evaluation in the lattice Boltzmann method involving curved geometry

    NASA Astrophysics Data System (ADS)

    Mei, Renwei; Yu, Dazhi; Shyy, Wei; Luo, Li-Shi

    2002-04-01

    The present work investigates two approaches for force evaluation in the lattice Boltzmann equation: the momentum-exchange method and the stress-integration method on the surface of a body. The boundary condition for the particle distribution functions on curved geometries is handled with second-order accuracy based on our recent works [Mei et al., J. Comput. Phys. 155, 307 (1999); ibid. 161, 680 (2000)]. The stress-integration method is computationally laborious for two-dimensional flows and in general difficult to implement for three-dimensional flows, while the momentum-exchange method is reliable, accurate, and easy to implement for both two-dimensional and three-dimensional flows. Several test cases are selected to evaluate the present methods, including: (i) two-dimensional pressure-driven channel flow; (ii) two-dimensional uniform flow past a column of cylinders; (iii) two-dimensional flow past a cylinder asymmetrically placed in a channel (with vortex shedding); (iv) three-dimensional pressure-driven flow in a circular pipe; and (v) three-dimensional flow past a sphere. The drag evaluated by using the momentum-exchange method agrees well with the exact or other published results.
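
    A minimal sketch of the momentum-exchange accumulation on a D2Q9 lattice (the distribution arrays and solid mask are placeholders; this shows the standard bounce-back form of the method, not the paper's full curved-boundary treatment): for every boundary link the momentum handed to the body per step is e_i [f_i~(x_f, t) + f_ibar(x_f, t + dt)], where f_i~ is the post-collision distribution and f_ibar the bounced-back one.

        import numpy as np

        e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 velocities
        opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])          # opposite directions

        def momentum_exchange_force(f_post, f_next, solid):
            """f_post, f_next: (9, nx, ny) distributions at t (post-collision)
            and t + dt; solid: (nx, ny) boolean mask of the body."""
            force = np.zeros(2)
            nx, ny = solid.shape
            for i in range(1, 9):
                for x in range(nx):
                    for y in range(ny):
                        if solid[x, y]:
                            continue
                        xn, yn = x + e[i, 0], y + e[i, 1]
                        if 0 <= xn < nx and 0 <= yn < ny and solid[xn, yn]:
                            force += e[i] * (f_post[i, x, y] + f_next[opp[i], x, y])
            return force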

  5. Visualization of vasculature with convolution surfaces: method, validation and evaluation.

    PubMed

    Oeltze, Steffen; Preim, Bernhard

    2005-04-01

    We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons. PMID:15822811

  6. Program Evaluation of the Sustainability of Teaching Methods

    ERIC Educational Resources Information Center

    Bray, Cathy

    2008-01-01

    This paper suggests a particular question that higher education researchers might ask: "Do educational programs use teaching methods that are environmentally, socially and economically sustainable?" It further proposes that program evaluation research (PER) can be used to answer the question. Consideration is given to: a) program evaluation…

  7. Student Teachers' Views about Assessment and Evaluation Methods in Mathematics

    ERIC Educational Resources Information Center

    Dogan, Mustafa

    2011-01-01

    This study aimed to find out assessment and evaluation approaches in a Mathematics Teacher Training Department based on the views and experiences of student teachers. The study used a descriptive survey method, with the research sample consisting of 150 third- and fourth-year Primary Mathematics student teachers. Data were collected using a…

  8. Evaluation of methods for nondestructive testing of brazed joints

    NASA Technical Reports Server (NTRS)

    Kanno, A.

    1968-01-01

    Evaluation of nondestructive methods of testing brazed joints reveals that ultrasonic testing is effective in the detection of nonbonds in diffusion bonded samples. Radiography provides excellent resolutions of void or inclusion defects, and the neutron radiographic technique shows particular advantage for brazing materials containing cadmium.

  9. Comparison of two common methods of surface-topography evaluation

    SciTech Connect

    Gauler, A.L.

    1981-01-01

    Some of the advantages and limitations of two methods used for surface topography evaluation, the dual beam interference microscope and the stylus type profiling instrument, are compared. Consideration is primarily limited to diamond machined or other high quality surfaces, such as are commonly encountered on optical elements. Parameters discussed include horizontal and vertical resolution, horizontal and vertical range, and surface damage.

  10. IMPROVEMENT AND EVALUATION OF METHODS FOR SULFATE ANALYSIS

    EPA Science Inventory

    A simpler and faster procedure for the manual turbidimetric analysis of sulfate has been developed and evaluated. This method as well as a turbidimetric procedure using SulfaVer(R), automated methylthymol blue (MTB) procedures for analysis in the 0-100 micrograms/ml and 0-10 micr...

  11. Improving Mandatory Tutoring: A Mixed-Methods Program Evaluation

    ERIC Educational Resources Information Center

    Baggett, Brooks

    2009-01-01

    In recent years, the local school leadership in a suburban southern U.S. high school adopted innovative academic intervention programs to assist underperforming students but did not develop formal methods to evaluate program effectiveness. This gap in the professional practice continued with the inception of mandatory tutoring (MT), an…

  12. DEVELOPMENT OF CRITERIA AND METHODS FOR EVALUATING TRAINER AIRCRAFT EFFECTIVENESS.

    ERIC Educational Resources Information Center

    KUSEWITT, J.B.

    The purpose of this study was to develop a method for determining objective measures of trainer aircraft effectiveness to evaluate program alternatives for training pilots for fleet fighter and attack-type aircraft. The training syllabus was based on average student ability. The basic problem was to establish quantitative time-difficulty…

  13. Evaluation of Alternative Difference-in-Differences Methods

    ERIC Educational Resources Information Center

    Yu, Bing

    2013-01-01

    Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…

  14. An Evaluation of a New Method of IRT Scaling

    ERIC Educational Resources Information Center

    Ragland, Shelley

    2010-01-01

    In order to be able to fairly compare scores derived from different forms of the same test within the Item Response Theory framework, all individual item parameters must be on the same scale. A new approach, the RPA method, which is based on transformations of predicted score distributions was evaluated here and was shown to produce results…

  15. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
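
    A minimal sketch of the HORRAT calculation (the predicted denominator comes from the Horwitz equation, PRSD(%) = 2^(1 - 0.5 log10 C) with C the concentration as a mass fraction; the inputs below are invented):

        import math

        def horwitz_prsd(mass_fraction):
            """Predicted among-laboratories RSD (%) from the Horwitz equation."""
            return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

        def horrat(found_rsd_percent, mass_fraction):
            """Found among-lab RSD divided by the Horwitz-predicted RSD."""
            return found_rsd_percent / horwitz_prsd(mass_fraction)

        # Example: analyte at 1 ppm (C = 1e-6), found among-lab RSD of 22%.
        print(f"HORRAT = {horrat(22.0, 1e-6):.2f}")  # values near 1 are typical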

  16. METHODS FOR EVALUATING THE SUSTAINABILITY OF GREEN PROCESSES

    EPA Science Inventory

    Methods for Evaluating the Sustainability of Green Processes

    By Raymond L. Smith and Michael A. Gonzalez
    U.S. Environmental Protection Agency
    Office of Research and Development
    26 W. Martin Luther King Dr.
    Cincinnati, OH 45268 USA

    Theme: New Challenges...

  17. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.

  18. Sensitivity evaluation of dynamic speckle activity measurements using clustering methods

    SciTech Connect

    Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H.

    2010-07-01

    We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suited feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
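
    A minimal sketch of the feature construction and partitioning described above (the synthetic stack, the 'db4' wavelet, the decomposition level, and K-means as the clusterer are our choices for illustration): each pixel's intensity history is wavelet-decomposed and the mean energy of the coefficients at each level forms its feature vector.

        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        stack = np.random.rand(256, 64, 64)   # (time, rows, cols) speckle stack
        T, H, W = stack.shape

        features = []
        for ts in stack.reshape(T, -1).T:                  # one series per pixel
            coeffs = pywt.wavedec(ts, 'db4', level=4)
            features.append([np.mean(c ** 2) for c in coeffs])  # energy per level
        features = np.asarray(features)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
        activity_map = labels.reshape(H, W)                # activity classes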

  19. A Safety Index and Method for Flightdeck Evaluation

    NASA Technical Reports Server (NTRS)

    Latorella, Kara A.

    2000-01-01

    If our goal is to improve safety through machine, interface, and training design, then we must define a metric of flightdeck safety that is usable in the design process. Current measures associated with our notions of "good" pilot performance and ultimate safety of flightdeck performance fail to provide an adequate index of safe flightdeck performance for design evaluation purposes. The goal of this research effort is to devise a safety index and method that allows us to evaluate flightdeck performance holistically and in a naturalistic experiment. This paper uses Reason's model of accident causation (1990) as a basis for measuring safety, and proposes a relational database system and method for 1) defining a safety index of flightdeck performance, and 2) evaluating the "safety" afforded by flightdeck performance for the purpose of design iteration. Methodological considerations, limitations, and benefits are discussed as well as extensions to this work.

  20. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software package was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we discuss new values of the charge generation constants calculated from improved experimental procedures and data. Planned work to include dielectrophoretic forces, van der Waals forces, and advanced mechanical forces in the software is also discussed.

  1. Measuring the effectiveness of methods for evaluating noise jammers

    NASA Astrophysics Data System (ADS)

    Hu, Fang; Huang, Jian-Guo

    2007-09-01

    Reliable evaluations of a noise jammer’s effectiveness are necessary to properly design, manufacture, and operate one, so it is important to have an evaluation model. Based on their characteristics and principles, relevant factors were classified in terms of their contribution to a unit’s effectiveness. In this way an evaluation index system was established. In the proposed mathematical model, a noise jammer is analyzed by combining the model of system effectiveness with the analytic hierarchy process method. A simulation of underwater acoustic countermeasures was used to test the rationality and feasibility of the model. The results showed that this model is an effective way to solve the challenge of evaluating the effectiveness of non-offensive weapons within a single working phase.
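
    A minimal sketch of the analytic hierarchy process step used in such index systems (the 3×3 pairwise comparison matrix is an invented example, not the paper's index system): criterion weights are the normalized principal eigenvector, with a consistency check.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3., 1.0, 2.0],
                      [1/5., 1/2., 1.0]])   # pairwise comparisons (invented)

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                        # normalized criterion weights

        n = A.shape[0]
        CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
        CR = CI / 0.58                         # random index RI = 0.58 for n = 3
        print(f"weights = {w}, CR = {CR:.3f} (acceptable if < 0.1)")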

  2. Holistic Evaluation of Lightweight Operating Systems using the PERCU Method

    SciTech Connect

    Kramer, William T.C.; He, Yun; Carter, Jonathan; Glenski, Joseph; Rippe, Lynn; Cardo, Nicholas

    2008-05-01

    The scale of Leadership Class Systems presents unique challenges to the features and performance of operating system services. This paper reports results of comprehensive evaluations of two Light Weight Operating Systems (LWOS), Cray's Catamount Virtual Node (CVN) and Linux Environment (CLE) operating systems, on the exact same large-scale hardware. The evaluation was carried out over a 5-month period on NERSC's 19,480 core Cray XT-4, Franklin, using a comprehensive evaluation method that spans Performance, Effectiveness, Reliability, Consistency and Usability criteria for all major subsystems and features. The paper presents the results of the comparison between CVN and CLE, evaluates their relative strengths, and reports observations regarding the world's largest Cray XT-4 as well.

  3. Robust flicker evaluation method for low power adaptive dimming LCDs

    NASA Astrophysics Data System (ADS)

    Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik

    2015-05-01

    This paper describes a robust dimming-flicker evaluation method for adaptive dimming algorithms in low power liquid crystal displays (LCDs). While previous methods use sum of squared difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values capture only the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image quality and power consumption.

  4. Economic evaluation in patient safety: a literature review of methods.

    PubMed

    de Rezende, Bruna Alves; Or, Zeynep; Com-Ruelle, Laure; Michel, Philippe

    2012-06-01

    Patient safety practices, targeting organisational changes for improving patient safety, are implemented worldwide but their costs are rarely evaluated. This paper provides a review of the methods used in economic evaluation of such practices. International medical and economics databases were searched for peer-reviewed publications on economic evaluations of patient safety between 2000 and 2010 in English and French. This was complemented by a manual search of the reference lists of relevant papers. Grey literature was excluded. Studies were described using a standardised template and assessed independently by two researchers according to six quality criteria. 33 articles were reviewed that were representative of different patient safety domains, data types and evaluation methods. 18 estimated the economic burden of adverse events, 3 measured the costs of patient safety practices and 12 provided complete economic evaluations. Healthcare-associated infections were the most common subject of evaluation, followed by medication-related errors and all types of adverse events. Of these, 10 were selected that had adequately fulfilled one or several key quality criteria for illustration. This review shows that full cost-benefit/utility evaluations are rarely completed as they are resource intensive and often require unavailable data; some overcome these difficulties by performing stochastic modelling and by using secondary sources. Low methodological transparency can be a problem for building evidence from available economic evaluations. Investing in the economic design and reporting of studies with more emphasis on defining study perspectives, data collection and methodological choices could be helpful for strengthening our knowledge base on practices for improving patient safety. PMID:22396602

  5. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yield results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit applying the evaluation strategy outlined in this study. PMID:27119393

  6. Applied methods of testing and evaluation for IR imaging system

    NASA Astrophysics Data System (ADS)

    Liao, Xiao-yue; Lu, Jin

    2009-07-01

    Different methods of testing and evaluation for IR imaging systems are used with the application of the 2nd and the 3rd generation infrared detectors. The performance of an IR imaging system can be reflected by many specifications, such as Noise Equivalent Temperature Difference (NETD), Nonuniformity, system Modulation Transfer Function (MTF), Minimum Resolvable Temperature Difference (MRTD), and Minimum Detectable Temperature Difference (MDTD), etc. The sensitivity of IR sensors is estimated by NETD. The sensitivity and spatial resolution of thermal imaging sensors are evaluated by MRTD, which is the chief specification of the system. In this paper, the theoretical analysis of different testing methods is introduced. Their characteristics are analyzed and compared. Based on a discussion of the factors that affect measurement results, an applied method of testing NETD and MRTD for IR systems is proposed.

  7. Recyclability Evaluation Method Considering Material Combination and Degradation

    NASA Astrophysics Data System (ADS)

    Oyasato, Naohiko; Kobayashi, Hideki

    A new method of recyclability evaluation is proposed. The recyclability of a product is given by summing up recyclability of all units to which the product is manually disassembled. The recyclability of a unit is calculated if all names and amounts of materials of which the unit is composed are known. The recyclability of a disassembled unit consisting of multiple materials is judged on the grounds of removability of impurities, miscibility and marketability of polymer blends. Recyclability of a long-lifetime product can be estimated from recyclability of units, which are modeled as probabilistically distributed degradation of materials. The proposed method is applied to recyclability evaluation for a refrigerator with several scenarios of disassembly levels. The practical disassembly scenarios limit the maximum recyclability rate of the product. Therefore, recyclability rates calculated based on the proposed method are considerably lower than those of the recyclable materials of which the product consisted.

  8. Precise baseline determination for the TanDEM-X mission

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Moon, Yongjin; Neumayer, Hans; Wermuth, Martin; Montenbruck, Oliver; Jäggi, Adrian

    The TanDEM-X mission will strive for generating a global precise Digital Elevation Model (DEM) by way of bi-static SAR in a close formation of the TerraSAR-X satellite, already launched on June 15, 2007, and the TanDEM-X satellite to be launched in May 2010. Both satellites carry the Tracking, Occultation and Ranging (TOR) payload supplied by the GFZ German Research Centre for Geosciences. The TOR consists of a high-precision dual-frequency GPS receiver, called Integrated GPS Occultation Receiver (IGOR), and a Laser retro-reflector (LRR) for precise orbit determination (POD) and atmospheric sounding. The IGOR is of vital importance for the TanDEM-X mission objectives, as the millimeter-level determination of the baseline or distance between the two spacecraft is needed to derive meter-level accurate DEMs. Within the TanDEM-X ground segment, GFZ is responsible for the operational provision of precise baselines. For this GFZ uses two software chains, first its Earth Parameter and Orbit System (EPOS) software and second the BERNESE software, for backup purposes and quality control. In a concerted effort, the German Aerospace Center (DLR) also generates precise baselines independently with a dedicated Kalman filter approach realized in its FRNS software. Using GRACE as an example, the generation of baselines with millimeter accuracy from on-board GPS data can be validated directly by comparing them to the intersatellite K-band range measurements. The K-band ranges are accurate down to the micrometer level and therefore may be considered as truth. Both TanDEM-X baseline providers are able to generate GRACE baselines with sub-millimeter accuracy. By merging the independent baselines from GFZ and DLR, the accuracy can be increased even further. The K-band validation, however, covers solely the along-track component, as the K-band data measure just the distance between the two GRACE satellites. In addition, they exhibit an unknown bias which must be modelled in the comparison, so the

  9. Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment

    NASA Astrophysics Data System (ADS)

    Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc

    2013-04-01

    fracture persistency in order to enhance the possible contribution of rock bridges on the failure surface development. It is believed that the proposed methodology can bring valuable complementary information for rock slope stability analysis in the presence of complex fractured systems for which a classical "Factor of Safety" is difficult to express. References • Harthong B., Scholtès L. & F.V. Donzé, Strength characterization of rock masses, using a coupled DEM-DFN model, Geophysical Journal International, doi: 10.1111/j.1365-246X.2012.05642.x, 2012. • Kozicki J. & Donzé F.V., YADE-OPEN DEM: an open-source software using a discrete element method to simulate granular material, Engineering Computations, 26(7):786-805, 2009. • Kozicki J. & Donzé F.V., A new open-source software developed for numerical simulations using discrete modeling methods, Comp. Meth. in Appl. Mech. and Eng. 197:4429-4443, 2008. • Poropat, G.V., New methods for mapping the structure of rock masses. In Proceedings, Explo 2001, Hunter Valley, New South Wales, 28-31 October 2001, pp. 253-260, 2001. • Scholtès, L. & Donzé, F.V., Modelling progressive failure in fractured rock masses using a 3D discrete element method, International Journal of Rock Mechanics and Mining Sciences, 52:18-30, 2012a. • Scholtès, L. & Donzé, F.-V., DEM model for soft and hard rocks: role of grain interlocking on strength, J. Mech. Phys. Solids, doi: 10.1016/j.jmps.2012.10.005, 2012b. • Sirovision, Commonwealth Scientific and Industrial Research Organisation CSIRO, Siro3D Sirovision 3D Imaging Mapping System Manual Version 4.1, 2010

  10. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  11. Development of evaluation method for software hazard identification techniques

    SciTech Connect

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-07-01

    This research evaluated the currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. By this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive due to the difficulty of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show an advantage in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness, complexity, and implementation cost. This evaluation method can be a platform for reaching common consensus among the stakeholders. Following the evolution of software hazard identification techniques, the evaluation results could change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)

  12. Evaluation of Hamaker coefficients using Diffusion Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Maezono, Ryo; Hongo, Kenta

    We evaluated the Hamaker constant for cyclohexasilane, which is used as an ink of 'liquid silicon' in 'printed electronics', to investigate its wettability. Taking three representative geometries of the dimer coalescence (parallel, lined, and T-shaped), we evaluated their binding curves using the diffusion Monte Carlo method. The parallel geometry gave the most long-ranged asymptotic behavior, ~1/r^6. The evaluated binding lengths are fairly consistent with the experimental density of the molecule. Fitting the asymptotic curve gave an estimate of the Hamaker constant of around 100 zJ. A CCSD(T) evaluation gave an almost identical result. To check the justification of the scheme, we applied it to benzene and compared the estimate with those from other established methods, Lifshitz theory and SAPT (symmetry-adapted perturbation theory). The result from the fitting scheme turned out to be twice as large as those from Lifshitz theory and SAPT, which coincide with each other. It is hence implied that the present evaluation for cyclohexasilane may be overestimated.
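
    A minimal sketch of the fitting step (the separations, energies, and number density are invented placeholders; the conversion uses the classical pairwise-summation relation A = π² ρ² C6, which is an assumption about the scheme, not a statement of the paper's procedure):

        import numpy as np

        # Fit the long-range tail of a dimer binding curve to E(r) = -C6/r^6,
        # then convert C6 to a Hamaker constant.
        r = np.linspace(8e-10, 20e-10, 20)     # separations [m]
        E = -1.0e-76 / r**6                    # synthetic tail energies [J]

        C6 = -np.polyfit(1.0 / r**6, E, 1)[0]  # slope of E vs 1/r^6 is -C6
        rho = 1.0e28                           # molecular number density [m^-3]
        A = np.pi**2 * rho**2 * C6             # Hamaker constant [J]
        print(f"C6 = {C6:.3e} J m^6, A = {A / 1e-21:.1f} zJ")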

  13. A new method to evaluate human-robot system performance.

    PubMed

    Rodriguez, G; Weisbin, C R

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated. PMID:12703512

  14. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase company's productivity and to identify strengths and weaknesses of each operator. In this paper, six multi criteria decision making methods; Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria; competency, experience and skill, teamwork and time punctuality, personal characteristics, capability and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that AHP and FAHP yielded the "outcome" criteria as the most important criteria. The results of operator performance evaluation showed that the same operator is ranked the first using all six methods.
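
    A minimal sketch of one of the six listed methods (TOPSIS; the operator scores and weights are invented, and all six criteria are treated as benefit criteria for simplicity):

        import numpy as np

        X = np.array([[7., 8., 6., 9., 7., 8.],
                      [6., 9., 7., 7., 8., 7.],
                      [8., 7., 8., 6., 7., 9.]])      # 3 operators x 6 criteria
        w = np.array([.20, .15, .15, .15, .15, .20])  # criterion weights

        V = w * X / np.linalg.norm(X, axis=0)         # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal points

        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)      # rank by closeness

        print("ranking (best first):", np.argsort(-closeness))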

  15. Mass Balance of Glaciers In Southern Chile, Based On Dems From Aster and Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Rivera, A.; Casassa, G.; Bown, F.; Fernandez, A.

    The glaciers located in the Chilean southern Andes region (41-51S) have been retreating and shrinking during most of the last century, in response to a climate warming trend recognised in many climatic stations of the country. During recent years, several calving and small mountain glaciers have been analysed, in an attempt to correlate the short historical glacier variation (no longer than 150 years) with long term dendrochronological series (from 300 to 1000 years). The aim of this analysis is to understand climate change during the last millennia, as well as the mechanisms of glacier response to such climatic changes. In this context, mass balance studies are one of the most important approaches to determine the specific relationship of glaciers to annual and decadal climatic changes. In Chile, only one glacier (glaciar Echaurren, 33S) has been systematically measured since 1975, generating the longest mass balance series of the country. To account for the mass balance of glaciers in the southern region of Chile, a geodetic method is presented, based upon the comparison of digital elevation models (DEM) obtained from aerial photographs and ASTER imagery from different dates. This method has been applied to glaciar Chico, located at 49S in the Southern Patagonia Icefield, where we have generated DEMs from aerial photographs of 1975 and 1995, as well as one DEM from an ASTER image of October 2001. The DEMs are geo-referenced to a network of GPS points, measured in several field campaigns carried out during recent years at rock outcrops and in the accumulation area of the glacier. The last campaign was done during September and October 2001, allowing a high accuracy ground control validation for the DEM derived from the contemporary ASTER image. The mass balance analysis is complemented with frontal variations derived from Landsat TM imagery, as well as field data and aerial photographs. One preliminary result of this study shows a consistent ice thinning, at
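
    A minimal sketch of the DEM-differencing arithmetic behind a geodetic mass balance (the grids are stand-ins; the 850 kg m^-3 volume-to-mass conversion is a common convention but is our assumption here, not the paper's stated value):

        import numpy as np

        dem_1975 = np.random.rand(200, 200) * 50 + 1000   # stand-in DEM [m]
        dem_2001 = dem_1975 - np.random.rand(200, 200) * 2

        cell_area = 30.0 * 30.0                 # grid cell area [m^2]
        dh = dem_2001 - dem_1975                # elevation change [m]
        dV = dh.sum() * cell_area               # volume change [m^3]
        dM = dV * 850.0                         # mass change [kg]

        years = 2001 - 1975
        print(f"mean dh/dt = {dh.mean() / years:.3f} m/yr, dM = {dM / 1e12:.4f} Gt")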

  16. A novel objective evaluation method for trunk function

    PubMed Central

    Kinoshita, Kazuaki; Hashimoto, Masashi; Ishida, Kazunari; Yoneda, Yuki; Naka, Yuta; Kitanishi, Hideyuki; Oyagi, Hirotaka; Hoshino, Yuichi; Shibanuma, Nao

    2015-01-01

    [Purpose] To investigate whether an objective evaluation method for trunk function, namely the “trunk righting test”, is reproducible and reliable by testing on different observers (from experienced to beginners) and by confirming the test-retest reliability. [Subjects] Five healthy subjects were evaluated in this correlation study. [Methods] A handheld dynamometer was used in the assessments. The motor task was a trunk righting motion by moving the part with the sensor pad 10 cm outward from the original position. During measurement, the posture was held at maximum effort for 5 s. Measurement was repeated three times. Interexaminer reproducibility was examined in two physical therapists with 1 year of experience and one physical therapist with 7 years of experience. The measured values were evaluated for reliability by using intraclass correlation coefficients (ICC 1.1) and interclass correlation coefficients (ICC 2.1). [Results] The test-retest reliability ICC 1.1 and ICC 2.1 were all high. The ICC 1.1 was >0.90. The ICC 2.1 was 0.93. [Conclusion] We developed the trunk righting test as a novel objective evaluation method for trunk function. As the study included inexperienced therapists, the results suggest that the trunk righting test could be used in the clinic, independent of the experience of the therapists. PMID:26157279
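
    A minimal sketch of the ICC(1,1) computation used for such test-retest reliability, via one-way ANOVA across n subjects rated k times (the ratings below are invented):

        import numpy as np

        # ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
        ratings = np.array([[52., 54., 53.],   # rows: subjects, cols: repeats [N]
                            [47., 46., 48.],
                            [60., 61., 59.],
                            [55., 53., 54.],
                            [49., 50., 51.]])

        n, k = ratings.shape
        grand = ratings.mean()
        ss_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum()

        msb = ss_between / (n - 1)
        msw = ss_within / (n * (k - 1))
        icc_11 = (msb - msw) / (msb + (k - 1) * msw)
        print(f"ICC(1,1) = {icc_11:.3f}")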

  17. An IMU Evaluation Method Using a Signal Grafting Scheme

    PubMed Central

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-01-01

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%–26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method. PMID:27294932

  18. [Methods of evaluating labor progress in contemporary obstetrics].

    PubMed

    Głuszak, Michał; Fracki, Stanisław; Wielgoś, Mirosław; Wegrzyn, Piotr

    2013-08-01

    Assessment of progress in labor is one of the foremost problems in obstetrics. Obstructed labor increases danger to maternal and fetal life and health, and may be caused by birth canal pathologies, as well as inefficient uterine contractions or failure of cervical dilation. Such obstructions require the use of vacuum extraction, forceps, or a Caesarean section. Operative delivery should be performed only when specifically indicated. Conversely, postponing an operative delivery when the procedure is necessary is detrimental to the neonatal outcome. Therefore, it is advisable to make the decision on the basis of objective, measurable parameters. Methods of evaluating the risk of labor disorders have evolved over the years. Currently, ultrasonography is used for fetal biometric measurements and weight estimation, which helps to evaluate the risk of labor disorders. This method, however, is limited by a relatively large measurement error. At present, vaginal examination is still the primary method of evaluating labor progress, although the technique is known to be operator-dependent and poorly reproducible. Recent publications suggest that intrapartum translabial ultrasonography is more accurate and allows for an objective assessment of labor progress. Recent studies have evaluated fetal head engagement based on the following parameters: the angle between the pubic symphysis and fetal head, the distance between the presenting point and the interspinous line, and fetal head direction in the birth canal. Each of the described parameters allowed for an objective assessment of head engagement, but no advantage of any particular parameter has been revealed so far. PMID:24191505

  19. An IMU Evaluation Method Using a Signal Grafting Scheme.

    PubMed

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-01-01

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%-26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method. PMID:27294932

  20. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    PubMed

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, in this study we propose a new evaluation method based on chaos nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction was applied under the subjects' feet, supplied by a motion platform. In addition, three acceleration sensors, fixed to the shoulder, hip and knee of each subject, were used to capture the balance adjustment dynamics. After reconstructing the system phase space, we calculated the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then used the sum of the squares of the differences between the LLEs (SSDLLE) as the balance capability evaluation index. Finally, 20 subjects' indexes were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment, and it could measure the body's balance ability to some extent. Moreover, the results also illustrated that balance level is determined by the coordination of the various joints, and there might be multiple balance control strategies in the process of maintaining balance. PMID:27079089
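
    A minimal sketch of the index itself, once per-segment LLEs are in hand (the abstract does not spell out the pairing, so the sum of squared pairwise differences used here, and the LLE values, are assumptions):

        import numpy as np
        from itertools import combinations

        # Per-segment largest Lyapunov exponents (invented values).
        lle = {"shoulder": 0.042, "hip": 0.031, "knee": 0.055}

        # SSDLLE read here as the sum of squared pairwise LLE differences.
        ssdlle = sum((a - b) ** 2 for a, b in combinations(lle.values(), 2))
        print(f"SSDLLE = {ssdlle:.6f}")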

  1. DEM modelling, vegetation characterization and mapping of aspen parkland rangeland using LIDAR data

    NASA Astrophysics Data System (ADS)

    Su, Guangquan

    Detailed geographic information system (GIS) studies on plant ecology, animal behavior and soil hydrologic characteristics across spatially complex landscapes require an accurate digital elevation model (DEM). Following interpolation of last-return LIDAR data and creation of a LIDAR-derived DEM, a series of 260 points, stratified by vegetation type, slope gradient and off-nadir distance, were ground-truthed using a total laser station, GPS, and 27 interconnected benchmarks. Despite an overall mean error of +2 cm across 8 vegetation types, the DEM had an RMSE (root mean square error) of 1.21 m. DEM elevations were over-estimated within forested areas by an average of 20 cm with an RMSE of 1.05 m, and under-estimated within grasslands (-12 cm, RMSE = 1.36 m). Vegetation type had the greatest influence on DEM accuracy, while off-nadir distance (P = 0.48) and slope gradient (P = 0.49) did not individually influence DEM accuracy; however, the latter factors did interact (P < 0.10) to affect accuracy. Vegetation spatial structure (i.e., physiognomy), including plant height, cover, and vertical or horizontal heterogeneity, comprises important factors influencing biodiversity. Vegetation overstory and understory were sampled for height, canopy cover, and tree or shrub density within 120 field plots, evenly stratified by vegetation formation (grassland, shrubland, and aspen forest). Results indicated that LIDAR data could be used for estimating the maximum height, cover, and density of both closed and semi-open stands of aspen (P < 0.001). However, LIDAR data could not be used to assess understory (<1.5 m) height within aspen stands, nor grass height and cover. Recognition and mapping of vegetation types are important for rangelands as they provide a basis for the development and evaluation of management policies and actions. In this study, LIDAR data were found to be superior to digital classification schedules for their mapping accuracy in aspen forest and grassland, but not shrubland

  2. Wiener spectrum of radiographic systems: comparison of different evaluation methods.

    PubMed

    Bregant, P; De Denaro, M; De Guarrini, F; Borasi, G

    1997-05-01

    The noise power spectrum, or Wiener spectrum, of the radiographic mottle is a fundamental quantity in film-screen image quality evaluation. In this paper, using a high-quality computerized microdensitometer, two different acquisition and calculation methods for noise evaluation are compared. The first is the classic (one-dimensional) method used in film noise evaluation: a long and narrow slit (10 × 400 µm²) delimits the microdensitometer light beam and the transmission data are collected by scanning the sample in a rectilinear pattern. A section of the two-dimensional Wiener spectrum is thus obtained. The second (two-dimensional) method is similar to that used in digital image noise evaluation: a square slit is used on the microdensitometer window and data are collected by scanning the sample in a square pattern. To evaluate the effect of different sampling frequencies, our data were acquired with both a 50 × 50 µm² square slit and a 20 × 20 µm² square slit. The two-dimensional Wiener spectrum thus obtained is then reduced to a one-dimensional function. The measurements were made on two different films (Kodak Ortho G and Kodak T-MAT G) exposed with the same screen (Kodak Lanex Regular). These films have the same sensitivity but a different emulsion structure: one film (Ortho G) is made of irregular silver halide grains and the other (T-MAT G) of tabular grains. A satisfactory agreement between the two procedures was found, which makes meaningful the comparison of data from laboratories using microdensitometers with those using TV frame-grabbing systems for film-screen evaluation. PMID:9251741
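
    The two-dimensional route can be sketched as a 2-D periodogram of the mean-subtracted density scan followed by a radial average down to one dimension. The normalization below (pitch²/N²) is one common convention, not necessarily the paper's, and scan stands for a hypothetical N × N block of density readings:

    ```python
    import numpy as np

    def wiener_spectrum_2d(d, pitch):
        """2-D noise power spectrum of an N x N density scan; pitch in mm.
        W(u, v) = |FFT(d - mean)|^2 * pitch^2 / N^2."""
        d = np.asarray(d, float)
        d = d - d.mean()
        n = d.shape[0]
        W = np.abs(np.fft.fftshift(np.fft.fft2(d))) ** 2 * pitch**2 / n**2
        f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))      # cycles/mm
        return W, f

    def radial_average(W, f, nbins=40):
        """Reduce the 2-D spectrum to a 1-D function of radial frequency."""
        u, v = np.meshgrid(f, f)
        r = np.hypot(u, v).ravel()
        w = W.ravel()
        edges = np.linspace(0.0, r.max(), nbins + 1)
        idx = np.digitize(r, edges) - 1
        prof = [w[idx == i].mean() if np.any(idx == i) else 0.0
                for i in range(nbins)]
        return edges[:-1], np.array(prof)

    # W, f = wiener_spectrum_2d(scan, pitch=0.05)   # 50 um square slit
    # freqs, ws1d = radial_average(W, f)
    ```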

  3. Comparative evaluation of patellar height methods in the Brazilian population

    PubMed Central

    Behrendt, Christian; Zaluski, Alexandre; e Albuquerque, Rodrigo Pires; de Sousa, Eduardo Branco; Cavanellas, Naasson

    2015-01-01

    Objective The methods most used for patellar height measurement were compared with the plateau–patella angle method. Methods A cross-sectional study was conducted, in which lateral-view radiographs of the knee were evaluated using the three methods already established in the literature: Insall–Salvati (IS), Blackburne–Peel (BP) and Caton–Deschamps (CD). These were compared with the plateau–patella angle method. One hundred and ninety-six randomly selected patients were included in the sample. Results The data were initially evaluated using the chi-square test. This analysis was deemed to be positive with p < 0.0001. We compared the traditional methods with the plateau–patella angle measurement, using Fisher's exact test. In comparing the IS index with the plateau–patella angle, we did not find any statistically significant differences in relation to the proportion of altered cases between the two groups. The traditional methods were compared with the plateau–patella angle with regard to the proportions of cases of high and low patella, by means of Fisher's exact test. This analysis showed that the plateau–patella angle identified fewer cases of high patella than did the IS, BP and CD methods, but more cases of low patella. In comparing pairs, we found that the IS and CD indices were capable of identifying more cases of high patella than was the plateau–patella angle. In relation to the cases of low patella, the plateau–patella angle was capable of identifying more cases than were the other three methods. Conclusions The plateau–patella angle found more patients with low patella than did the classical methods and showed results that diverged from those of the other indices studied. PMID:26962492

  4. A novel method to evaluate spin diffusion length of Pt

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-qing; Sun, Niu-yi; Che, Wen-ru; Shan, Rong; Zhu, Zhen-gang

    2016-05-01

    The spin diffusion length of Pt is evaluated via the proximity effect of spin-orbit coupling (SOC) and the anomalous Hall effect (AHE) in Pt/Co2FeAl bilayers. By varying the thicknesses of the Pt and Co2FeAl layers, the thickness dependences of the AHE parameters can be obtained, which are theoretically predicted to be proportional to the square of the SOC strength. According to the physical picture of the SOC proximity effect, the spin diffusion length of Pt can easily be identified from these thickness dependences. This work provides a novel method to evaluate the spin diffusion length in a material where its value is small.

  5. Initial Results of an MDO Method Evaluation Study

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the iSIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date; the results are not conclusive because of the small initial test set. MDO formulations can be analyzed in terms of optimality conditions and the sensitivity of solutions to various perturbations, while optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  6. Field evaluation of personal sampling methods for multiple bioaerosols.

    PubMed

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419

  8. Retractions of the gingival margins evaluated by holographic methods

    NASA Astrophysics Data System (ADS)

    Sinescu, Cosmin; Negrutiu, Meda Lavinia; Manole, Marius; de Sabata, Aldo; Rusu, Laura-Cristina; Stratul, Stefan; Dudea, Diana; Dughir, Ciprian; Duma, Virgil-Florin

    2015-05-01

    Periodontal disease is one of the most common pathological states of the teeth and gums system. Its evaluation, however, is subjective, relying on the skills of the dental practitioner. As for any clinical condition, a quantitative evaluation and monitoring in time of the retraction of the gingival margins is desired. This phenomenon was evaluated in this study with a holographic method using a He-Ne laser with a power of 13 mW. The holographic system we utilized, adapted for dentistry applications, is described. Several patients were considered in a comparative study of the state of health of their oral cavity. Impressions of the maxillary dental arch were taken from each patient during the first visit and after a period of six months. The hologram of the first model was superposed on the model cast after the second visit, so the retraction of the gingival margins could be evaluated three-dimensionally at every point of interest. Conclusions can thus be drawn for the clinical evaluation of the health of the teeth and gums of each patient.

  9. Explosion protection methods for the power generation industry. Evaluating the hazard and reviewing explosion protection methods

    SciTech Connect

    Nixon, C.I.

    1998-07-01

    Handling carbonaceous fuels such as coal presents explosion hazards to the power generation industry. This paper discusses the nature of explosions, provides a basis for hazard evaluation, and reviews the various methods available for explosion protection. These methods include deflagration relief venting, deflagration suppression, deflagration isolation, containment and inerting. Process equipment protected by these methods includes mills, cyclones, silos, hoppers and dust collectors.

  10. The topographic grain concept in DEM-based geomorphometric mapping

    NASA Astrophysics Data System (ADS)

    Józsa, Edina

    2016-04-01

    A common drawback of geomorphological analyses based on digital elevation datasets is the definition of the search window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. The methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search window size is still a limiting factor. A possible way to determine a suitable value for the parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing: by calculating relative relief values with nested neighbourhood matrices it is possible to define a break-point where the growth rate of the local relief encountered by the sample drops significantly, as sketched below. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements, so the threshold needs to be set to a relatively large value; on the other hand, the computational requirements and the size of the study sites determine the upper limit of the value. The aim was therefore to create a tool that helps determine the optimal parameter for the r.geomorphon tool, making it possible to produce more objective and consistent maps that achieve the full efficiency of this mapping technique. For the thorough analysis on the
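
    A minimal sketch of the topographic grain idea as we read it: compute mean local relief over growing square windows and take the window size at which the growth of relief decelerates most sharply. The break-point criterion and window sizes below are our assumptions, not the published script's logic:

    ```python
    import numpy as np
    from scipy import ndimage

    def topographic_grain(dem, sizes):
        """Mean local relief (max - min) for a series of square window sizes."""
        relief = []
        for s in sizes:
            zmax = ndimage.maximum_filter(dem, size=s)
            zmin = ndimage.minimum_filter(dem, size=s)
            relief.append(float((zmax - zmin).mean()))
        return np.array(relief)

    def grain_breakpoint(sizes, relief):
        """Window size where relief growth decelerates most sharply:
        one simple reading of the topographic grain break-point."""
        rate = np.diff(relief) / np.diff(sizes)   # relief gained per extra cell
        return sizes[int(np.argmin(np.diff(rate))) + 1]

    # dem = np.loadtxt("dem.asc", skiprows=6)     # hypothetical gridded DEM
    # sizes = np.arange(3, 61, 2)
    # print(grain_breakpoint(sizes, topographic_grain(dem, sizes)))
    ```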

  11. Multiresolution fusion of radar sounder and altimeter data for the generation of high resolution DEMs of ice sheets

    NASA Astrophysics Data System (ADS)

    Ilisei, Ana-Maria; Bruzzone, Lorenzo

    2015-10-01

    Understanding the dynamics and processes of the ice sheets is crucial for predicting their behavior under climate change. A potential approach to achieve this is to use high-resolution (HR) digital elevation models (DEMs) of the ice surface derived from remote sensing radar or laser altimeters. Unfortunately, at present HR DEMs of large portions of the ice sheets are not available. To address this issue, in this paper we propose a multisensor data fusion technique for the generation of a HR DEM of the ice sheets, which fuses two types of data: radargrams acquired by radar sounder (RS) instruments and ice surface elevation data measured by altimeter (ALT) instruments. The aim of the technique is to generate a DEM of the ice surface at the best possible horizontal resolution by exploiting the complementary characteristics of the RS and ALT data. This is done by defining a novel processing scheme that involves image processing techniques based on data rescaling, geostatistical interpolation and multiresolution analysis (MRA). The method has been applied to a subset of RS and ALT data acquired over a portion of the Byrd Glacier in Antarctica. Experimental results confirm the effectiveness of the proposed method.

  12. SAR interferometry for DEM generation: wide-area error assessment

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniel; Broquetas, Antoni; Pena, Ramon; Arbiol, Roman; Castillo, Manuel; Pala, Vincenc

    1998-11-01

    The present work consists of the generation of a DEM using ERS satellite interferometric data over a wide area (50 × 50 km), with an error study using a high-accuracy reference DEM and a focus on atmosphere-induced errors. The area is heterogeneous, with flat and rough topography ranging from sea level up to 1200 m in the inland ranges. The ERS image covers a 100 × 100 km² area and was divided into four quarters to ease the processing. The phase unwrapping algorithm, a combination of region-growing and least-squares techniques, successfully worked out the rough topography areas. One quarter of the full scene was geocoded over a local datum ellipsoid to a UTM grid. The resulting DEM was compared to a reference one provided by the Institut Cartografic de Catalunya. Two types of atmospheric errors or artifacts were found: a set of very localized spots, up to one phase cycle, which generated ghost hills up to 100 m, and a slow trend effect which added up to 50 m to some areas in the image. Besides the atmospheric errors, the quality of the DEM was assessed; the quantitative error study was carried out locally at several areas with different topography.

  13. Spatial characterization of landscapes through multifractal analysis of DEM.

    PubMed

    Aguado, P L; Del Monte, J P; Moratiel, R; Tarquis, A M

    2014-01-01

    Landscape evolution is driven by abiotic, biotic, and anthropic factors. The interactions among these factors and their influence at different scales create a complex dynamic. Landscapes have been shown to exhibit numerous scaling laws, from Horton's laws to more sophisticated scaling of heights in topography and river network topology. This scaling and multiscaling analysis has the potential to characterise the landscape in terms of the statistical signature of the selected measure. The study zone is a matrix obtained from a digital elevation model (DEM) (10 m × 10 m grid, 1 m height resolution) that corresponds to a homogeneous region with respect to soil characteristics and climatology, known as "Monte El Pardo", although the water level of a reservoir and the topography play a main role in its organization and evolution. We investigated whether multifractal analysis of a DEM shows common features that can be used to reveal the underlying patterns and information associated with the landscape of the DEM mapping, and studied the influence of the water level of the reservoir on the applied analysis. The results show that the multifractal approach applied to mean absolute gradient data is a useful tool for analysing the topography represented by the DEM. PMID:25177728
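
    A minimal sketch of the box-counting moment method behind such an analysis: build a normalized measure (here, from the DEM's mean absolute gradient), sum it over boxes of increasing side, and estimate the generalized dimensions D_q from log-log slopes. The moment orders and box sizes are illustrative:

    ```python
    import numpy as np

    def generalized_dimensions(measure, qs, box_sizes):
        """Estimate D_q from log-log slopes of the partition function
        Z_q(s) = sum_i p_i(s)^q over boxes of side s cells."""
        m = measure / measure.sum()
        n = m.shape[0]
        Dq = []
        for q in qs:
            logZ, logS = [], []
            for s in box_sizes:
                k = n // s
                p = m[:k * s, :k * s].reshape(k, s, k, s).sum(axis=(1, 3)).ravel()
                p = p[p > 0]
                if abs(q - 1.0) < 1e-9:       # q -> 1 limit (information dimension)
                    logZ.append(np.sum(p * np.log(p)))
                else:
                    logZ.append(np.log(np.sum(p ** q)))
                logS.append(np.log(s / n))
            slope = np.polyfit(logS, logZ, 1)[0]
            Dq.append(slope if abs(q - 1.0) < 1e-9 else slope / (q - 1.0))
        return np.array(Dq)

    # gy, gx = np.gradient(z)                  # z: the DEM grid
    # Dq = generalized_dimensions(np.abs(gx) + np.abs(gy),
    #                             qs=np.linspace(0, 3, 7),
    #                             box_sizes=(2, 4, 8, 16, 32))
    ```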

  15. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the dependence complexity of the aspects and criteria, together with the linguistic vagueness of some qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed. PMID:20571885

  16. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization problem into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  17. Evaluation of therapeutic pulmonary surfactants by thin liquid film methods.

    PubMed

    Todorov, Roumen; Exerowa, Dotchi; Platikanov, Dimo; Bianco, Federico; Razzetti, Roberta

    2015-08-01

    An example of the application of the Black Foam Film (BFF) Method and the Wetting Film Method, using the microinterferometric and pressure balance techniques, for characterizing the interfacial properties of animal-derived therapeutic pulmonary surfactant preparations (TSP) is presented. BFF thickness, probability of black film formation, and disjoining pressure for foam films from TSP aqueous solutions are measured, and the wetting properties of TSP solutions on solid surfaces with different hydrophobicity are studied. Interfacial characteristics such as the minimal surfactant concentration needed to obtain a black film (critical concentration) and the concentration at which a black film is obtained 100% of the time (threshold concentration) are determined. An evaluation of four widely used TSPs - Curosurf, Infasurf, Survanta, and Alveofact - by these methods has been carried out. Thus the thin-liquid-film methods are useful tools for studying the interfacial properties of TSP solutions, as well as for their improvement. PMID:25132222

  18. Evaluation of Polyesterimide Nanocomposites Using Methods of Thermal Analysis

    NASA Astrophysics Data System (ADS)

    Gornicka, B.; Gorecki, L.; Gryzlo, K.; Kaczmarek, D.; Wojcieszak, D.

    2016-02-01

    Polyesterimide resin applied for winding impregnation was modified by incorporating hydrophilic and hydrophobic nanosilica, montmorillonite and aluminium oxide. For assessment of the resins in the liquid and cured states, the thermoanalytical methods TG/DSC were used. For the pure and nanofilled resins, the results of investigations of AFM topography, bond strength, dielectric strength and partial discharge resistance are also presented. It was found that the dielectric and mechanical properties of polyesterimide resin containing hydrophilic silica, as well as aluminium oxide, were much improved compared to the pure resin. Based on our investigations, we found that the methods of thermal analysis can be very useful for the evaluation of nanocomposites: DSC/TGA study of resins in the liquid state under dynamic conditions can be applied to pre-select nanocomposites; isothermal TG curves of cured resins can be utilized for thermal stability evaluation; and TG study after thermal ageing of cured resins can confirm the barrier properties of nanocomposites.

  19. Evaluation methods in competitive bidding for electric power

    SciTech Connect

    Kahn, E.P.; Goldman, C.A.; Stoft, S.; Berman, D.

    1989-06-01

    Private suppliers of electricity that participate in competitive auctions must specify a multi-year price trajectory. It may be desirable for some bidders to propose prices that exceed the buyer's estimate of value in the short run but still offer substantial long-run benefits. Prices of this kind are called "front loaded." From the buyer's perspective, front-loaded bids are somewhat undesirable: they impose upon the buyer the risk that the long-run benefits will not materialize if the supplier terminates delivery prematurely. In this appendix, we develop a model to evaluate front-loaded bids, which we call the implicit loan (IL) method. In this model, we view front-loading as a loan from buyer to seller. This notion, which has been suggested informally in the past, is formalized and made the basis for an explicit analytic evaluation method.
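
    The implicit loan view lends itself to a simple recursion: the buyer's outstanding "loan" is the compounded sum of the amounts by which bid prices have exceeded the buyer's value so far. A minimal sketch with illustrative numbers (the discount rate and trajectories are assumptions, not the appendix's data):

    ```python
    def implicit_loan_balance(prices, values, rate):
        """Year-by-year outstanding 'loan' from buyer to seller: the
        compounded sum of (bid price - buyer's value) paid so far."""
        balance, path = 0.0, []
        for p, v in zip(prices, values):
            balance = balance * (1 + rate) + (p - v)
            path.append(round(balance, 2))
        return path

    # Front-loaded bid: price exceeds avoided cost in years 1-3, then falls
    # below it; the balance is the buyer's exposure if the seller
    # terminates delivery early. (Illustrative numbers.)
    prices = [60, 58, 56, 40, 38, 36]   # $/MWh bid trajectory
    values = [45, 47, 49, 51, 53, 55]   # buyer's estimated avoided cost
    print(implicit_loan_balance(prices, values, rate=0.08))
    ```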

  20. A Method for Evaluating Volt-VAR Optimization Field Demonstrations

    SciTech Connect

    Schneider, Kevin P.; Weaver, T. F.

    2014-08-31

    In a regulated business environment, a utility must be able to validate that deployed technologies provide quantifiable benefits to end-use customers. For traditional technologies there are well-established procedures for determining what benefits will be derived from the deployment; for many emerging technologies, such procedures are less clear, and in some cases completely absent. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about its potential benefits and how they are achieved. This paper presents a method for the evaluation, and quantification of benefits, of field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.

  1. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  2. Evaluation of preferred lightness rescaling methods for colour reproduction

    NASA Astrophysics Data System (ADS)

    Chang, Yerin

    2012-01-01

    In cross-media colour reproduction, a common goal is achieving media-relative reproduction. Following the ICC specification, this is often accomplished by linearly scaling XYZ data so that the media white of the source data matches that of the destination data. However, in this approach the media black points are not explicitly aligned. To compensate for this problem, it is common to apply a black point compensation (BPC) procedure to improve the mapping of the black points. First, three lightness rescaling methods were chosen: linear, sigmoidal and spline. CIECAM02 was also implemented as a lightness rescaling method; simply, the lightness values produced by CIECAM02 are treated as the reproduced lightness values of an output image. These five methods were applied to a chosen image set, and a paired-comparison psychophysical experiment was performed to evaluate the performance of the lightness rescaling methods. In most images, Adobe's BPC, the linear and the spline lightness rescaling methods were preferred over the CIECAM02 and sigmoidal methods. The confidence interval for a single image set is +/-0.36; with this interval, it is difficult to conclude that the Adobe BPC method works better to a significant degree. However, since every observation is independent, the overall results carry a confidence interval of +/-0.0763, and on that basis Adobe's BPC method performs best.
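
    A minimal sketch of linear black point compensation in the spirit described above, simplified to operate on CIE L* rather than XYZ (the actual ICC/Adobe procedure works in XYZ; the function and values here are illustrative):

    ```python
    def rescale_lightness(L, src_black, dst_black, src_white=100.0,
                          dst_white=100.0):
        """Linear black point compensation on CIE L*: map the source
        lightness range onto the destination range."""
        scale = (dst_white - dst_black) / (src_white - src_black)
        return dst_black + (L - src_black) * scale

    # Source image reaches L* = 3 at its darkest; print can only reach L* = 12
    for L in (3, 30, 60, 100):
        print(L, "->", round(rescale_lightness(L, src_black=3, dst_black=12), 1))
    ```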

  3. High resonant mass sensor evaluation: An effective method

    SciTech Connect

    Tseytlin, Yakov M.

    2005-11-15

    Micro- and nanocantilever mass sensors in higher resonant oscillation modes are very sensitive to an additional mass positioned anywhere along their length except at the nodal points. However, the known methods for evaluating this sensitivity are usually complicated, which is not effective for atomic force microscopy applications. Our solution is simple, unified, and based on the superposition force method, which allows us to estimate effective spring constants and effective mass factors, and to correctly predict the sensitivity of nano- and microcantilevers along their length in higher resonant modes. Simple analytical and computer-aided calculation algorithms are developed. Calculation results are within a few percent of experimental and computer simulation data.

  4. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20°C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  5. Roles and methods of performance evaluation of hospital academic leadership.

    PubMed

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference. PMID:27061556

  6. Evaluation of Cleanliness Test Methods for Spacecraft PCB Assemblies

    NASA Astrophysics Data System (ADS)

    Tegehall, P.-E.; Dunn, B. D.

    2006-10-01

    Ionic contamination on printed-circuit-board assemblies may cause current leakage and short-circuits. The present cleanliness requirement in ECSS-Q-70-08, "The manual soldering of high-reliability electrical connections", is that the ionic contamination shall be less than 1.56 µg/cm² NaCl equivalents. The relevance of the method used for measurement of the ionic contamination level, resistivity of solvent extract, has been questioned. Alternative methods are ion chromatography and measurement of surface insulation resistance, but these methods also have their drawbacks. These methods are first described and their advantages and drawbacks are discussed. This is followed by an experimental evaluation of the three methods, carried out by soldering test vehicles at four manufacturers of space electronics using their ordinary processes for soldering and cleaning printed board assemblies. The experimental evaluation showed that the ionic contamination added by the four assemblers was very small and well below the acceptance criterion in ECSS-Q-70-08. Ion-chromatography analysis showed that most of the ionic contamination on the cleaned assembled boards originated from the hot-oil fusing of the printed circuit boards. Also, the surface insulation resistance was higher on the assembled boards compared to the bare printed circuit boards. Since strongly activated fluxes are normally used when printed circuit boards are hot-oil fused, it is essential that they are thoroughly cleaned in order to achieve low contamination levels on the final printed-board assemblies.

  7. A method of thymic perfusion and its evaluation

    PubMed Central

    Ekwueme, O.

    1973-01-01

    The development and evaluation of a method of isolated ex vivo perfusion of the rabbit thymus using diluted autologous blood is described. The data indicate that the viability of the preparation is maintained at a satisfactory level during the period of perfusion. These results suggest that the isolated perfused thymus would be a useful new approach to studies of thymus function. PMID:4747584

  8. An analysis method for evaluating gradient-index fibers based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

    2011-05-01

    We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, optical sensors, and so on. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF), which is affected by manufacturing process conditions such as temperature. Experimentally, the MTF is measured using a square-wave chart and calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, but its results differ greatly from experiment. In this paper, we explain the manufacturing process factors that affect the performance of GRIN optical fibers and present a new evaluation method, based on the Monte Carlo method, that closely resembles the experimental system. MTF values for a GRIN optical fiber obtained with this method match the experimental results more closely than those of the conventional method.
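
    A minimal sketch of such a Monte Carlo MTF estimate, under simplifying assumptions that are ours rather than the paper's: paraxial rays in a parabolic index profile n(r) = n0(1 - g²r²/2) obey r'' = -g²r, the output positions are histogrammed into a line spread function, and its Fourier transform gives the MTF:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def grin_mtf(n_rays=200_000, g=0.6, length=4.0, r_max=0.5, bins=256):
        """Monte Carlo MTF estimate for a parabolic GRIN rod (units: mm).
        Paraxial solution: r(L) = r0 cos(gL) + (t0/g) sin(gL)."""
        r0 = np.zeros(n_rays)                  # point source on the axis
        t0 = rng.uniform(-0.2, 0.2, n_rays)    # launch slopes (rad)
        rL = r0 * np.cos(g * length) + (t0 / g) * np.sin(g * length)
        lsf, edges = np.histogram(rL, bins=bins, range=(-r_max, r_max),
                                  density=True)
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(bins, d=edges[1] - edges[0])   # cycles/mm
        return freqs, mtf

    freqs, mtf = grin_mtf()
    print(f"MTF at {freqs[5]:.1f} cyc/mm: {mtf[5]:.3f}")
    ```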

  9. Comparison of methods for quantitative evaluation of endoscopic distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua

    2015-03-01

    Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high-resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as in patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software, and the results of each were used to obtain corrected images, which were then compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality and assurance of performance during clinical use of endoscopic technologies.
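
    The local magnification (ML) idea can be sketched as the slope of the distorted-versus-true radius curve, normalized to the image centre. The finite-difference estimator and the grid radii below are illustrative assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def local_magnification(r_undistorted, r_distorted):
        """ML at each grid radius: slope of the distorted-vs-true radius
        curve, estimated by finite differences and normalized by the
        magnification at the image centre."""
        order = np.argsort(r_undistorted)
        ru = np.asarray(r_undistorted, float)[order]
        rd = np.asarray(r_distorted, float)[order]
        ml = np.gradient(rd, ru)          # d r_d / d r_u
        return ru, ml / ml[0]             # 1.0 at centre; <1 = barrel compression

    # Hypothetical radii (mm on target, pixels in image) from a grid target
    ru = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
    rd = [0.0, 39.6, 77.8, 112.9, 143.4, 168.0]   # fisheye: growth slows off-axis
    radii, ml = local_magnification(ru, rd)
    print(np.round(ml, 3))
    ```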

  10. SBS vs Inhouse Recycling Methods-An Invitro Evaluation

    PubMed Central

    Verma, Jaya Krishanan; Arun; Sundari, Shanta; Chandrasekhar, Shyamala; Kumar, Aravind

    2015-01-01

    Introduction In today's world of economic crisis it is not feasible for an orthodontist to replace each and every debonded bracket with a new one, prompting the quest for an alternative. The concept of recycling brackets for reuse has evolved over a period of time. An orthodontist can send brackets to commercial recycling companies, but this is impractical, as the procedures are complex and time-consuming; thereby, in-house methods have been developed. The aim of the study was to determine the shear bond strength (SBS) of brackets recycled by in-house methods and to compare and evaluate the efficiency of these methods against the SBS of new brackets. Materials and Methods Five in-house recycling procedures - the Adhesive Grinding, Sandblasting, Thermal Flaming, Buchman and Acid Bath methods - were used in the present study. In the initial part of the study, a UV/Vis spectrophotometer was used to compare the absorbance of the base of a new stainless steel bracket with that of a recycled bracket; the difference in UV absorbance can be attributed to the adhesive remnant, and was calculated for each recycling procedure. New and recycled stainless steel brackets were then tested for shear bond strength with an Instron testing machine, and the shear bond strength of new brackets was compared with that of recycled brackets. The last part of the study correlated the UV/Vis spectrophotometer findings with the shear bond strength for each recycling procedure. Results Among the recycled brackets, the Sandblasting technique showed the highest shear bond strength (19.789 MPa) and the Adhesive Grinding method the lowest (13.809 MPa). Conclusion The study concludes that sandblasting can be an effective choice among the 5 in-house recycling methods. PMID:26501002

  11. Best estimate method versus evaluation method: a comparison of two techniques in evaluating seismic analysis and design. Technical report

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-07-01

    The concept of how two techniques, the Best Estimate Method and the Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations to the model of a nuclear power plant structure, and demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  12. Evaluation of Five Decontamination Methods for Filtering Facepiece Respirators

    PubMed Central

    Bergman, Michael S.; Eimer, Benjamin C.; Shaffer, Ronald E.

    2009-01-01

    Concerns have been raised regarding the availability of National Institute for Occupational Safety and Health (NIOSH)-certified N95 filtering facepiece respirators (FFRs) during an influenza pandemic. One possible strategy to mitigate a respirator shortage is to reuse FFRs following a biological decontamination process to render infectious material on the FFR inactive. However, little data exist on the effects of decontamination methods on respirator integrity and performance. This study evaluated five decontamination methods [ultraviolet germicidal irradiation (UVGI), ethylene oxide, vaporized hydrogen peroxide (VHP), microwave oven irradiation, and bleach] using nine models of NIOSH-certified respirators (three models each of N95 FFRs, surgical N95 respirators, and P100 FFRs) to determine which methods should be considered for future research studies. Following treatment by each decontamination method, the FFRs were evaluated for changes in physical appearance, odor, and laboratory performance (filter aerosol penetration and filter airflow resistance). Additional experiments (dry heat laboratory oven exposures, off-gassing, and FFR hydrophobicity) were subsequently conducted to better understand material properties and possible health risks to the respirator user following decontamination. However, this study did not assess the efficiency of the decontamination methods to inactivate viable microorganisms. Microwave oven irradiation melted samples from two FFR models. The remainder of the FFR samples that had been decontaminated had expected levels of filter aerosol penetration and filter airflow resistance. The scent of bleach remained noticeable following overnight drying and low levels of chlorine gas were found to off-gas from bleach-decontaminated FFRs when rehydrated with deionized water. UVGI, ethylene oxide (EtO), and VHP were found to be the most promising decontamination methods; however, concerns remain about the throughput capabilities for EtO and VHP

  13. Flight-Test Evaluation of Flutter-Prediction Methods

    NASA Technical Reports Server (NTRS)

    Lind, RIck; Brenner, Marty

    2003-01-01

    The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
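
    The damping-trend method, the simplest of the data-based approaches above, can be sketched as a polynomial fit of modal damping against airspeed, extrapolated to the zero-damping crossing. The fit order and test data below are illustrative:

    ```python
    import numpy as np

    def flutter_speed_from_damping(speeds, damping, deg=2):
        """Fit the modal damping trend vs airspeed and extrapolate to the
        zero-damping crossing, the classical data-based flutter estimate."""
        coeffs = np.polyfit(speeds, damping, deg)
        roots = np.roots(coeffs)
        real = roots[np.isreal(roots)].real
        ahead = real[real > max(speeds)]      # crossing beyond tested envelope
        return ahead.min() if ahead.size else None

    # Damping ratios identified at five test points (illustrative data)
    speeds = [120, 140, 160, 180, 200]             # KEAS
    damping = [0.060, 0.055, 0.046, 0.033, 0.018]  # decreasing toward flutter
    print(flutter_speed_from_damping(speeds, damping))
    ```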

  14. [Evaluation in the health sector: concepts and methods].

    PubMed

    Contandriopoulos, A P; Champagne, F; Denis, J L; Avargues, M C

    2000-12-01

    what is good and right). Did the intervention correspond to what should have been done according to the standards utilized? Evaluative research aims to employ valid scientific methods to analyze relationships between different components of an intervention. More specifically, evaluation research can be classified into six types of analysis, which employ different research strategies. Strategic analysis allows appreciation of the pertinence of an intervention; logical analysis, the soundness of the theoretical and operational bases of the intervention; productivity analysis, the technical efficiency with which resources are mobilized to produce goods or services; analysis of effects, the effectiveness of goods and services in producing results; efficiency analysis, the relations between the costs of the resources (or the services) used and the results; implementation analysis, appreciation of interactions between the process of the intervention and the context of implementation in the production of effects. The official finalities of all evaluation processes are of four types: (1) strategic, to aid the planning and development of an intervention; (2) formative, to supply information to improve an intervention in progress; (3) summative, to determine the effects of an intervention (to decide if it should be maintained, transformed or suspended); (4) fundamental, to contribute to the advancement of empirical and theoretical knowledge regarding the intervention. In addition, experience acquired in the field of evaluation suggests that evaluation is also productive in that it allows actors, in an organized setting, to reconsider the links between the objectives given, the practices developed and their context of action. This task of achieving coherence is continuous and is one of the intrinsic conditions of action in an organized setting. In this perspective, evaluation can have a key role, given that it is not employed to legitimize new forms of control but rather to favor debate and

  15. DEM microfabrication technique and its applications in bioscience and microfluidic systems

    NASA Astrophysics Data System (ADS)

    Chen, Di; Yang, Fan; Tang, Min; Li, Yigui; Zhu, Jun; Zhang, Dacheng

    2001-10-01

    A new LIGA-like microfabrication technique was developed by the present authors. DEM (deep etching, electroforming and microreplication) is an abbreviation of the three main process steps in this new microfabrication technique. In contrast to the LIGA technique, DEM has the advantages of lower cost and a shorter process period. Microfluidic systems such as plastic capillary electrophoresis chips, micro flowmeters and three-dimensional DNA chips were developed using the DEM technique. DEM offers a new way to fabricate MEMS and MOEMS components.

  16. Evaluation of Methods to Estimate Understory Fruit Biomass

    PubMed Central

    Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.

    2014-01-01

    Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253

  17. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  18. Methods of compliance evaluation for ocean outfall design and analysis.

    PubMed

    Mukhtasor; Lye, L M; Sharp, J J

    2002-10-01

    Sewage discharge from an ocean outfall is subject to water quality standards, which are often stated in probabilistic terms. Monte Carlo simulation (MCS) has been used in the past to evaluate the ability of a designed outfall to meet water quality standards or compliance guidelines associated with sewage discharges. In this study, simpler and less computer-intensive probabilistic methods are considered. The probabilistic methods evaluated are the popular mean first-order second-moment (MFOSM) and the advanced first-order second-moment (AFOSM) methods. Available data from the Spaniard's Bay Outfall located on the east coast of Newfoundland, Canada, were used as inputs for a case study. Both methods were compared with results given by MCS. It was found that AFOSM gave a good approximation of the failure probability for total coliform concentration at points remote from the outfall. However, MFOSM was found to be better when considering only the initial dilutions between the discharge point and the surface. The reason for the different results may be the difference in complexity of the performance function in the two cases. This study does not recommend the use of AFOSM for failure analysis in ocean outfall design and analysis, because the analysis requires computational effort similar to MCS. With the advancement of computer technology, simulation techniques, and available software, and given its flexibility in handling complex situations, MCS is still the best choice for failure analysis of ocean outfalls when data or estimates for the parameters involved are available or can be assumed. PMID:12481920
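
    A minimal sketch contrasting MFOSM with MCS on a toy dilution performance function (the function, means and standard deviations are illustrative, not the Spaniard's Bay model):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def mfosm_beta(g, means, stds, eps=1e-6):
        """Mean-value FOSM: linearize g at the means; beta = mu_g / sigma_g."""
        means = np.asarray(means, float)
        stds = np.asarray(stds, float)
        g0 = g(means)
        grad = np.empty_like(means)
        for i in range(len(means)):
            x = means.copy()
            x[i] += eps
            grad[i] = (g(x) - g0) / eps
        return g0 / np.sqrt(np.sum((grad * stds) ** 2))

    # Toy performance function: failure when dilution Q_a/Q_e falls below
    # a required dilution S_req (illustrative, not the paper's model)
    S_req = 50.0
    g = lambda x: x[0] / x[1] - S_req          # x = (ambient flow, effluent flow)
    means, stds = [6000.0, 100.0], [1500.0, 15.0]

    beta = mfosm_beta(g, means, stds)
    samples = rng.normal(means, stds, size=(200_000, 2))
    p_mcs = np.mean(samples[:, 0] / samples[:, 1] < S_req)
    print(f"MFOSM beta = {beta:.2f};  MCS failure prob = {p_mcs:.4f}")
    ```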

  19. Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts

    NASA Astrophysics Data System (ADS)

    Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan

    2015-04-01

    bathymetric and topographic data, …) were gathered. Consequently, datasets were first assessed internally for both quality and accuracy, and then externally against each other to ensure consistency and gradual topographic/bathymetric transitioning along the limits of the datasets. The heterogeneous ages of the input data also stress the importance of taking into account the temporal variability of bathymetric features, especially in active areas (sandbanks, estuaries, channels). Locally, gaps between marine (hydrographic surveys) and terrestrial (topographic LIDAR) data have required the introduction of new interpolation methods and tools. Through these activities, the goal is to improve the production line: to enhance the tools and procedures used for processing, validating and qualifying bathymetric data, the data collection work, and the automation of the processing and integration steps used to build improved merged bathymetric and topographic DEMs. This work is supported by a French ANR program in the frame of "Investissements d'Avenir", under the grant ANR-11-RSNR-00023-01.

  20. A practical method to evaluate radiofrequency exposure of mast workers.

    PubMed

    Alanko, Tommi; Hietanen, Maila

    2008-01-01

    Assessment of occupational exposure to radiofrequency (RF) fields in telecommunication transmitter masts is a challenging task. For conventional field strength measurements using manually operated instruments, it is difficult to document the locations of measurements while climbing up a mast. Logging RF dosemeters worn by the workers, on the other hand, do not give any information about the location of the exposure. In this study, a practical method was developed and applied to assess mast workers' exposure to RF fields and the corresponding location. This method uses a logging dosemeter for personal RF exposure evaluation and two logging barometers to determine the corresponding height of the worker's position on the mast. The procedure is not intended to be used for compliance assessments, but to indicate locations where stricter assessments are needed. The applicability of the method is demonstrated by making measurements in a TV and radio transmitting mast. PMID:19054796
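
    Converting the two barometer logs to a height above ground is a direct application of the standard-atmosphere formula. A minimal sketch, assuming the loggers are time-synchronized with the dosemeter:

    ```python
    def height_from_pressure(p_worker_hpa, p_ground_hpa):
        """Height difference (m) between the worker's and ground loggers
        via the standard-atmosphere formula; adequate for mast heights."""
        def alt(p_hpa):
            return 44330.0 * (1.0 - (p_hpa / 1013.25) ** (1.0 / 5.255))
        return alt(p_worker_hpa) - alt(p_ground_hpa)

    # Worker's logger reads 1001.2 hPa while the ground unit reads 1013.0 hPa
    print(f"{height_from_pressure(1001.2, 1013.0):.1f} m above ground")
    ```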

  1. Evaluation of Methods to Predict Reactivity of Gold Nanoparticles

    SciTech Connect

    Allison, Thomas C.; Tong, Yu ye J.

    2011-06-20

    Several methods have appeared in the literature for predicting reactivity on metallic surfaces and on the surface of metallic nanoparticles. All of these methods have some relationship to the concept of frontier molecular orbital theory. The d-band theory of Hammer and Nørskov is perhaps the most widely used predictor of reactivity on metallic surfaces, and it has been successfully applied in many cases. Use of the Fukui function and the condensed Fukui function is well established in organic chemistry, but has not been so widely applied in predicting the reactivity of metallic nanoclusters. In this article, we will evaluate the usefulness of the condensed Fukui function in predicting the reactivity of a family of cubo-octahedral gold nanoparticles and make comparison with the d-band method.
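
    A minimal sketch of the condensed Fukui function computed from atomic partial charges of the N, N+1 and N-1 electron systems at fixed geometry; the charges below are hypothetical, not the paper's values:

    ```python
    import numpy as np

    def condensed_fukui(q_N, q_Nplus1, q_Nminus1):
        """Condensed Fukui functions from atomic partial charges:
          f+ = q(N) - q(N+1)   (susceptibility to nucleophilic attack)
          f- = q(N-1) - q(N)   (susceptibility to electrophilic attack)"""
        q_N = np.asarray(q_N, float)
        qp = np.asarray(q_Nplus1, float)
        qm = np.asarray(q_Nminus1, float)
        return q_N - qp, qm - q_N

    # Hypothetical Mulliken charges for a 4-atom cluster
    f_plus, f_minus = condensed_fukui(
        q_N=[0.02, -0.05, 0.01, 0.02],
        q_Nplus1=[-0.30, -0.25, -0.23, -0.22],
        q_Nminus1=[0.31, 0.20, 0.24, 0.25],
    )
    print("f+ per atom:", f_plus)   # largest value = most reactive site
    ```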

  2. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful for measuring a performance guarantee, assessing the health of the system, verifying a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, a number of subtleties associated with variations in weather and imperfect data collection complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.
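
    One of the simplest metrics such an energy performance evaluation builds on is the performance ratio: measured energy normalized by the energy implied by the nameplate rating and the plane-of-array insolation. The sketch below shows that calculation; it is a generic illustration with invented numbers, not the draft method's prescribed procedure.

      def performance_ratio(e_measured_kwh, p_stc_kw, poa_insolation_kwh_m2):
          """Performance ratio: measured energy over the energy the array would
          produce at nameplate efficiency under the measured plane-of-array
          insolation (G_STC = 1 kW/m^2)."""
          expected_kwh = p_stc_kw * poa_insolation_kwh_m2  # implicit / (1 kW/m^2)
          return e_measured_kwh / expected_kwh

      # e.g. a 1 MW system producing 135 MWh in a month with 170 kWh/m^2 POA
      print(performance_ratio(135_000, 1_000, 170))  # ~0.79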

  3. Evaluation of Two Fractal Methods for Magnetogram Image Analysis

    NASA Technical Reports Server (NTRS)

    Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.

    1997-01-01

    Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator of solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply these methods to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
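
    Of the two schemes evaluated, the Differential Box Counting method is well documented; a minimal version is sketched below, assuming a square image whose side is divisible by each grid size. The Jaenisch scheme is not reproduced here.

      import numpy as np

      def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
          """Differential box counting (DBC) estimate of the fractal dimension
          of a grayscale image (square, side divisible by each grid size)."""
          img = np.asarray(img, dtype=float)
          M = img.shape[0]
          G = img.max() - img.min() + 1e-12        # gray-level range
          log_n, log_inv_r = [], []
          for s in sizes:
              h = s * G / M                        # box height at this scale
              count = 0
              for i in range(0, M, s):
                  for j in range(0, M, s):
                      block = img[i:i+s, j:j+s]
                      count += int((block.max() - block.min()) // h) + 1
              log_n.append(np.log(count))
              log_inv_r.append(np.log(M / s))
          slope, _ = np.polyfit(log_inv_r, log_n, 1)   # D is the slope
          return slope

      # usage: D of a random 64x64 gray image falls between 2 and 3
      print(dbc_fractal_dimension(np.random.default_rng(1).random((64, 64))))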

  4. Statistical method of evaluation of flip-flop dynamical parameters

    NASA Astrophysics Data System (ADS)

    Wieczorek, P. Z.; Opalski, L. J.

    2008-01-01

    This paper presents a statistical algorithm and a measurement system for precise evaluation of flip-flop dynamical parameters in asynchronous operation. The analyzed flip-flop parameters are failure probability, MTBF, and propagation delay. It is shown how these parameters depend on the metastable operation of flip-flops. The numerical and hardware solutions shown in the article allow precise and reliable comparison of flip-flops. The influence of flip-flop electrical parameters on metastable operation can also be analyzed with the presented statistical method. Statistical estimation of the parameters of flip-flops in which metastability occurs appears more reliable than standard empirical methods of flip-flop analysis, and the presented method reveals inaccuracies in the theoretical model of metastability.
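
    The dependence of failure probability and MTBF on metastable resolution is usually captured by the classic synchronizer model MTBF = exp(t_r/tau) / (T_W * f_clk * f_data); the snippet below evaluates it with illustrative device constants, not values from the paper.

      import math

      def mtbf_seconds(t_resolve, tau, t_w, f_clk, f_data):
          """Classic metastability MTBF model: mean time between synchronizer
          failures for resolution time t_resolve, device constants tau and T_W,
          and clock/data rates (all SI units)."""
          return math.exp(t_resolve / tau) / (t_w * f_clk * f_data)

      # e.g. tau = 100 ps, T_W = 20 ps, 100 MHz clock, 1 MHz data rate
      print(mtbf_seconds(2e-9, 100e-12, 20e-12, 100e6, 1e6))  # ~2.4e5 s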

  5. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
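
    The weighted Monte Carlo sampling step can be illustrated generically: each sampled TALYS parameter set receives a weight from its fit to the experimental data, e.g. w_i proportional to exp(-chi2_i / 2). The helper below is that generic weighting, not Koning's actual implementation.

      import numpy as np

      def exfor_weights(chi2):
          """Generic Bayesian Monte Carlo weighting: each random model-parameter
          sample is weighted by its goodness of fit to the experimental data,
          w_i ~ exp(-chi2_i / 2)."""
          chi2 = np.asarray(chi2, dtype=float)
          w = np.exp(-0.5 * (chi2 - chi2.min()))   # shift for numerical stability
          return w / w.sum()

      # weighted means and covariances of sampled cross sections then follow from w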

  6. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results, although the predictions of analytic models based on finite-element computer analysis do not agree with the data in certain features. Experimental analyses of rod-type fatigue specimens, relating magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.

  7. Evaluation of Alternate Stainless Steel Surface Passivation Methods

    SciTech Connect

    Clark, Elliot A.

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of initially 50% H2-50% D2 gas with time using mass spectroscopy. Commercial passivation by electropolishing appears to result in surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service, and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may not result in surfaces that do not catalyze the isotope exchange reaction H2 + D2 → 2HD. These methods should not be considered to replace the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  8. The effect of DEM resolution on the computation of the factor of safety using an infinite slope model

    NASA Astrophysics Data System (ADS)

    Fuchs, Michael; Torizin, Jewgenij; Kühn, Friedrich

    2014-11-01

    The quality of digital elevation models (DEMs) is essential for reliable landslide susceptibility assessments. In this paper, two DEMs derived from ASTER (ASTER GDEM v.2, 30 m horizontal resolution) and TerraSAR-X (GeoElevation10, 10 m horizontal resolution) data are compared to study the effects of resolution on the derived slope and wetness index parameters in the application of the infinite slope model for the computation of the factor of safety. Several slope stability scenarios representing different wetness conditions with 5, 10 and 100 mm d⁻¹ of steady-state recharge were calculated for the eastern flank of Mount Rinjani Volcano on Lombok Island, Indonesia. Each scenario was conducted by computing the static factor of safety with mean values of the bulk density, angle of internal friction, cohesion, and failure depth parameters, as well as for their normally distributed values by Monte Carlo simulation. All scenarios were applied to both DEMs. The scenarios were evaluated by calculating the success/prediction rate using the respective area under the curve (AUC) and an existing landslide inventory. Additionally, uncertainties in the estimated positions of landslides were taken into account. Depending on the particular scenario, the success rate of the GeoElevation10 model shows differences of up to 3% compared to the ASTER GDEM model. This apparent improvement is mainly caused by the higher ground resolution of GeoElevation10. However, the success rate increases for the 10 mm d⁻¹ and decreases for the 100 mm d⁻¹ steady-state recharge conditions. Consequently, the more detailed flow direction in the GeoElevation10 DEM has the highest impact under conditions with lower water saturation. The slight improvement in total model quality shows that the higher resolution of the DEM has a small impact on poorly parameterized models, in which the material properties are described by roughly estimated parameters. Therefore, the application of a high
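
    A minimal version of the scenario machinery described, a static infinite-slope factor of safety with Monte Carlo sampling of soil parameters, is sketched below using a standard FS formulation. The parameter values and the wetness term m are illustrative assumptions, not the paper's calibrated inputs.

      import numpy as np

      def factor_of_safety(slope_rad, z, c, phi_rad, gamma=18e3,
                           gamma_w=9.81e3, m=0.5):
          """Static infinite-slope factor of safety (standard formulation,
          not the paper's exact parameterization). m = h_w/z is the saturated
          fraction of the failure depth z; units are SI (Pa, N/m^3, m, rad)."""
          sin_b, cos_b = np.sin(slope_rad), np.cos(slope_rad)
          resisting = c + (gamma * z - gamma_w * m * z) * cos_b**2 * np.tan(phi_rad)
          driving = gamma * z * sin_b * cos_b
          return resisting / driving

      # Monte Carlo over uncertain soil parameters, in the spirit of the scenarios
      rng = np.random.default_rng(0)
      fs = factor_of_safety(np.radians(35), z=2.0,
                            c=rng.normal(5e3, 1e3, 10_000),
                            phi_rad=np.radians(rng.normal(33, 3, 10_000)))
      print((fs < 1).mean())   # fraction of unstable realizations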

  9. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels was defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes, along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation is to develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060

  10. Evaluation of Perrhenate Spectrophotometric Methods in Bicarbonate and Nitrate Media.

    PubMed

    Lenell, Brian A; Arai, Yuji

    2016-04-01

    2-Pyridyl thiourea- and methyl-2-pyridyl ketoxime-based perrhenate, Re(VII), UV-vis spectrophotometric methods were evaluated in nitrate and bicarbonate solutions ranging from 0.001 M to 0.5 M. Standard curves at [Re] = 2.5-50 mg L(-1) for the Re(IV)-thiourea and Re-ketoxime complexes were constructed at 405 nm and 490 nm, respectively. Detection limits for the N-(2-pyridyl) thiourea and methyl-2-pyridyl ketoxime methods in ultrapure water are 3.06 mg/L and 4.03 mg/L, respectively. The influences of NaHCO3 and NaNO3 concentration on absorbance spectra, absorptivity, and linearity were documented. For both methods, samples in ultrapure water and NaHCO3 have an R(2) value > 0.99, indicating strong linear relationships. Statistical analysis supports that NaHCO3 does not affect linearity between standards for either method. NaNO3 causes major interference with the ketoxime method above 0.001 M NaNO3. The data provide information for the practical use of Re spectrophotometric methods in environmental media high in bicarbonate and nitrate. PMID:26838460
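
    The standard-curve construction and detection-limit estimate behind such an evaluation can be sketched as an ordinary least-squares fit with a 3.3·sigma/slope LOD (an ICH-style convention; the paper's exact LOD procedure is not stated in the abstract, and the data points below are invented).

      import numpy as np

      conc = np.array([2.5, 5, 10, 25, 50])            # mg/L Re standards (made up)
      absorb = np.array([0.021, 0.043, 0.088, 0.219, 0.436])

      slope, intercept = np.polyfit(conc, absorb, 1)
      pred = slope * conc + intercept
      resid_sd = np.sqrt(np.sum((absorb - pred) ** 2) / (len(conc) - 2))

      r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)
      lod = 3.3 * resid_sd / slope                      # ICH-style detection limit
      print(f"R^2 = {r2:.4f}, LOD = {lod:.2f} mg/L")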

  11. A method to evaluate hydraulic fracture using proppant detection.

    PubMed

    Liu, Juntao; Zhang, Feng; Gardner, Robin P; Hou, Guojing; Zhang, Quanying; Li, Hu

    2015-11-01

    Accurate determination of proppant placement and propped fracture height are important for evaluating and optimizing stimulation strategies. A technology using non-radioactive proppant and a pulsed neutron gamma energy spectra logging tool to determine the placement and height of propped fractures is proposed. Gd2O3 was incorporated into ceramic proppant, and a Monte Carlo method was utilized to build the logging tool and formation models. The characteristic responses of the recorded information of different logging tools to fracture widths, proppant concentrations, and influencing factors were studied. The results show that Gd capture gamma rays can be used to evaluate propped fractures, with higher sensitivity to changes in fracture width and traceable proppant content than the existing non-radioactive proppant evaluation techniques; only an after-fracture measurement is needed for the new method. Changes in gas saturation and borehole size have a great impact on determining propped fractures when compensated neutron and pulsed neutron capture tools are used. A field example is presented to validate the application of the new technique. PMID:26296059

  12. Research on the Comparability of Multi-attribute Evaluation Methods for Academic Journals

    NASA Astrophysics Data System (ADS)

    Liping, Yu

    This paper first constructs a classification framework for multi-attribute evaluation methods oriented to academic journals, and then discusses theoretically the comparability of the vast majority of non-linear evaluation methods and the majority of linear evaluation methods, taking the TOPSIS method as an example and using evaluation data on agricultural journals as a validation exercise. The analysis shows that the comparability of evaluation methods for academic journals deserves serious attention; that evaluation objectives are closely related to the choice of evaluation methods and also bear on their comparability; that specialized journal-evaluation organizations should release their evaluation data, methods, and results to the extent possible; and that only purely subjective evaluation methods have broad comparability.
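
    Since the paper takes TOPSIS as its example, a compact implementation of the plain TOPSIS ranking is sketched below; the indicator matrix, weights, and benefit flags are placeholder values.

      import numpy as np

      def topsis(X, w, benefit):
          """Plain TOPSIS ranking: X is (journals x indicators), w the weights,
          benefit[j] True if larger is better for indicator j."""
          X = np.asarray(X, dtype=float)
          V = X / np.linalg.norm(X, axis=0) * w        # weighted, vector-normalized
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_best = np.linalg.norm(V - ideal, axis=1)
          d_worst = np.linalg.norm(V - nadir, axis=1)
          return d_worst / (d_best + d_worst)          # closeness, higher = better

      scores = topsis([[2.1, 150, 0.8], [1.4, 300, 0.6], [3.0, 90, 0.9]],
                      w=np.array([0.5, 0.3, 0.2]),
                      benefit=np.array([True, True, True]))
      print(scores.argsort()[::-1])   # ranking of the three journals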

  13. Review and evaluation of metallic TRU nuclear waste consolidation methods

    SciTech Connect

    Montgomery, D.R.; Nesbitt, J.F.

    1983-08-01

    The US Department of Energy established the Commercial Waste Treatment Program to develop, demonstrate, and deploy waste treatment technology. In this report, viable methods are identified that could consolidate the volume of metallic wastes generated in a fuel reprocessing facility. The purpose of this study is to identify, evaluate, and rate processes that have been or could be used to reduce the volume of contaminated/irradiated metallic waste streams and to produce an acceptable waste form in a safe and cost-effective process. A technical comparative evaluation of various consolidation processes was conducted, and these processes were rated as to the feasibility and cost of producing a viable product from a remotely operated radioactive process facility. Out of the wide variety of melting concepts and consolidation systems that might be applicable for consolidating metallic nuclear wastes, the following processes were selected for evaluation: inductoslay melting, rotating nonconsumable electrode melting, plasma arc melting, electroslag melting with two nonconsumable electrodes, vacuum coreless induction melting, and cold compaction. Each process was evaluated and rated on the criteria of complexity of process, state and type of development required, safety, process requirements, and facility requirements. It was concluded that the vacuum coreless induction melting process is the most viable process to consolidate nuclear metallic wastes. 11 references.

  14. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  15. Spectrophotometric methods for the evaluation of acidity constants-I Numerical methods for single equilibria.

    PubMed

    Asuero, A G; Navas, M J; Jiminez-Trillo, J L

    1986-02-01

    The spectrophotometric methods applicable to the numerical evaluation of acidity constants of monobasic acids are briefly reviewed. The equations are presented in a form suitable for easy calculation with a programmable pocket calculator. The aim of this paper is to cover a gap in the analytical education literature. PMID:18964064
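
    For a single monobasic equilibrium, the classical relation these numerical methods build on can be evaluated directly: pKa = pH + log10((A_A - A)/(A - A_HA)), assuming the limiting absorbances of the pure acid (A_HA) and base (A_A) forms are known. The function below is that textbook relation with made-up absorbances.

      import math

      def pKa_from_absorbance(pH, A, A_HA, A_A):
          """Single-equilibrium spectrophotometric pKa: A_HA and A_A are the
          limiting absorbances of the pure acid and base forms at the analytical
          wavelength, A the absorbance measured at the given pH."""
          return pH + math.log10((A_A - A) / (A - A_HA))

      # at the half-neutralization point A is midway, so pKa equals pH
      print(pKa_from_absorbance(pH=4.70, A=0.45, A_HA=0.10, A_A=0.80))  # 4.70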

  16. New aspects for the evaluation of radioactive waste disposal methods

    SciTech Connect

    Seiler, F.A.; Alvarez, J.L.

    1996-12-31

    For the performance assessment of radioactive and hazardous waste disposal sites, risk assessments are usually performed for the long term, i.e., over an interval in space and time for which one can predict movement and behavior of toxic agents in the environment. This approach is based on at least three implicit assumptions: One, that the engineering layout will take care of the immediate endangerment of potential receptors; two, that one has carefully evaluated just how far out in space and time the models can be extrapolated, and three, that one can evaluate potential health effects for very low exposures. A few of these aspects will be discussed here in the framework of the scientific method.

  17. In vivo method for the evaluation of catheter thrombogenicity

    SciTech Connect

    Solomon, D.D.; Arnold, W.L.; Martin, N.D.; Lentz, D.J.

    1987-01-01

    A new method has been developed to evaluate the relative thrombogenicity of vascular catheters. The technique provides a means to quantitatively differentiate between catheters made from different polymeric materials. Autologous In-111 labeled platelets were infused into a dog model and catheters were then inserted into the external jugular vein of the dog. The neck region was scanned using gamma camera imaging. Comparisons between catheter materials were made using computer generated uptake slopes during the first 40 min of the scan. In addition to scintigraphy, visual assessment of thrombus deposition, thrombus weight, platelet deposition, and scanning electron microscopy were used to validate the technique. Poly(vinyl chloride), polyurethane, heparinized polyurethane, and silicone catheter materials were tested. It was found that heparinized polyurethane was the least thrombogenic of all materials evaluated.

  18. Evaluation of a proposed method for representing drug terminology.

    PubMed

    Cimino, J J; McNamara, T J; Meredith, T; Broverman, C A; Eckert, K C; Moore, M; Tyree, D J

    1999-01-01

    In the absence of a single, standard, multipurpose terminology for representing medications, the HL7 Vocabulary Technical Committee has sought to develop a model for such terms in a way that will provide a unified method for representing them and supporting interoperability among various terminology systems. We evaluated the preliminary model by obtaining terms, represented in our model, from three leading vendors of pharmacy system knowledge bases. A total of 2303 terms were obtained, and 3982 pair-wise comparisons were possible. We found that the components of the term descriptions matched 68-87% of the time and that the overall descriptions matched 53% of the time. The evaluation has identified a number of areas in the model where more rigorous definitions will be needed in order to improve the matching rate. This paper discusses the implications of these results. PMID:10566318

  19. Helicobacter pylori identification: a diagnostic/confirmatory method for evaluation.

    PubMed

    Mesquita, B; Gonçalves, M J; Pacheco, P; Lopes, J; Salazar, F; Relvas, M; Coelho, C; Pacheco, J J; Velazco, C

    2014-09-01

    The extragastric reservoir of Helicobacter pylori is probably the oral cavity. To evaluate the presence of this bacterium, saliva was collected from patients with periodontitis and suspicious microbial cultures and from non-periodontitis subjects. PCRs targeting the 16S rRNA gene and an 860 bp specific region were performed and digested with the restriction enzyme DdeI. We observed that the PCR-RFLP approach increases the accuracy from 26.2 % (16/61), found with the PCR-based results, to 42.6 % (26/61), which is an excellent indicator for the establishment of this low-cost procedure as a diagnostic/confirmatory method for H. pylori evaluation. PMID:24715050

  20. Evaluation of videodisc modules: a mixed method approach.

    PubMed Central

    Parkhurst, P. E.; Lovell, K. L.; Sprafka, S. A.; Hodgins, M.

    1991-01-01

    The purpose of this study was to evaluate the design and implementation of 10 neuropathology interactive videodisc instructional (IVI) modules used by Michigan State University medical students in the College of Osteopathic Medicine and the College of Human Medicine. The evaluation strategy incorporated a mixed method approach using qualitative and quantitative data to examine levels of student acceptance for the modules; ways in which IVI modules accommodate different learner styles; and to what extent the modules facilitate the attainment of higher level learning objectives. Students rated the units highly for learning effectiveness; many students reported group interaction as beneficial; and students expressed a desire for more IVI in the curriculum. The paper concludes with recommendations for future use of interactive videodisc technology in the teaching/learning process. PMID:1807704

  1. [Possible methods for evaluating bone density in the maxillofacial region].

    PubMed

    Koppány, Ferenc; Joób-Fancsaly, Arpád; Szabo, György

    2007-04-01

    Bone densitometry is a commonly used procedure in general medicine to measure the mineral content of bone. The method helps establish an early diagnosis of metabolic diseases of the bone (especially osteoporosis), which greatly decreases the incidence of pathological fractures. Recent studies have shown a significant correlation between optical densitometric evaluations of the jaws and the densitometric figures of other bones of the skeleton (spine, hip). These results point to a possible role for the dentist in the early diagnosis of osteoporosis. The current methods in general medicine are based on the measurement of photon and X-ray absorption followed by computerized analysis (single photon absorptiometry, single energy X-ray absorptiometry, dual photon absorptiometry, dual energy X-ray absorptiometry). Besides these techniques, ultrasound attenuation detection (quantitative ultrasound) and computed tomographic approaches are also widespread. Methods utilizing panoramic X-ray films are also being used for densitometric evaluations. The results given by these measurements seem promising for detecting the early signs of osteoporosis. PMID:17546899

  2. An evaluation study of EPA Method 8. Final report

    SciTech Connect

    Knoll, J.E.; Midgett, M.R.

    1980-03-01

    Techniques used in EPA Method 8, the source test method for acid mist and sulfur dioxide emissions from sulfuric acid plants, have been evaluated. Evidence is shown that trace amounts of peroxides in isopropyl alcohol result in the conversion of sulfur dioxide to sulfate and cause positive errors in acid mist values. Methods for measuring and purifying IPA are described. No conversion of sulfur dioxide to sulfate on filters or filter supports was observed. Collection efficiencies of train components are described, and two alternate indicators are evaluated. The use of solid ammonium sulfate as an audit sample is discussed. Field testing is also described, in which paired-probe techniques were employed. These showed that, when sulfur trioxide is absent from the effluent streams, acid mist is efficiently collected by a single filter, even when the isopropyl alcohol-containing impinger is eliminated. Both ammonia and dimethyl aniline, which are employed as gas scrubbers, cause sulfur dioxide to be retained in the isopropyl alcohol and result in large positive interferences in acid mist values. Ferric oxide, present in the effluents of steel pickling operations, causes a large negative interference in acid mist values.

  3. Quantitative methods for somatosensory evaluation in atypical odontalgia.

    PubMed

    Porporatti, André Luís; Costa, Yuri Martins; Stuginski-Barbosa, Juliana; Bonjardim, Leonardo Rigoldi; Conti, Paulo César Rodrigues; Svensson, Peter

    2015-01-01

    A systematic review was conducted to identify reliable somatosensory evaluation methods for atypical odontalgia (AO) patients. The computerized search included the main databases (MEDLINE, EMBASE, and Cochrane Library). The studies included used the following quantitative sensory testing (QST) methods: mechanical detection threshold (MDT), mechanical pain threshold (MPT) (pinprick), pressure pain threshold (PPT), dynamic mechanical allodynia with a cotton swab (DMA1) or a brush (DMA2), warm detection threshold (WDT), cold detection threshold (CDT), heat pain threshold (HPT), cold pain threshold (CPT), and/or wind-up ratio (WUR). The publications meeting the inclusion criteria revealed that only the mechanical allodynia tests (DMA1, DMA2, and WUR) were significantly higher, and the heat pain threshold (HPT) significantly lower, on the affected side compared with the contralateral side in AO patients; for MDT, MPT, PPT, CDT, and WDT the results were not significant. These data support the presence of central sensitization features, such as allodynia and temporal summation. In contrast, considerable inconsistencies between studies were found when AO patients were compared with healthy subjects. In clinical settings, the most reliable evaluation method for AO in patients with persistent idiopathic facial pain would be intraindividual assessment using HPT or mechanical allodynia tests. PMID:25627886

  4. Evaluation of pediatric manual wheelchair mobility using advanced biomechanical methods.

    PubMed

    Slavens, Brooke A; Schnorenberg, Alyssa J; Aurit, Christine M; Graf, Adam; Krzak, Joseph J; Reiners, Kathryn; Vogel, Lawrence C; Harris, Gerald F

    2015-01-01

    There is minimal research of upper extremity joint dynamics during pediatric wheelchair mobility despite the large number of children using manual wheelchairs. Special concern arises with the pediatric population, particularly in regard to the longer duration of wheelchair use, joint integrity, participation and community integration, and transitional care into adulthood. This study seeks to provide evaluation methods for characterizing the biomechanics of wheelchair use by children with spinal cord injury (SCI). Twelve subjects with SCI underwent motion analysis while they propelled their wheelchair at a self-selected speed and propulsion pattern. Upper extremity joint kinematics, forces, and moments were computed using inverse dynamics methods with our custom model. The glenohumeral joint displayed the largest average range of motion (ROM) at 47.1° in the sagittal plane and the largest average superiorly and anteriorly directed joint forces of 6.1% BW and 6.5% BW, respectively. The largest joint moments were 1.4% body weight times height (BW × H) of elbow flexion and 1.2% BW × H of glenohumeral joint extension. Pediatric manual wheelchair users demonstrating these high joint demands may be at risk for pain and upper limb injuries. These evaluation methods may be a useful tool for clinicians and therapists for pediatric wheelchair prescription and training. PMID:25802860

  5. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogenous hidden Markov model and an analogue based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.

  7. A method for evaluating horizontal well pumping tests.

    PubMed

    Langseth, David E; Smyth, Andrew H; May, James

    2004-01-01

    Predicting the future performance of horizontal wells under varying pumping conditions requires estimates of basic aquifer parameters, notably transmissivity and storativity. For vertical wells, there are well-established methods for estimating these parameters, typically based on either the recovery from induced head changes in a well or from the head response in observation wells to pumping in a test well. Comparable aquifer parameter estimation methods for horizontal wells have not been presented in the ground water literature. Formation parameter estimation methods based on measurements of pressure in horizontal wells have been presented in the petroleum industry literature, but these methods have limited applicability for ground water evaluation and are based on pressure measurements in only the horizontal well borehole, rather than in observation wells. This paper presents a simple and versatile method by which pumping test procedures developed for vertical wells can be applied to horizontal well pumping tests. The method presented here uses the principle of superposition to represent the horizontal well as a series of partially penetrating vertical wells. This concept is used to estimate a distance from an observation well at which a vertical well that has the same total pumping rate as the horizontal well will produce the same drawdown as the horizontal well. This equivalent distance may then be associated with an observation well for use in pumping test algorithms and type curves developed for vertical wells. The method is shown to produce good results for confined aquifers and unconfined aquifers in the absence of delayed yield response. For unconfined aquifers, the presence of delayed yield response increases the method error. PMID:15457792
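
    The superposition idea is easy to sketch: represent the horizontal well as a series of vertical wells sharing the total rate, sum their Theis drawdowns at the observation point, then solve for the distance at which a single vertical well pumping the total rate reproduces that drawdown. The code below does this for a confined aquifer with illustrative parameters; it ignores partial penetration, which the paper does treat.

      import numpy as np
      from scipy.special import exp1          # Theis well function W(u) = E1(u)
      from scipy.optimize import brentq

      def theis_s(Q, T, S, r, t):
          """Theis drawdown for a fully penetrating vertical well (SI units)."""
          u = r**2 * S / (4 * T * t)
          return Q / (4 * np.pi * T) * exp1(u)

      # Horizontal well as n vertical wells sharing the total rate Q (a sketch
      # of the superposition idea, with made-up aquifer parameters).
      Q, T, S, t = 0.01, 1e-3, 1e-4, 86400.0  # m3/s, m2/s, -, s
      xs = np.linspace(0, 200, 21)            # 200 m lateral split into segments
      obs = np.array([100.0, 50.0])           # observation well location, m
      r_i = np.hypot(xs - obs[0], obs[1])
      s_h = sum(theis_s(Q / len(xs), T, S, r, t) for r in r_i)

      # Equivalent distance: the r at which one vertical well pumping Q
      # produces the same drawdown (simple root find).
      r_eq = brentq(lambda r: theis_s(Q, T, S, r, t) - s_h, 1e-2, 1e4)
      print(f"equivalent distance = {r_eq:.1f} m")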

  8. Evaluation of the Mercy weight estimation method in Ouelessebougou, Mali

    PubMed Central

    2014-01-01

    Background: This study evaluated the performance of a new weight estimation strategy (Mercy Method) against four existing weight-estimation methods (APLS, ARC, Broselow, and Nelson) in children from Ouelessebougou, Mali. Methods: Otherwise healthy children, 2 mos to 16 yrs, were enrolled, and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights. Agreement between estimated and actual weight was determined using Bland-Altman plots with log transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), root mean square error (RMSE), and the percentage predicted within 10%, 20%, and 30% of actual weight. Results: 473 children (8.1 ± 4.8 yr, 25.1 ± 14.5 kg, 120.9 ± 29.5 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = 0.97 vs. 0.80-0.94). The MM also demonstrated the lowest ME (0.06 vs. 0.92-4.1 kg), MPE (1.6 vs. 7.8-19.8%), and RMSE (2.6 vs. 3.0-6.7). Finally, the MM estimated weight within 20% of actual for nearly all children (97%), as opposed to 50-69% for the other methods. Conclusions: The MM performed extremely well in Malian children, with performance characteristics comparable to those observed for the U.S. and India, and could be used in sub-Saharan African children without modification, extending the utility of this weight estimation strategy. PMID:24650051

  9. Evaluation of Uranium Measurements in Water by Various Methods - 13571

    SciTech Connect

    Tucker, Brian J.; Workman, Stephen M.

    2013-07-01

    In December 2000, EPA amended its drinking water regulations for radionuclides by adding a Maximum Contaminant Level (MCL) for uranium (the so-called MCL Rule) [1] of 30 micrograms per liter (μg/L). The MCL Rule also included MCL goals of zero for uranium and other radionuclides. Many radioactively contaminated sites must test for uranium in wastewater and groundwater to comply with the MCL Rule as well as with local publicly owned treatment works discharge limitations. This paper addresses the relative sensitivity, accuracy, precision, cost, and comparability of two EPA-approved methods for detection of total uranium: inductively coupled plasma/mass spectrometry (ICP-MS) and alpha spectrometry. Both methods are capable of measuring the individual uranium isotopes U-234, U-235, and U-238, and both have been deemed acceptable by EPA. However, U-238 is by far the primary contributor to the mass-based ICP-MS measurement, especially for naturally occurring uranium, which contains 99.2745% U-238. An evaluation shall be performed relative to the regulatory requirement promulgated by EPA in December 2000. Data will be garnered from various client sample results measured by ALS Laboratory in Fort Collins, CO. Data shall include method detection limits (MDL), minimum detectable activities (MDA), means and trends in laboratory control sample results, performance evaluation data for all methods, and replicate results. In addition, a comparison will be made of sample analysis results obtained from both alpha spectrometry and the screening method Kinetic Phosphorescence Analysis (KPA) performed at the U.S. Army Corps of Engineers (USACE) FUSRAP Maywood Laboratory (UFML). Many uranium measurements occur in laboratories that only perform radiological analysis. This work is important because it shows that uranium can be measured in radiological as well as stable chemistry laboratories, and it provides several criteria as a basis for comparison of the two uranium test methods. This data will

  10. Further evaluation of the constrained least squares electromagnetic compensation method

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1991-01-01

    Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
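
    The constrained least squares step itself can be illustrated generically: minimize the misfit between the compensated pattern and the desired one, subject to equality constraints on the feed weights, via the KKT system. The solver below is that generic sketch (a nonsingular KKT matrix is assumed); the mapping of A, b, C, d to the 15-meter reflector problem is hypothetical.

      import numpy as np

      def constrained_lsq(A, b, C, d):
          """Equality-constrained least squares via the KKT system:
          minimize ||A w - b||^2 subject to C w = d. Here A would map feed
          weights to far-field samples, b the desired (undistorted) pattern,
          and C/d constraints such as fixed total excitation (illustrative)."""
          n = A.shape[1]
          K = np.block([[2 * A.conj().T @ A, C.conj().T],
                        [C, np.zeros((C.shape[0], C.shape[0]))]])
          rhs = np.concatenate([2 * A.conj().T @ b, d])
          sol = np.linalg.solve(K, rhs)
          return sol[:n]   # feed excitation weights

      # demo: 3 feed weights, 5 pattern samples, weights constrained to sum to 1
      rng = np.random.default_rng(0)
      A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
      w = constrained_lsq(A, b, C=np.ones((1, 3)), d=np.array([1.0]))
      print(w, w.sum())   # the sum is 1 up to rounding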

  11. Method for evaluating moisture tensions of soils using spectral data

    NASA Technical Reports Server (NTRS)

    Peterson, John B. (Inventor)

    1982-01-01

    A method is disclosed which permits evaluation of soil moisture utilizing remote sensing. Spectral measurements at a plurality of different wavelengths are taken with respect to sample soils, and the resulting bidirectional reflectance factor (BRF) measurements are submitted to regression analysis to develop prediction equations from the orderly relationships observed. Soil of unknown reflectance and unknown soil moisture tension is thereafter analyzed for bidirectional reflectance, and the resulting data are used to determine the soil moisture tension of the soil as well as to predict the bidirectional reflectance of the soil at other moisture tensions.

  12. In vitro evaluation method for screening of candidate prebiotic foods.

    PubMed

    Date, Yasuhiro; Nakanishi, Yumiko; Fukuda, Shinji; Nuijima, Yumi; Kato, Tamotsu; Umehara, Mikihisa; Ohno, Hiroshi; Kikuchi, Jun

    2014-01-01

    The aim of this work was to develop a simple and rapid in vitro evaluation method for screening and discovery of uncharacterised and untapped prebiotic foods. Using a NMR-based metabolomic approach coupled with multivariate statistical analysis, the metabolic profiles generated by intestinal microbiota after in vitro incubation with feces were examined. The viscous substances of Japanese bunching onion (JBOVS) were identified as one of the candidate prebiotic foods by this in vitro screening method. The JBOVS were primarily composed of sugar components, especially fructose-based carbohydrates. Our results suggested that ingestion of JBOVS contributed to lactate and acetate production by the intestinal microbiota, and were accompanied by an increase in the Lactobacillus murinus and Bacteroidetes sp. populations in the intestine and fluctuation of the host-microbial co-metabolic process. Therefore, our approach should be useful as a rapid and simple screening tool for potential prebiotic foods. PMID:24444934

  13. Non-destructive evaluation method employing dielectric electrostatic ultrasonic transducers

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Cantrell, Jr., John H. (Inventor)

    2003-01-01

    An acoustic nonlinearity parameter (β) measurement method and system for Non-Destructive Evaluation (NDE) of materials and structural members novelly employs a loosely mounted dielectric electrostatic ultrasonic transducer (DEUT) to receive and convert ultrasonic energy into an electrical signal which can be analyzed to determine the β of the test material. The dielectric material is ferroelectric with a high dielectric constant ε. A computer-controlled measurement system coupled to the DEUT contains an excitation signal generator section and a measurement and analysis section. As a result, the DEUT measures the absolute particle displacement amplitudes in the test material, leading to derivation of the nonlinearity parameter (β) without the costly, low-field-reliability methods of the prior art.
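
    For context, the textbook second-harmonic relation from which absolute-amplitude measurements yield β is shown below (fundamental amplitude A1, second-harmonic amplitude A2, wavenumber k, propagation distance x); the patent's own derivation is not reproduced.

      \beta = \frac{8 A_2}{k^{2}\, x\, A_1^{2}}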

  14. Infrared non-destructive evaluation method and apparatus

    DOEpatents

    Baleine, Erwan; Erwan, James F; Lee, Ching-Pang; Stinelli, Stephanie

    2014-10-21

    A method of nondestructive evaluation and related system. The method includes arranging a test piece (14) having an internal passage (18) and an external surface (15) and a thermal calibrator (12) within a field of view (42) of an infrared sensor (44); generating a flow (16) of fluid characterized by a fluid temperature; exposing the test piece internal passage (18) and the thermal calibrator (12) to fluid from the flow (16); capturing infrared emission information of the test piece external surface (15) and of the thermal calibrator (12) simultaneously using the infrared sensor (44), wherein the test piece infrared emission information includes emission intensity information, and wherein the thermal calibrator infrared emission information includes a reference emission intensity associated with the fluid temperature; and normalizing the test piece emission intensity information against the reference emission intensity.

  15. Evaluation Method for Accessibility to Hollow Space of Carbon Nanotubes

    NASA Astrophysics Data System (ADS)

    Fan, Jing; Yudasaka, Masako; Miyawaki, Jin; Iijima, Sumio

    2004-03-01

    For application of single-wall carbon nanotubes (SWNTs) and nanohorns (SWNHs) as material-storage media, holes are usually opened by oxidation, through which materials enter the hollow space of the tubes. The holes are known to pass only molecules with diameters smaller than those of the holes, so molecular-size-selective storage of gases inside the tubes becomes possible [1]. To enhance the utility of the inner hollow space of carbon nanotubes, controlled opening of holes is important. This is especially so for SWNHs, because holes with various diameters are potentially available due to the various types of defects on the tube walls. We studied methods of hole-opening for SWNHs and, at the same time, developed simple methods for evaluating the sizes of the holes and the volumes of the inner hollow spaces. [1] Murata et al. J. Phys. Chem.

  16. A FEM-DEM technique for studying the motion of particles in non-Newtonian fluids. Application to the transport of drill cuttings in wellbores

    NASA Astrophysics Data System (ADS)

    Celigueta, Miguel Angel; Deshpande, Kedar M.; Latorre, Salvador; Oñate, Eugenio

    2016-04-01

    We present a procedure for coupling the finite element method (FEM) and the discrete element method (DEM) for analysis of the motion of particles in non-Newtonian fluids. Particles are assumed to be spherical and immersed in the fluid mesh. A new method for computing the drag force on the particles in a non-Newtonian fluid is presented. A drag force correction for non-spherical particles is proposed. The FEM-DEM coupling procedure is explained for Eulerian and Lagrangian flows, and the basic expressions of the discretized solution algorithm are given. The usefulness of the FEM-DEM technique is demonstrated in its application to the transport of drill cuttings in wellbores.
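
    A generic form of a drag law for a sphere in a power-law fluid, the kind of closure the new drag-force computation addresses, is sketched below using a generalized Reynolds number and a creeping-flow correction factor X(n). The specific X(n) here is a placeholder (it reduces to 1 for a Newtonian fluid, n = 1) and is not the paper's derived law; published correlations should be substituted.

      import numpy as np

      def drag_force_power_law(d, v_rel, rho_f, K, n):
          """Drag on a sphere in a power-law fluid (consistency K, index n),
          assuming the creeping-flow regime Re_pl << 1; a generic closure for
          illustration, not the FEM-DEM paper's drag law."""
          re_pl = rho_f * np.abs(v_rel) ** (2 - n) * d ** n / K  # generalized Re
          X = 3.0 ** ((3 * n - 3) / 2)       # placeholder correction, X(1) = 1
          cd = 24.0 / re_pl * X              # Stokes-like drag coefficient
          area = np.pi * d**2 / 4
          return 0.5 * rho_f * cd * area * v_rel * np.abs(v_rel)

      # e.g. a 3 mm cutting settling at 1 cm/s in a shear-thinning mud
      print(drag_force_power_law(3e-3, 0.01, 1200.0, K=0.5, n=0.7))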

  17. Scatterscore: A reconnaissance method to evaluate changes in water quality

    SciTech Connect

    Kim, A.G.; Cardone, C.R.

    2005-12-01

    Water quality data collected in periodic monitoring programs are often difficult to evaluate, especially if the number of parameters is large, the sampling schedule varies, and values span different orders of magnitude. The Scatterscore Water Quality Evaluation was developed to yield a quantitative score, based on all measured variables in periodic water quality reports, indicating positive, negative, or random change. This new methodology calculates a reconnaissance score based on the differences between up-gradient (control) and down-gradient (treatment) water quality data sets. All parameters measured over a period of time at two or more sampling points are compared. The relationship between the ranges of measured values and the ratio of the medians for each parameter produces a data point that falls into one of four sections on a scattergram. The number and average values of positive, negative, and random change points are used to calculate a Scatterscore that indicates the magnitude and direction of overall change in water quality. The Scatterscore Water Quality Evaluation, a reconnaissance method to track general changes, has been applied to 20 sites at which coal utilization by-products (CUB) were used to control acid mine drainage (AMD).
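
    One reading of the scattergram construction can be sketched as follows: for each parameter, pair the down-gradient/up-gradient median ratio with a measure of how much the two data ranges overlap, then classify the resulting point. This is an interpretation of the abstract, not the authors' code, and the overlap measure is an assumption.

      import numpy as np

      def scatter_points(up, down):
          """Per-parameter points for a Scatterscore-style comparison: the
          ratio of down-gradient to up-gradient medians vs. the overlap of
          their ranges (a reading of the abstract, not Kim and Cardone's code).
          up and down are dicts of parameter name -> list of measurements."""
          pts = {}
          for name in up:
              u = np.asarray(up[name], dtype=float)
              d = np.asarray(down[name], dtype=float)
              median_ratio = np.median(d) / np.median(u)
              # range overlap: 0 when the two data ranges are disjoint
              lo, hi = max(u.min(), d.min()), min(u.max(), d.max())
              span = max(u.max(), d.max()) - min(u.min(), d.min())
              pts[name] = (median_ratio, max(0.0, hi - lo) / span)
          return pts

      # parameters with ratio >> 1 and little overlap suggest real degradation
      print(scatter_points({"Fe": [0.2, 0.3, 0.25]}, {"Fe": [0.9, 1.1, 1.0]}))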

  18. Evaluating nursing outcomes: a mixed-methods approach.

    PubMed

    Lane-Tillerson, Crystal; Davis, Bertha L; Killion, Cheryl M; Baker, Spencer

    2005-12-01

    Being overweight is regarded as the most common nutritional disorder of children and adolescents in the United States. The escalating problem of being overweight or being obese in our society indicates the need for treatment strategies that encompass an all-inclusive approach. Moreover, these strategies need to be comprehensively evaluated for their effectiveness. Nurses are in an excellent position to ensure that this occurs. The purpose of this study was to determine whether using a mixed-methods approach was an efficacious way to provide a comprehensive evaluation of the behavior modification benefits of a weight loss/weight management nursing intervention in African-American adolescent girls (13-17 years of age). The overall effectiveness of the intervention was evaluated by analyzing pre- and post-program measures of weight, body mass index, cholesterol, blood pressure, self-esteem, depression, and body image (quantitative data); conducting focus groups with mothers of the participants; and administering open-ended, written questionnaires to the participants (qualitative data). Findings from the quantitative data indicated favorable outcomes in weight, blood pressure, cholesterol, body mass index, self-esteem, and body image, indicating that progress had been made over the course of the program. Furthermore, qualitative data indicated that mothers of the participants observed positive behavioral changes related to eating and exercise patterns and participants demonstrated perception of these changes as well. PMID:16570643

  19. Evaluation of a photographic method to measure dental angulation

    PubMed Central

    Amorim, Jordana Rodrigues; Macedo, Diogo de Vasconcelos; Normando, David

    2014-01-01

    Objective: To analyze the reliability and reproducibility of a simplified method for measuring dental angulation using digital photographs of plaster dental casts. Methods: Standardized digital photographs of plaster casts were taken and subsequently imported into an angle-measurement graphics program to obtain the measurements. The procedures were repeated to evaluate random error and to assess reproducibility through intraclass correlation. The sample consisted of 12 orthodontically untreated individuals (six male and six female) with full permanent dentition. The analyses were carried out bilaterally, generating 24 measurements. Results: The random error showed variation of 0.77 to 2.55 degrees for tooth angulation. The statistical analysis revealed that the method presents excellent reproducibility (p < 0.0001) for all teeth except the upper premolars, for which reproducibility was nonetheless statistically significant (p < 0.001). Conclusion: The proposed method is reliable enough to justify its use in scientific research as well as in clinical practice. PMID:24945518

  20. Evaluation of mercury speciation by EPA (Draft) Method 29

    SciTech Connect

    Laudal, D.L.; Heidt, M.K.; Nott, B.

    1995-11-01

    The 1990 Clean Air Act Amendments require that the U.S. Environmental Protection Agency (EPA) assess the health risks associated with mercury emissions. The law also requires a separate assessment of the health risks posed by the emission of 189 toxic chemicals (including mercury) from electric utility steam-generating units. In order to conduct a meaningful assessment of health and environmental effects, we must have, among other things, a reliable and accurate method to measure mercury emissions. In addition, the rate of mercury deposition and the type of control strategies used may depend upon the form of mercury emitted (i.e., whether it is oxidized or elemental). It has been speculated that EPA (Draft) Method 29 can speciate mercury by selective absorption; however, this claim has yet to be proven. The Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) have contracted with the Energy & Environmental Research Center (EERC) at the University of North Dakota to evaluate EPA (Draft) Method 29 at the pilot-scale level. The objective of the work is to determine whether EPA (Draft) Method 29 can reliably quantify and speciate mercury in the flue gas from coal-fired boilers.

  1. Quality Evaluation of Pork with Various Freezing and Thawing Methods

    PubMed Central

    2014-01-01

    In this study, the physicochemical and sensory quality characteristics due to the influence of various thawing methods on electro-magnetic and air blast frozen pork were examined. The packaged pork samples, which were frozen by air blast freezing at −45℃ or electro-magnetic freezing at −55℃, were thawed using 4 different methods: refrigeration (4±1℃), room temperature (RT, 25℃), cold water (15℃), and microwave (2450 MHz). Analyses were carried out to determine the drip and cooking loss, water holding capacity (WHC), moisture content and sensory evaluation. Frozen pork thawed in a microwave indicated relatively less thawing loss (0.63-1.24%) than the other thawing methods (0.68-1.38%). The cooking loss after electro-magnetic freezing indicated 37.4% by microwave thawing, compared with 32.9% by refrigeration, 36.5% by RT, and 37.2% by cold water in ham. The thawing of samples frozen by electro-magnetic freezing showed no significant differences between the methods used, while the moisture content was higher in belly thawed by microwave (62.0%) after electro-magnetic freezing than refrigeration (54.8%), RT (61.3%), and cold water (61.1%). The highest overall acceptability was shown for microwave thawing after electro-magnetic freezing but there were no significant differences compared to that of the other samples. PMID:26761493

  2. Development of nondestructive evaluation methods for structural ceramics

    SciTech Connect

    Ellingson, W.A.; Koehl, R.D.; Wilson, J.A.; Stuckey, J.B.; Engel, H.P.

    1996-04-01

    Nondestructive evaluation (NDE) methods using three-dimensional microfocus X-ray computed tomographic imaging (3D XCT) were employed to map axial and radial density variations in hot-gas filters and heat exchanger tubes. 3D XCT analysis was conducted on (a) two 38-mm-OD, 6.5-mm-wall SiC/SiC heat exchanger tubes infiltrated by CVI; (b) eight 10-cm-diameter oxide/oxide heat exchanger tubes; and (c) one 26-cm-long Nextel fiber/SiC matrix hot-gas filter. The results show that radial and axial density uniformity, as well as porosity, can be assessed by 3D XCT. NDE methods are also under development to assess thermal barrier coatings, which are being developed to protect gas-turbine first-stage hot-section metallic substrates. Further, because both shop and field joining of CFCC materials will be necessary, work is now beginning on the development of NDE methods for joining.

  3. Single well tracer method to evaluate enhanced recovery

    DOEpatents

    Sheely, Jr., Clyde Q.; Baldwin, Jr., David E.

    1978-01-01

    Data useful for evaluating the effectiveness of, or for designing, an enhanced recovery process (one involving mobilizing and moving hydrocarbons through a hydrocarbon-bearing subterranean formation from an injection well to a production well by injecting a mobilizing fluid into the injection well) are obtained by a process which comprises sequentially: determining hydrocarbon saturation in a volume of the formation near a well bore penetrating the formation, injecting sufficient mobilizing fluid to mobilize and move hydrocarbons from a volume of the formation near the well bore, and determining by the single well tracer method a hydrocarbon saturation profile in the volume from which hydrocarbons were moved. The single well tracer method employed is disclosed by U.S. Pat. No. 3,623,842. The process is useful for evaluating surfactant floods, water floods, polymer floods, CO2 floods, caustic floods, micellar floods, and the like in the reservoir in much less time and at greatly reduced cost compared to conventional multi-well pilot tests.

  4. Survey and evaluation of aging risk assessment methods and applications

    SciTech Connect

    Sanzo, D.L.; Kvam, P.; Apostolakis, G.; Wu, J.; Milici, T.; Ghoniem, N.; Guarro, S.

    1993-11-01

    The Nuclear Regulatory Commission (NRC) initiated the nuclear power plant aging research (NPAR) program about six years ago to gather information about nuclear power plant aging. Since then, this program has collected a significant amount of information, largely qualitative, on plant aging and its potential effects on plant safety. However, this body of knowledge has not yet been integrated into formalisms that can be used effectively and systematically to assess plant risk resulting from aging, although models for assessing the effect of increasing failure rates on core damage frequency have been proposed. The purpose of this review is to survey the work conducted to address the aging of systems, structures, and components (SSCs) of nuclear power plants (NPPs), as well as the associated databases. The review takes a critical look at the need to revise probabilistic risk assessments (PRAs) so that they include the contribution to risk from plant aging, the adequacy of existing methods for evaluating this contribution, and the adequacy of the data that have been used in these evaluation methods. A preliminary framework is identified for integrating the aging of SSCs into the PRA, including the identification of the data needed for such an integration.

  5. A power flow method for evaluating vibration from underground railways

    NASA Astrophysics Data System (ADS)

    Hussein, M. F. M.; Hunt, H. E. M.

    2006-06-01

    One of the major sources of ground-borne vibration is the running of trains in underground railway tunnels. Vibration is generated at the wheel-rail interface, from where it propagates through the tunnel and surrounding soil into nearby buildings. An understanding of the dynamic interfaces between track, tunnel and soil is essential before engineering solutions to the vibration problem can be found. A new method has been developed to evaluate the effectiveness of vibration countermeasures. The method is based on calculating the mean power flow from the tunnel, with particular attention to the part of the power that radiates upwards, towards the depths at which building foundations are expected to be found. The mean power is calculated for an infinite train moving through the tunnel at constant velocity. An elegant mathematical expression for the mean power flow is derived, which can be used with any underground-tunnel model. To evaluate the effect of vibration countermeasures and track properties on power flow, a comprehensive three-dimensional analytical model is used. It consists of Euler-Bernoulli beams accounting for the rails and the track slab, coupled in the wavenumber-frequency domain to a thin shell representing the tunnel, embedded within an infinite continuum containing a cylindrical cavity and representing the surrounding soil.

  6. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    SciTech Connect

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth, taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event-initiated accident sequences, etc. Based on these goals, external event analysis may be considered a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.

  8. DEM Generation with WorldView-2 Images

    NASA Astrophysics Data System (ADS)

    Büyüksalih, G.; Baz, I.; Alkan, M.; Jacobsen, K.

    2012-07-01

    For planning purposes, a 42-km coastline strip of the Black Sea, starting at the Bosporus and extending westward with a width of approximately 5 km, was imaged by WorldView-2. Three stereo scenes were oriented first by 3D affine transformation and later by a bias-corrected RPC solution. The results are nearly the same, but they are limited by the identification of the control points in the images. Nevertheless, after blunder elimination by data snooping, root mean square discrepancies below 1 pixel were reached. The root mean square height discrepancy at the control points ranged from 0.5 m to 1.3 m, with base-to-height ratios between 1:1.26 and 1:1.80. Digital Surface Models (DSMs) with 4 m spacing were generated by least-squares matching with region growing, supported by image pyramids. A high percentage of the mountainous area is covered by forest, requiring the approximation based on image pyramids; in the forest area, approximation by region growing alone leads to larger gaps in the DSM. Owing to the good image quality of WorldView-2, the correlation coefficients reached by least-squares matching are high, and even in most forest areas a satisfying density of accepted points was reached. Two stereo models overlap over an area of 1.6 km by 6.7 km, allowing an accuracy evaluation. Small but nevertheless significant differences in scene orientation were eliminated by a least-squares shift of the two overlapping height models onto each other. The root mean square difference between the two independent DSMs is 1.06 m or, as a function of terrain inclination, 0.74 m + 0.55 m × tan(slope). The terrain inclination is 7° on average, with 12% exceeding 17°. The frequency distribution of height discrepancies is close to a normal distribution but, as usual, larger discrepancies occur more often than a normal distribution predicts. This can also be seen in the normalized median absolute deviation (NMAD), related to the 68% probability level, of 0.83 m.
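
    As a quick numeric illustration of the slope-dependent accuracy model reported above, the short Python sketch below evaluates RMS = 0.74 m + 0.55 m × tan(slope) at a few representative inclinations; the coefficients come from the abstract, while the sample slope values (the reported 7° average and the 17° threshold) are chosen for illustration.

        import math

        def dsm_rms_error(slope_deg, a=0.74, b=0.55):
            """Slope-dependent vertical RMS difference (m) between the two
            independent WorldView-2 DSMs: RMS = 0.74 m + 0.55 m * tan(slope)."""
            return a + b * math.tan(math.radians(slope_deg))

        # 7 deg is the reported average inclination; 12% of the area exceeds 17 deg.
        for slope in (0.0, 7.0, 17.0):
            print(f"slope {slope:4.1f} deg -> RMS difference {dsm_rms_error(slope):.2f} m")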

  9. Evaluation of automated brain MR image segmentation and volumetry methods.

    PubMed

    Klauschen, Frederick; Goldman, Aaron; Barra, Vincent; Meyer-Lindenberg, Andreas; Lundervold, Arvid

    2009-04-01

    We compare three widely used brain volumetry methods available in the software packages FSL, SPM5, and FreeSurfer and evaluate their performance using simulated and real MR brain data sets. We analyze the accuracy of gray and white matter volume measurements and their robustness against changes of image quality using the BrainWeb MRI database. These images are based on "gold-standard" reference brain templates. This allows us to assess between-segmenter (same data set, different method) and within-segmenter (same method, variation of image quality) comparability, for both of which we find pronounced variations in segmentation results for gray and white matter volumes. The calculated volumes deviate by up to >10% from the reference values for gray and white matter, depending on method and image quality. Sensitivity was best for SPM5; volumetric accuracy for gray and white matter was similar in SPM5 and FSL and better than in FreeSurfer. For BrainWeb data of constant image quality, FSL showed the highest stability for white matter (<5%) and FreeSurfer for gray matter (6.2%). Between-segmenter comparisons show discrepancies of up to >20% for the simulated data and 24% on average for the real data sets, whereas within-method performance analysis uncovered volume differences of up to >15%. Since the discrepancies between results reach the same order of magnitude as volume changes observed in disease, these effects limit the usability of the segmentation methods for following volume changes in individual patients over time and should be taken into account during the planning and analysis of brain volume studies. PMID:18537111

  10. Interpolation and elevation errors: the impact of the DEM resolution

    NASA Astrophysics Data System (ADS)

    Achilleos, Georgios A.

    2015-06-01

    Digital Elevation Models (DEMs) are developing and evolving at a fast pace, given the progress of computer science and technology. This development, though, is not accompanied by an advancement of knowledge about the quality of the models and their inherent inaccuracy. On most occasions the user is not aware of this quality and is thus not aware of the corresponding uncertainty of derived products. Extensive research has been conducted - and still is - in this direction. The research presented in this paper analyzes the behavior of the elevation errors recorded in a DEM when the DEM resolution is altered upon application of the interpolation algorithm. Contour lines from a topographical map are used as input data. Elevation errors are calculated at the positions of the initial input data and wherever the elevation is known. The recorded elevation errors are then analyzed in order to reach conclusions about their distribution and the way in which they occur.

  11. The Global TanDEM-X DEM: Production Status and First Validation Results

    NASA Astrophysics Data System (ADS)

    Huber, M.; Gruber, A.; Wendleder, A.; Wessel, B.; Roth, A.; Schmitt, A.

    2012-07-01

    The TanDEM-X mission will derive a global digital elevation model (DEM) with satellite SAR interferometry. Two radar satellites (TerraSAR-X and TanDEM-X) will map the Earth at a resolution and accuracy corresponding to an absolute height error of 10 m and a relative height error of 2 m for 90% of the data. In order to fulfill the height requirements, in general two global coverages are acquired and processed. Besides the final TanDEM-X DEM, an intermediate DEM with reduced accuracy is produced after the first coverage is completed. The last step in the whole workflow for generating the TanDEM-X DEM is the calibration of remaining systematic height errors and the merging of single acquisitions into 1°x1° DEM tiles. In this paper, the current status of generating the intermediate DEM and first validation results based on GPS tracks, laser scanning DEMs, SRTM data and ICESat points are shown for different test sites.

  12. Evaluation of a method for assessing pulmonary function in laryngectomees.

    PubMed

    Castro, M A; Dedivitis, R A; Macedo, A G

    2011-08-01

    In total laryngectomy, the impairment of pulmonary function reflects the sum of pre- and post-operative ventilatory changes. Objective information on the respiratory condition of laryngectomees, as assessed in the pulmonary function laboratory, is somewhat limited, perhaps because of methodological difficulties. The aim of our study was to evaluate the reproducibility of a method employed to assess pulmonary function in laryngectomized patients. The experimental extra-tracheal device was set up with a silicone adapter connected through a cardboard tube to the skin around the tracheostoma. Pulmonary function tests included measurements of forced vital capacity, forced expiratory volume in 1 second and the Tiffeneau index in 3 consecutive evaluations of 11 patients who had undergone total laryngectomy. The control group comprised 11 non-laryngectomized patients evaluated by conventional spirometry. The evaluators were asked to report possible technical failures and to verify the reproducibility of the curves resulting from the tests. The use of the silicone adapter and skin adhesive provided a complete, airtight seal of the system in all cases. The presence of the tracheo-oesophageal prosthesis did not negatively affect the test results. All patients gave the device the maximum rating for both comfort and acceptance. The values are comparable in both groups, indicating the accuracy of the proposed methodology. All examinations were reproducible. After total laryngectomy, pulmonary function testing with an extra-tracheal device is not only reliable but also easy to perform in a routine out-patient setting. The methodology did not present air leaks and was well accepted by all patients tested. PMID:22065707

  13. Using crowdsourcing to evaluate published scientific literature: methods and example.

    PubMed

    Brown, Andrew W; Allison, David B

    2014-01-01

    Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4% and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and the opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, the data were aggregated and analyzed, and showed no significant association between popular opinion and the attention a paper received, as measured by SCImago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low
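
    The moderate agreement (κ = 0.586) reported for the first two raters is an ordinary Cohen's kappa, which can be reproduced with scikit-learn as in the minimal sketch below; the two rating vectors are fabricated for illustration and happen to yield a similar kappa.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical yes/no judgments ("is this a human nutrition-obesity
        # study?") from the first two raters of ten abstracts; toy data only.
        rater1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

        print(f"Cohen's kappa = {cohen_kappa_score(rater1, rater2):.3f}")  # 0.583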

  14. Tropical-Forest Biomass Dynamics from X-Band TanDEM-X Data

    NASA Astrophysics Data System (ADS)

    Treuhaft, R. N.; Neumann, M.; Keller, M. M.; Goncalves, F. G.; Santos, J. R.

    2015-12-01

    The measurement of change in above-ground biomass (AGB) is key to understanding the carbon sink/source nature of tropical forests. Interferometric X-band radar from the only orbiting interferometer, TanDEM-X, shows sensitivity to standing biomass up to at least 300 Mg/ha. This sensitivity may be due in part to the propagation of the shorter X-band wavelength (0.031 m) through holes in the canopy. This talk focuses on estimating the change in AGB over time. Interferometric baselines from TanDEM-X have been obtained in Tapajós National Forest in the Brazilian Amazon over a 4-year period, from 2011 to 2015. Lidar measurements were also acquired during this period. Field measurements of height, height-to-base-of-crown, species, diameter, and position were acquired in 2010, 2013, and 2015. We show interferometric phase height changes, and suggest how these phase height changes are related to biomass change. First we show height changes between baselines separated by one month, over which we expect no change in AGB, to evaluate precision. We find an RMS of <2 m in phase height over one month for ~85 stands, corresponding to about a 10% measurement of change, which suggests we can detect a change in AGB of about 17 Mg/ha at Tapajós. In contrast, interferometric height changes over the period 2011 to 2014 have larger RMS scatters of >3 m, due to actual change. Most stands show changes in interferometric phase height consistent with regrowth (~10 Mg/ha/yr), and several stands show abrupt, large changes in phase height (>10 m) due to logging and natural disturbance. At the end of 2015, we will acquire more TanDEM-X data over Tapajós, including an area subjected to selective logging. We are doing "before" (March 2015) and "after" (October 2015) fieldwork to understand the signature of change due to selective logging in TanDEM-X interferometric data.

  15. Temporal monitoring of Bardarbunga volcanic activity with TanDEM-X

    NASA Astrophysics Data System (ADS)

    Rossi, C.; Minet, C.; Fritz, T.; Eineder, M.; Erten, E.

    2015-12-01

    On August 29, 2014, volcanic activity started in the lava field of Holuhraun, northeast of the Bardarbunga caldera in Iceland. The activity was declared finished on February 27, 2015, thus lasting about 6 months. During these months the magma chamber below the caldera slowly emptied, causing the rare event of a caldera collapse. In this scenario, TanDEM-X remote sensing data are of particular interest: by producing high-resolution, accurate elevation models of the caldera, it is possible to evaluate volume losses and topographic changes, increasing our knowledge of the dynamics of the volcanic activity. Five TanDEM-X InSAR acquisitions were commanded between August 01, 2014 and November 08, 2014, two before the eruption and three afterwards. To fully cover the volcanic activity, the lava flow area northwest of the caldera was also monitored, and a couple of acquisitions were employed to reveal the subglacial graben structure and the lava path. In this context, the expected elevation accuracy is studied on two levels: absolute height accuracy is analyzed by inspecting the propagation of the X-band signal in the imaged medium, and relative height accuracy is analyzed by investigating the InSAR system parameters and the local geomorphology. It is shown that the system is very accurate, with mean height errors below one meter; neither InSAR processing issues (e.g., phase unwrapping errors) nor complex DEM calibration aspects pose problems. The caldera is imaged in its entirety, and new cauldron formations and, in general, the complete restructuring of the glacial volcanic system are well represented. An impressive caldera volume loss of about 1 billion cubic meters is measured over about two months. The dyke propagation from the Bardarbunga cauldron to the Holuhraun lava field is also revealed, and a graben structure with a width of up to 1 km and a subsidence of a few meters is derived.

  16. An evaluation of teaching methods in the introductory physics classroom

    NASA Astrophysics Data System (ADS)

    Savage, Lauren Michelle Williams

    The introductory physics mechanics course at the University of North Carolina at Charlotte has a history of relatively high DFW rates. In 2011, the course was redesigned from the traditional lecture format to the inverted (flipped) classroom format. This format inverts the classroom by introducing material in a video assigned as homework, while the instructor conducts problem-solving activities and guides discussions during the regular meetings. The format focuses on student-centered learning and is more interactive and engaging. To evaluate the effectiveness of the new method, final exam data over the past 10 years were mined and the pass rates examined. A normalization condition was developed to evaluate semesters equally. The two teaching methods were compared using a grade distribution across multiple semesters. Students in the inverted class outperformed those in the traditional class: "A"s increased by 22% and "B"s increased by 38%. The final exam pass rate increased by 12% under the inverted classroom approach. The same analysis was used to compare the written and online final exam formats. Surprisingly, no students scored "A"s on the online final; however, the percentage of "B"s increased by 136%. Combining documented best practices from a literature review with personal observations of student performance and attitudes from first-hand classroom experience as a teaching assistant in both teaching methods, reasons are given to support the continued use of the inverted classroom approach as well as the online final. Finally, specific recommendations are given to improve the course structure where weaknesses have been identified.

  17. Development of nondestructive evaluation methods for structural ceramics

    SciTech Connect

    Ellingson, W.A.; Koehl, R.D.; Stuckey, J.B.; Sun, J.G.; Engel, H.P.; Smith, R.G.

    1997-06-01

    Development of nondestructive evaluation (NDE) methods for application to fossil energy systems continues in three areas: (a) mapping axial and radial density gradients in hot-gas filters, (b) characterizing the quality of continuous fiber ceramic matrix composite (CFCC) joints and (c) characterizing and detecting defects in thermal barrier coatings. In this work, X-ray computed tomographic imaging was further developed and used to map variations in the axial and radial density of two full-length (2.3-m) hot-gas filters. The two filters differed in through-wall density because of the thickness of the coating on the continuous fibers. Differences in axial and through-wall density were clearly detected. Through-transmission infrared imaging with a high-sensitivity focal-plane-array camera was used to assess joint quality in two sets of SiC/SiC CFCC joints. High-frame-rate data capture suggests that the infrared imaging method holds potential for the characterization of CFCC joints. Work was undertaken to develop NDE methods that can be used to evaluate electron-beam physical-vapor-deposited coatings with platinum-aluminide (Pt-Al) bond coats. Zirconia coatings with thicknesses of 125 µm (0.005 in.), 190 µm (0.0075 in.), and 254 µm (0.010 in.), with a Pt-Al bond coat on René N5 Ni-based superalloy, were studied by infrared imaging. Currently, it appears that thickness variations, as well as thermal properties, can be assessed by infrared technology.

  18. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil-water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors: the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. An expression for the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that the Koc correlation is not likely to be applicable to chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
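
    To make the reported regression concrete, the sketch below applies log Koc = 0.903 log Kow + 0.094 and propagates the estimate into a retardation factor using the standard linear-sorption form R = 1 + (ρb/n)·foc·Koc; the bulk density, porosity, and organic carbon fraction are illustrative assumptions, not values from the paper.

        def log_koc_from_log_kow(log_kow):
            """Regression reported above (72 chemicals, 1.7 <= log Kow <= 7.0)."""
            return 0.903 * log_kow + 0.094

        def retardation_factor(log_kow, f_oc, bulk_density=1.6, porosity=0.4):
            """Standard linear-sorption retardation: R = 1 + (rho_b/n) * f_oc * Koc.
            Soil parameters (kg/L and dimensionless) are illustrative assumptions."""
            koc = 10.0 ** log_koc_from_log_kow(log_kow)  # L/kg
            return 1.0 + (bulk_density / porosity) * f_oc * koc

        # Two hypothetical chemicals of different hydrophobicity (cf. the abstract).
        for log_kow in (2.0, 6.0):
            print(f"log Kow = {log_kow}: log Koc = {log_koc_from_log_kow(log_kow):.2f}, "
                  f"R = {retardation_factor(log_kow, f_oc=0.01):.1f}")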

  19. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    During the last years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information for evaluating such high-resolution simulations, and new spatial verification methods have been proposed to address these limitations. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using these feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - the Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained (enumerated in the sketch below). The results of these simulations are evaluated against data obtained from a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between the simulation results and the radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
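
    The twelve setups are simply the cross-product of the three scheme families (3 microphysics × 2 boundary layer × 2 cumulus = 12); a minimal enumeration:

        from itertools import product

        microphysics = ["Ferrier", "WSM6", "Goddard"]
        boundary_layer = ["YSU", "MYJ"]
        cumulus = ["Kain-Fritsch", "BMJ"]

        # Crossing the three scheme families yields the twelve WRF model setups.
        setups = list(product(microphysics, boundary_layer, cumulus))
        for i, (mp, pbl, cu) in enumerate(setups, start=1):
            print(f"setup {i:2d}: {mp} / {pbl} / {cu}")
        print(f"total: {len(setups)} setups")  # 3 * 2 * 2 = 12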

  20. Analysis and Validation of Grid DEM Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered one of the most relevant types of geospatial data for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m²) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data were decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a GMRF-interpolated Digital Surface Model (DSM) with 1 m grid spacing, whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the gaps in the LiDAR-derived DSM after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Regarding accuracy, both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher-quality DTMs.
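
    The decimate-and-check protocol described above is easy to reproduce for the TLI baseline, since SciPy's Delaunay-based linear interpolation is exactly Triangulation with Linear Interpolation. The sketch below uses a synthetic surface in place of the PNOA LiDAR cloud (the GMRF interpolator itself is not sketched); all sizes and the decimation fraction are illustrative.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(42)

        # Synthetic stand-in for a LiDAR point cloud: x, y in metres, z a smooth surface.
        n = 20000
        xy = rng.uniform(0.0, 200.0, size=(n, 2))
        z = 5.0 * np.sin(xy[:, 0] / 30.0) + 3.0 * np.cos(xy[:, 1] / 25.0)

        # Decimate: keep a fraction of points as observations, the rest as checkpoints.
        keep = rng.random(n) < 0.10          # e.g. 90% of points removed
        obs_xy, obs_z = xy[keep], z[keep]
        chk_xy, chk_z = xy[~keep], z[~keep]

        # TLI baseline: Triangulation with Linear Interpolation at the checkpoints.
        pred = griddata(obs_xy, obs_z, chk_xy, method="linear")
        valid = ~np.isnan(pred)              # checkpoints outside the convex hull are skipped

        rmse = np.sqrt(np.mean((pred[valid] - chk_z[valid]) ** 2))
        print(f"vertical RMSE at {valid.sum()} checkpoints: {rmse:.3f} m")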

  1. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set the camera separation equal to the nominal human interpupillary distance; the perceived depth on the display is then identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented, in addition to an existing fixed depth mapping method (a sketch of such a fixed mapping is given below). (3) Depth of Field Simulation: apply a depth-of-field (DOF) blur effect to stereoscopic images, so that only objects inside the DOF are viewed in full sharpness, while objects far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ between individual stereoscopic displays and that viewers can cope with a much larger perceived-depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived-depth quality control in 3D cinematography.
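
    For intuition, the sketch below implements a fixed linear depth mapping of the kind referenced in approach (2): scene depths are compressed into a display depth budget. It is an illustrative reduction, not the authors' dynamic algorithm, and all numeric ranges are hypothetical.

        def map_depth(z_scene, scene_near, scene_far, disp_near, disp_far):
            """Fixed linear depth mapping: scene depths [scene_near, scene_far]
            are compressed into the display's comfortable perceived-depth range
            [disp_near, disp_far]. The dynamic variant discussed above would
            re-estimate scene_near/scene_far per shot instead of fixing them."""
            t = (z_scene - scene_near) / (scene_far - scene_near)
            t = min(max(t, 0.0), 1.0)  # clamp depths outside the scene range
            return disp_near + t * (disp_far - disp_near)

        # Hypothetical numbers: a 2-10 m scene mapped onto a +/-50 mm depth budget.
        for z in (2.0, 6.0, 10.0):
            print(f"scene depth {z:5.1f} m -> "
                  f"display depth {map_depth(z, 2.0, 10.0, -50.0, 50.0):+.1f} mm")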

  2. Evaluation of a bayesian coalescent method of species delimitation.

    PubMed

    Zhang, Chi; Zhang, De-Xing; Zhu, Tianqi; Yang, Ziheng

    2011-12-01

    A Bayesian coalescent-based method has recently been proposed to delimit species using multilocus genetic sequence data. Posterior probabilities of different species delimitation models are calculated using reversible-jump Markov chain Monte Carlo algorithms. The method accounts for species phylogenies and coalescent events in both extant and extinct species and accommodates lineage sorting and uncertainties in the gene trees. Although the method is theoretically appealing, its utility in practical data analysis is yet to be rigorously examined. In particular, the analysis may be sensitive to priors on ancestral population sizes and on species divergence times, and to gene flow between species. Here we conduct a computer simulation to evaluate the statistical performance of the method, such as the false negatives (the error of lumping multiple species into one) and false positives (the error of splitting one species into several). We found that the correct species model was inferred with high posterior probability with only one or two loci when 5 or 10 sequences were sampled from each population, or with 50 loci when only one sequence was sampled. We also simulated data allowing migration under a two-species model, a mainland-island model and a stepping-stone model to assess the impact of gene flow (hybridization or introgression). The behavior of the method was diametrically different depending on the migration rate. Low rates of <0.1 migrants per generation had virtually no effect, so that the method, while assuming no hybridization between species, identified distinct species despite small amounts of gene flow. This behavior appears to be consistent with biologists' practice. In contrast, higher migration rates of ≥10 migrants per generation caused the method to infer one species. At intermediate levels of migration, the method is indecisive. Our results suggest that Bayesian analysis under the multispecies coalescent model may provide important insights into

  3. Development of nondestructive evaluation methods for structural ceramics.

    SciTech Connect

    Ellingson, W. A.

    1998-08-19

    During the past year, our work on nondestructive evaluation (NDE) methods focused on the development and application of these methods to technologies such as ceramic matrix composite (CMC) hot-gas filters, CMC high-temperature heat exchangers, and ceramic/ceramic joining. Such technologies are critical to the "Vision 21 Energy-Plex Fleet" of modular, high-efficiency, low-emission power systems. Specifically, our NDE work has continued toward faster, higher-sensitivity, volumetric X-ray computed tomographic imaging with new amorphous silicon detectors to detect and measure axial and radial density variations in hot-gas filters and heat exchangers; explored the potential use of high-speed focal-plane-array infrared imaging technology to detect delaminations and variations in the thermal properties of SiC/SiC heat exchangers; and explored various NDE methods to characterize CMC joints in cooperation with industrial partners. Work this year also addressed support of the Southern Company Services Inc. Power Systems Development Facility, where NDE is needed to assess the condition of hot-gas candle filters. This paper presents the results of these efforts.

  4. Global metabolite analysis of yeast: evaluation of sample preparation methods.

    PubMed

    Villas-Bôas, Silas G; Højer-Pedersen, Jesper; Akesson, Mats; Smedsgaard, Jørn; Nielsen, Jens

    2005-10-30

    Sample preparation is considered one of the limiting steps in microbial metabolome analysis. Eukaryotes and prokaryotes behave very differently during the several steps of classical sample preparation methods for the analysis of metabolites. Even among eukaryotes there is a vast diversity of cell structures that makes it imprudent to blindly adopt protocols designed for a specific group of microorganisms. We have therefore reviewed and evaluated whole sample preparation procedures for the analysis of yeast metabolites. Our focus has been on the current needs in metabolome analysis, namely the analysis of a large number of metabolites with very diverse chemical and physical properties. This work reports the leakage of intracellular metabolites observed during the quenching of yeast cells with cold methanol solution, the efficacy of six different methods for the extraction of intracellular metabolites, and the losses noticed during sample concentration by lyophilization and solvent evaporation. A more reliable procedure is suggested for quenching yeast cells with cold methanol solution, followed by extraction of intracellular metabolites with pure methanol. The method can be combined with reduced-pressure solvent evaporation and therefore represents an attractive sample preparation procedure for high-throughput metabolome analysis of yeasts. PMID:16240456

  5. Improvement of DEM Generation from ASTER Images Using Satellite Jitter Estimation and Open Source Implementation

    NASA Astrophysics Data System (ADS)

    Girod, L.; Nuth, C.; Kääb, A.

    2015-12-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system embarked on the Terra (EOS AM-1) satellite has been a source of stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. The potential of these data in terms of geomorphological analysis and change detection in three dimensions is unrivaled and needs to be exploited. However, the quality of the DEMs and ortho-images currently delivered by NASA (ASTER DMO products) is often insufficient for a number of applications, such as mountain glacier mass balance. For this study, the use of Ground Control Points (GCPs) or of other ground truth was rejected because of the global "big data" type of processing that we hope to perform on the ASTER archive. We have therefore developed a tool to compute Rational Polynomial Coefficient (RPC) models from the ASTER metadata, and a method that improves the quality of the matching by identifying and correcting jitter-induced cross-track parallax errors. Our method outputs more accurate DEMs with fewer unmatched areas and reduced overall noise. The algorithms were implemented in the open-source photogrammetric library and software suite MicMac.

  6. Tectonic development of the Northwest Bonaparte Basin, Australia by using Digital Elevation Model (DEM)

    NASA Astrophysics Data System (ADS)

    Wahid, Ali; Salim, Ahmed Mohamed Ahmed; Ragab Gaafar, Gamal; Yusoff, AP Wan Ismail Wan

    2016-02-01

    The Bonaparte Basin, which is mostly offshore, is situated on Australia's NW continental margin and covers an area of approximately 270,000 km². With a number of Paleozoic and Mesozoic sub-basins and platform areas, the basin is structurally complex. This research established geologic and geomorphologic studies using a Digital Elevation Model (DEM) as an alternative approach to morphostructural analysis for unraveling the geological complexities. Although DEMs have been in use since the 1990s, they have still not become a common tool for mapping studies. The work comprised regional structural analysis with the help of integrated elevation data, satellite imagery, openly available topographic images, and internal geological maps with interpreted seismic data. The structural maps of the study area were georeferenced and then overlaid onto SRTM data and satellite images for combined interpretation, facilitating a digital elevation model of the study area. The methodology adopted is to evaluate and redefine the geodynamic processes involved in the formation of the Bonaparte Basin. The main objective is to establish the geological history using a digital elevation model, and the work will be useful for incorporating the different tectonic events that occurred at different geological times into a digital elevation model. The integrated tectonic analysis of different digital data sets benefitted substantially from combining them into a common digital database, while the visualization software facilitates the overlay and combined interpretation of different data sets, helping to reveal hidden information not obvious or accessible otherwise for regional analysis.

  7. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    PubMed

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain, presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis. PMID:27379267
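
    For readers unfamiliar with TOPSIS, the sketch below implements the classic crisp version of the ranking step; the Neofuzzy variant above additionally maps linguistic judgments to fuzzy numbers first, which is not reproduced here, and the supplier scores, weights, and criteria are hypothetical.

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Classic (crisp) TOPSIS: vector-normalise, weight, measure distances
            to the ideal and anti-ideal solutions, return closeness coefficients."""
            m = matrix / np.linalg.norm(matrix, axis=0)       # vector normalisation
            v = m * weights                                   # weighted normalised matrix
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)                    # closeness in [0, 1]

        # Hypothetical suppliers scored on cost, quality, environmental, social criteria.
        scores = np.array([[70.0, 8.0, 6.0, 7.0],
                           [55.0, 7.0, 8.0, 6.0],
                           [60.0, 9.0, 7.0, 8.0]])
        weights = np.array([0.3, 0.3, 0.2, 0.2])
        benefit = np.array([False, True, True, True])         # cost is a cost criterion

        print(np.round(topsis(scores, weights, benefit), 3))  # higher is better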

  8. Development of nondestructive evaluation methods for ceramic coatings.

    SciTech Connect

    Sun, J. G.

    2007-01-01

    Various nondestructive evaluation (NDE) technologies are being developed to advance the knowledge of ceramic coatings for components in the hot-gas path of advanced, low-emission gas-fired turbine engines. The ceramic coating systems being studied by NDE include thermal barrier coatings (TBCs) and environmental barrier coatings (EBCs). TBCs are under development for vanes, blades and combustor liners to allow hotter gas-path temperatures, and EBCs are under development to reduce environmental damage to high-temperature components made of ceramic matrix composites (CMCs). Data provided by NDE methods will be used to (a) assess the reliability of new coating application processes, (b) identify defective components that could cause unscheduled outages, (c) track growth rates of defects during use in engines and (d) allow rational judgments for replace/repair/re-use decisions on components.

  9. Assessment of oil polarity: comparison of evaluation methods.

    PubMed

    El-Mahrab-Robert, M; Rosilio, V; Bolzinger, M-A; Chaminade, P; Grossiord, J-L

    2008-02-01

    In multiple emulsion systems, oily or aqueous transfers may occur between the dispersed droplets through the continuous phase. These transfers are controlled both by the surfactant system (micellar transport) and by the partial solubility of one phase in another (molecular transport). The latter could be anticipated from knowledge of oil polarity, if this information could easily be obtained. In this work, the relative polarity of eight oils used for various purposes was evaluated by comparing their dielectric requirement for solubilization, their interfacial tension and chromatographic analyses. The results showed the complementarity of HPLC analysis and interfacial tension measurements and their superiority over the solubilization method for classifying oils as a function of their polarity. PMID:17728082

  10. Entropy-based method to evaluate the data integrity

    NASA Astrophysics Data System (ADS)

    Peng, Xu; Tianyu, Ma; Yongjie, Jin

    2006-12-01

    The projection stage of single photon emission computed tomography (SPECT) is discussed to analyze the characteristics of information transmission and to evaluate data integrity. Information is transferred from the source to the detector in the photon-emission process. In the projection stage, the integrity of the projection data can be assessed by an information entropy, namely the conditional entropy representing the average uncertainty of the source object given the projection data. Simulations were performed to study the projection data of emission computed tomography with a pinhole collimator, and several types of collimators were treated. The results demonstrate that the conditional entropy reflects the data integrity and indicates how well the algorithms are matched to the geometry. A new method for assessing data integrity is thus provided to help decision makers improve the quality of image reconstruction.
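
    The integrity measure described above is a conditional entropy H(source | projection). The minimal sketch below computes it from a joint probability table, with toy numbers standing in for simulated SPECT data, to show that a sharper system leaves less residual uncertainty about the source.

        import numpy as np

        def conditional_entropy(joint):
            """H(S | P) = H(S, P) - H(P) in bits, for a joint probability table
            with source states S in rows and projection outcomes P in columns."""
            joint = joint / joint.sum()
            p_proj = joint.sum(axis=0)                      # marginal over projections
            nz = joint > 0
            h_joint = -np.sum(joint[nz] * np.log2(joint[nz]))
            nzp = p_proj > 0
            h_proj = -np.sum(p_proj[nzp] * np.log2(p_proj[nzp]))
            return h_joint - h_proj

        # Toy joint tables: a sharper system (less blur) leaves less residual
        # uncertainty about the source, i.e. a lower conditional entropy.
        sharp = np.array([[0.45, 0.05],
                          [0.05, 0.45]])
        blurred = np.array([[0.30, 0.20],
                            [0.20, 0.30]])
        print(f"H(S|P), sharp:   {conditional_entropy(sharp):.3f} bits")
        print(f"H(S|P), blurred: {conditional_entropy(blurred):.3f} bits")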

  11. Evaluating the Senior Companion Program: a mixed-method approach.

    PubMed

    Butler, Sandra S

    2006-01-01

    This article reports on a mixed-method assessment of the Senior Companion Program (SCP), a federal program which provides volunteer opportunities with small stipends to low-income older adults, 60 years of age and older, who provide companionship and offer assistance to frail community elders. Through four standardized scales and open-ended questions regarding the impact of the SCP in their lives, 34 Senior Companion volunteers and 32 of their clients were interviewed. Informants reported relatively large social networks and low levels of depression and loneliness. Thematic analysis of the qualitative data revealed the benefits of the program for both volunteers and their clients. Themes emerging from the rich narratives included: companionship, independence, reduced anxiety, giving, and rewards. The article concludes with a suggested brief evaluation instrument that directors of SCP programs, and other similar programs, can use to collect data on the impact of their program--something that is required, but often difficult to do. PMID:16901877

  13. An evaluation of methods for scaling aircraft noise perception

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1971-01-01

    One hundred and twenty recorded sounds, including jets, turboprops, piston-engined aircraft and helicopters, were rated by a panel of subjects in a paired-comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidths. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band-level summation rule. A simple correction is proposed that will enable these scales to properly account for the experimental observations.

  14. Evaluation of heart rate changes: electrocardiographic versus photoplethysmographic methods

    NASA Technical Reports Server (NTRS)

    Low, P. A.; Opfer-Gehrking, T. L.; Zimmerman, I. R.; O'Brien, P. C.

    1997-01-01

    The heart rate (HR) variation during forced deep breathing (HRDB) and during the Valsalva maneuver (Valsalva ratio; VR) are the two most widely used tests of cardiovagal function in human subjects. The HR is usually derived from a continuously running electrocardiographic (ECG) recording. Recently, HR derived from the arterial waveform became available on the Finapres device (FinapHR), but its ability to detect rapid changes in HR remained uncertain. We therefore evaluated HRDB and VR derived from FinapHR, using ECG-derived HR (ECGHR) recordings as the standard. We also compared the averaged HR on the Finapres (Finapav) with beat-to-beat Finapres (FinapBB) values. Studies were undertaken in 12 subjects with large HR variations: age 34.5 +/- 9.3 (SD) years; six males and six females. FinapBB values were superimposable on ECGHR for both HRDB and VR. In contrast, Finapav failed to follow ECGHR for HRDB and followed ECGHR with a lag for the VR. To evaluate statistically how closely FinapHR approximated ECGHR, we undertook regression analysis using mean values for each subject, and to compare the two methods we evaluated the significance of the difference between test and standard values. For HRDB, FinapBB reproducibly recorded HR (R2 = 0.998) and was significantly (p = 0.001) better than Finapav (R2 = 0.616; p < 0.001). For VR, HRBB generated a VR that was not significantly different from the correct values, while HRav generated a value that was slightly but consistently lower than the correct values (p < 0.001). We conclude that FinapHR reliably records HR variations in the beat-to-beat mode for cardiovascular HR tests.
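
    The failure of the averaged mode can be illustrated with synthetic data: a deep-breathing HR oscillation survives a beat-to-beat readout but is attenuated by a moving average. The sketch below is a toy reconstruction of that effect, not the study's data; the trace shapes, noise level, and window length are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic ECG-derived HR trace with a deep-breathing oscillation, a
        # beat-to-beat estimate tracking it closely, and a 9-beat moving average.
        t = np.arange(120)
        ecg_hr = 70 + 15 * np.sin(2 * np.pi * t / 12)             # ~30 bpm peak-to-trough
        finap_bb = ecg_hr + rng.normal(0.0, 0.5, t.size)           # beat-to-beat mode
        finap_av = np.convolve(ecg_hr, np.ones(9) / 9.0, "valid")  # averaged mode

        def hr_range(hr):
            """HRDB-style metric: peak-to-trough heart-rate difference in bpm."""
            return float(hr.max() - hr.min())

        print(f"HR range, ECG:          {hr_range(ecg_hr):5.1f} bpm")
        print(f"HR range, beat-to-beat: {hr_range(finap_bb):5.1f} bpm")
        print(f"HR range, averaged:     {hr_range(finap_av):5.1f} bpm  (variation lost)")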

  15. An effective method for incoherent scattering radar's detecting ability evaluation

    NASA Astrophysics Data System (ADS)

    Lu, Ziqing; Yao, Ming; Deng, Xiaohua

    2016-06-01

    Ionospheric incoherent scatter radar (ISR), which is used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and a hundred-meter-level antenna aperture. The crucial purpose of this detection technology is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the plasma echoes. Because ISR echoes are very weak, owing to the small radar cross-section of the target, estimating detecting ability is significantly instructive and meaningful for ISR system design. In this paper, we evaluate detecting ability through the signal-to-noise ratio (SNR). The soft-target radar equation is deduced in a form applicable to ISR; data from the International Reference Ionosphere model are then used to simulate the SNR of echoes, and the simulation is compared with SNR measured by the European Incoherent Scatter Scientific Association and the Advanced Modular Incoherent Scatter Radar. The simulation results show good consistency with the measured SNR. For ISR, this paper presents the first comparison between calculated SNR and radar measurements; detecting ability can be improved by increasing SNR. This effective method for evaluating ISR detecting ability provides a basis for radar system design.

  16. Study Methods to Characterize and Implement Thermography Nondestructive Evaluation (NDE)

    NASA Technical Reports Server (NTRS)

    Walker, James L.

    1998-01-01

    The limits and conditions under which an infrared thermographic nondestructive evaluation can be utilized to assess the quality of aerospace hardware are demonstrated in this research effort. The primary focus of this work is on applying thermography to the inspection of advanced composite structures such as would be found in the International Space Station Instrumentation Racks, Space Shuttle Cargo Bay Doors, Bantam RP-1 tank or RSRM Nose Cone. Here, the detection of delamination, disbond, inclusion and porosity type defects is of primary interest. In addition to composites, an extensive research effort has been initiated to determine how well a thermographic evaluation can detect leaks and disbonds in pressurized metallic systems (e.g., the Space Shuttle Main Engine nozzles). In either case, research into developing practical inspection procedures was conducted, and thermographic inspections were performed on a myriad of test samples, subscale demonstration articles and simulated flight hardware. All test samples were fabricated as close to their respective structural counterparts as possible, except with intentional defects for NDE qualification. As an added benefit of this effort to create simulated defects, methods were devised for defect fabrication that may be useful in future NDE qualification ventures.

  18. [Methods for postoperative evaluation of complete excision of the mesorectum].

    PubMed

    Sterk, P; Nagel, T; Günter, S; Schubert, F; Klein, P

    2000-01-01

    This study aimed at a more objective evaluation of the specimen after total mesorectal excision [14]. For this reason, a method yielding a simple stained preparation of the totally excised mesorectum was developed. By postoperative injection of 10 ml of an ink solution into the A. rectalis superior of 15 specimens, the arterial mesorectal vascular tree was filled. All specimens had been collected by means of total mesorectal excision. In two specimens, in which the mesorectal sheath fascia had been injured by the surgical manipulation, we observed leakage of ink from the mesorectum even during the injection. In three further specimens, some ink leakage in the form of dots occurred from small opened arterioles after the injection was performed. No ink leakage was observed in the remaining specimens. Prior to the ink injection, thirteen specimens were macroscopically tested and found intact. Three of the fifteen specimens exhibited minor lesions of the mesorectum that would not have been detected macroscopically without ink tagging. Comparison of the findings provided by the surgeon with the histopathological evaluation showed that the specimens in which no ink leakage occurred had an unimpaired mesorectal sheath fascia. These specimens correspond to complete excision of the mesorectum and removal of the tumor in a cancer-sealed package, as long as the circumferential rim of the specimen has not been infiltrated by the tumor. PMID:10829318

  19. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
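
    For context, the dimensionless parameter in the published Dickey-Holswade analysis is commonly written β = 2√2·π·r0·y0/(fλ), with r0 the input beam radius and y0 the target spot half-width; the sketch below evaluates it for a hypothetical system. All numeric values are assumptions, and the quality threshold in the comment is a rough rule of thumb rather than a figure from the patent.

        import math

        def beta(r0, y0, focal_length, wavelength):
            """Dimensionless design parameter for Gaussian-to-flat-top shaping,
            in the form commonly cited for the Dickey-Holswade design:
                beta = 2*sqrt(2)*pi * r0 * y0 / (f * lambda)
            r0: 1/e^2 input beam radius (m), y0: target spot half-width (m).
            Larger beta generally means a better-quality flat-top."""
            return 2.0 * math.sqrt(2.0) * math.pi * r0 * y0 / (focal_length * wavelength)

        # Hypothetical system: 2 mm beam radius, 0.5 mm target half-width,
        # 200 mm transform lens, 532 nm light.
        b = beta(2e-3, 0.5e-3, 0.2, 532e-9)
        print(f"beta = {b:.0f}")  # rule of thumb: beta of a few tens for a good flat-top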

  20. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features (see the sketch after this list):
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
    PMID:26150975
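
    A minimal sketch of the intermittent-update pattern the bullets describe, assuming a caller-supplied tessellation routine: volumes are advanced by a first-order extrapolation, and the exact Voronoi computation runs only when an error budget or a maximum interval is exceeded. The function is illustrative; `exact_volumes` is a placeholder for a real tessellation call, not a specific library API.

        import numpy as np

        def track_voronoi_volumes(position_frames, exact_volumes, tol=0.05, max_interval=10):
            """Advance per-particle Voronoi volumes by a linear approximation,
            recomputing the exact tessellation only intermittently.
            `exact_volumes(frame)` must return an ndarray of particle volumes."""
            vol = exact_volumes(position_frames[0])   # exact tessellation at t = 0
            rate = np.zeros_like(vol)                 # per-particle dV/dstep estimate
            accumulated = np.zeros_like(vol)          # crude error budget tracker
            since_exact = 0
            for frame in position_frames[1:]:
                vol = vol + rate                      # cheap linear update
                accumulated += np.abs(rate)
                since_exact += 1
                if since_exact >= max_interval or np.any(accumulated > tol * vol):
                    exact = exact_volumes(frame)      # intermittent exact update
                    rate = rate + (exact - vol) / since_exact  # finite-difference refresh
                    vol = exact
                    accumulated[:] = 0.0
                    since_exact = 0
            return vol

        # Toy demonstration: a fake "tessellation" whose volumes shrink linearly,
        # standing in for compression; frame indices play the role of positions.
        frames = list(range(100))
        fake_tess = lambda i: np.full(4, 10.0 - 0.05 * i)
        print(track_voronoi_volumes(frames, fake_tess))   # ~= fake_tess(99)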