Science.gov

Sample records for DEM method evaluation

  1. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    This paper studies methods for assessing the accuracy of a digital elevation model (DEM), with emphasis on robust methods. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics, (a) the mean, (b) the median, and (c) an M-estimator, are investigated for measuring interpolated DEM accuracy. Confidence intervals are also constructed for each average-error statistic to further evaluate DEM quality. The first method relies on Student's t-distribution, while the second and third are derived from robust statistical theory. These robust methods can counteract the effects of outliers, and even of skewed residual distributions, in DEM accuracy assessment. Monte Carlo simulation experiments examine the asymptotic convergence behavior of the confidence intervals constructed by the three methods as the sample size increases. The results demonstrate that the robust methods produce more reliable DEM accuracy assessments than the classical t-distribution-based method. Consequently, the proposed robust methods are strongly recommended for assessing DEM accuracy, particularly where the DEM residual population is clearly non-normal or heavily contaminated with outliers.
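
    The following sketch is not the authors' code; it simply illustrates the three average-error statistics named above on simulated residuals, with a Huber-type M-estimator standing in for the robust estimator and percentile-bootstrap confidence intervals standing in for the paper's analytically derived ones.

      import numpy as np

      rng = np.random.default_rng(0)

      def huber_m_estimate(x, k=1.345, tol=1e-6, max_iter=100):
          # Huber M-estimator of location via iteratively reweighted averaging.
          mu = np.median(x)
          scale = np.median(np.abs(x - mu)) / 0.6745
          if scale == 0.0:
              scale = 1.0
          for _ in range(max_iter):
              r = (x - mu) / scale
              w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
              mu_new = np.sum(w * x) / np.sum(w)
              if abs(mu_new - mu) < tol:
                  break
              mu = mu_new
          return mu

      def bootstrap_ci(x, stat, n_boot=2000, alpha=0.05):
          # Percentile-bootstrap confidence interval for a location statistic.
          vals = [stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
          return np.quantile(vals, [alpha / 2, 1 - alpha / 2])

      # Simulated residual population: mostly Gaussian, contaminated with outliers.
      residuals = np.concatenate([rng.normal(0.0, 0.5, 950), rng.normal(5.0, 2.0, 50)])

      for name, stat in [("mean", np.mean), ("median", np.median), ("M-estimator", huber_m_estimate)]:
          lo_ci, hi_ci = bootstrap_ci(residuals, stat)
          print(f"{name:12s} {stat(residuals):7.3f}   95% CI [{lo_ci:6.3f}, {hi_ci:6.3f}]")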

  2. Evaluation of DEM-assisted SAR coregistration

    NASA Astrophysics Data System (ADS)

    Nitti, D. O.; Hanssen, R. F.; Refice, A.; Bovenga, F.; Milillo, G.; Nutricato, R.

    2008-10-01

    Image alignment is without doubt the most crucial step in SAR Interferometry. Interferogram formation requires images to be coregistered with an accuracy of better than 1/8 pixel to avoid significant loss of phase coherence. Conventional interferometric precise coregistration methods for full-resolution SAR data (Single-Look Complex imagery, or SLC) are based on the cross-correlation of the SLC data, either in the original complex form or as squared amplitudes. Offset vectors in slant range and azimuth directions are computed on a large number of windows, according to the estimated correlation peaks. Then, a two-dimensional polynomial of a certain degree is usually chosen as warp function and the polynomial parameters are estimated through LMS fit from the shifts measured on the image windows. In case of rough topography and long baselines, the polynomial approximation for the warp function becomes inaccurate, leading to local misregistrations. Moreover, these effects increase with the spatial resolution and then with the sampling frequency of the sensor, as first results on TerraSAR-X interferometry confirm. An improved, DEM-assisted image coregistration procedure can be adopted for providing higher-order prediction of the offset vectors. Instead of estimating the shifts on a limited number of patches and using a polynomial approximation for the transformation, this approach computes pixel by pixel the correspondence between master and slave by using the orbital data and a reference DEM. This study assesses the performance of this approach with respect to the standard procedure. In particular, both analytical relationships and simulations will evaluate the impact of the finite vertical accuracy of the DEM on the final coregistration precision for different radar postings and relative positions of satellites. The two approaches are compared by processing real data at different carrier frequencies and using the interferometric coherence as quality figure.
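
    As a rough illustration of the conventional step described above (not of the DEM-assisted approach itself), the sketch below fits a two-dimensional polynomial warp function to window-based offset measurements by ordinary least squares; the window coordinates and offsets are synthetic stand-ins for cross-correlation results.

      import numpy as np

      rng = np.random.default_rng(1)

      def design_matrix(r, c, degree=2):
          # Bivariate polynomial terms up to the given total degree.
          cols = [r**i * c**j for i in range(degree + 1) for j in range(degree + 1 - i)]
          return np.column_stack(cols)

      # Synthetic window centres (normalised image coordinates) and measured offsets [pixels].
      r = rng.uniform(0, 1, 200)
      c = rng.uniform(0, 1, 200)
      dx = 0.3 + 0.8 * r - 0.2 * c + rng.normal(0, 0.02, r.size)        # slant-range offsets
      dy = -0.1 + 0.5 * c + 0.1 * r * c + rng.normal(0, 0.02, r.size)   # azimuth offsets

      A = design_matrix(r, c)
      coef_x, *_ = np.linalg.lstsq(A, dx, rcond=None)
      coef_y, *_ = np.linalg.lstsq(A, dy, rcond=None)

      # Predict the warp at an arbitrary image location.
      pt = design_matrix(np.array([0.25]), np.array([0.75]))
      print("predicted (dx, dy):", (pt @ coef_x).item(), (pt @ coef_y).item())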

  3. Pre-Conditioning Optimization Methods and Display for Mega-Pixel DEM Reconstructions

    NASA Astrophysics Data System (ADS)

    Sette, A. L.; DeLuca, E. E.; Weber, M. A.; Golub, L.

    2004-05-01

    The Atmospheric Imaging Assembly (AIA) for the Solar Dynamics Observatory will provide an unprecedented rate of mega-pixel solar corona data. This hastens the need for faster differential emission measure (DEM) reconstruction methods, as well as scientifically useful ways of displaying this information for mega-pixel datasets. We investigate pre-conditioning methods, which optimize DEM reconstruction by making an informed initial DEM guess that takes advantage of the sharing of DEM information among the pixels in an image. In addition, we evaluate the effectiveness of different DEM image display options, including single temperature emission maps and time-progression DEM movies. This work is supported under contract SP02D4301R to the Lockheed Martin Corp.

  4. Evaluating the Accuracy of DEM Generation Algorithms from UAV Imagery

    NASA Astrophysics Data System (ADS)

    Ruiz, J. J.; Diaz-Mas, L.; Perez, F.; Viguria, A.

    2013-08-01

    In this work we evaluated how the use of different positioning systems affects the accuracy of Digital Elevation Models (DEMs) generated from aerial imagery obtained with Unmanned Aerial Vehicles (UAVs). In this domain, state-of-the-art DEM generation algorithms are affected by the errors typical of GPS/INS devices in the position measurement associated with each acquired image; the deviations between these measurements and the true positions are on the order of meters. The experiments were carried out using a small quadrotor in the indoor testbed at the Center for Advanced Aerospace Technologies (CATEC). This testbed houses a system that is able to track small markers mounted on the UAV and placed around the scenario with millimeter precision. This provides very precise position measurements, to which we can add random noise to simulate the errors of different GPS receivers. The results showed that the final DEM accuracy clearly depends on the quality of the positioning information.
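
    A minimal sketch of the noise-injection idea (with made-up numbers, not the CATEC data): precise tracked positions are perturbed with zero-mean Gaussian noise at several assumed standard deviations to emulate GPS receivers of different quality.

      import numpy as np

      rng = np.random.default_rng(7)

      positions = rng.uniform(0, 10, size=(100, 3))    # millimetre-accurate camera positions [m]
      for sigma in (0.5, 1.0, 3.0):                     # assumed GPS error levels [m]
          noisy = positions + rng.normal(0.0, sigma, positions.shape)
          rmse = np.sqrt(np.mean(np.sum((noisy - positions) ** 2, axis=1)))
          print(f"sigma = {sigma:.1f} m -> 3-D RMSE of injected error: {rmse:.2f} m")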

  5. Evaluating Error of LiDAR Derived DEM Interpolation for Vegetation Area

    NASA Astrophysics Data System (ADS)

    Ismail, Z.; Khanan, M. F. Abdul; Omar, F. Z.; Rahman, M. Z. Abdul; Mohd Salleh, M. R.

    2016-09-01

    Light Detection and Ranging (LiDAR) data are a primary source for deriving digital terrain models, and the resulting Digital Elevation Model (DEM) can be used within a Geographical Information System (GIS). The aim of this study is to evaluate the accuracy of LiDAR-derived DEMs generated with different interpolation methods and for different slope classes. Initially, the study area is divided into three slope classes: (a) slope class one (0° - 5°), (b) slope class two (6° - 10°) and (c) slope class three (11° - 15°). Secondly, each slope class is tested using three distinct interpolation methods: (a) Kriging, (b) Inverse Distance Weighting (IDW) and (c) Spline. Next, accuracy assessment is carried out against field-survey tachymetry data. The findings reveal that for the oil palm area the overall Root Mean Square Error (RMSE) for Kriging is the lowest, at 0.727 m for both the 0.5 m and 1 m spatial resolutions, followed by Spline with 0.734 m at 0.5 m resolution and 0.747 m at 1 m resolution; IDW gives the highest RMSE of 0.784 m at both resolutions. For the rubber area, Spline provides the lowest RMSE, 0.746 m at 0.5 m resolution and 0.760 m at 1 m resolution, IDW the highest at 1.061 m for both resolutions, and Kriging 0.790 m for both resolutions.
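
    A small sketch (with synthetic points, not the study data) of the evaluation loop above: interpolate scattered ground points with inverse distance weighting (IDW) and compute the RMSE against held-out check points; Kriging or Spline interpolators would slot into the same loop.

      import numpy as np

      rng = np.random.default_rng(3)

      def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
          # Inverse-distance-weighted interpolation at the query locations.
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          w = 1.0 / (d**power + eps)
          return (w @ z_known) / w.sum(axis=1)

      # Synthetic terrain samples standing in for LiDAR ground returns.
      xy = rng.uniform(0, 100, size=(500, 2))
      z = 0.05 * xy[:, 0] + 2.0 * np.sin(xy[:, 1] / 15.0)

      # Hold out some points to play the role of tachymetry check points.
      check_xy, check_z = xy[:50], z[:50]
      train_xy, train_z = xy[50:], z[50:]

      z_pred = idw(train_xy, train_z, check_xy)
      rmse = np.sqrt(np.mean((z_pred - check_z) ** 2))
      print(f"IDW check-point RMSE: {rmse:.3f} m")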

  6. An improved method to represent DEM uncertainty in glacial lake outburst flood propagation using stochastic simulations

    NASA Astrophysics Data System (ADS)

    Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan

    2015-10-01

    Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results based on the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those based on the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those based on a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating the potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
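
    The sketch below is a toy version of the Monte Carlo least-cost-path idea (not the published MC-LCP code): on each realisation the DEM is perturbed with random vertical error, a least-cost path is routed across the perturbed surface, and the frequency with which each cell is crossed is accumulated. The cost model, noise level and grid are illustrative assumptions.

      import heapq
      import numpy as np

      rng = np.random.default_rng(11)

      def least_cost_path(cost, start, end):
          # Dijkstra over a 4-connected grid; returns the set of cells on the path.
          rows, cols = cost.shape
          dist = np.full(cost.shape, np.inf)
          prev = {}
          dist[start] = 0.0
          pq = [(0.0, start)]
          while pq:
              d, (r, c) = heapq.heappop(pq)
              if (r, c) == end:
                  break
              if d > dist[r, c]:
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + cost[nr, nc]
                      if nd < dist[nr, nc]:
                          dist[nr, nc] = nd
                          prev[(nr, nc)] = (r, c)
                          heapq.heappush(pq, (nd, (nr, nc)))
          cell, path = end, {end}
          while cell != start:
              cell = prev[cell]
              path.add(cell)
          return path

      dem = np.add.outer(np.linspace(40, 0, 60), np.linspace(0, 5, 80))  # synthetic sloping valley
      start, end = (0, 0), (59, 79)
      sigma = 2.0                                    # assumed DEM vertical error [m]
      hits = np.zeros_like(dem)

      for _ in range(100):                           # Monte Carlo realisations
          perturbed = dem + rng.normal(0.0, sigma, dem.shape)
          cost = perturbed - perturbed.min() + 1.0   # simple elevation-based traversal cost
          for cell in least_cost_path(cost, start, end):
              hits[cell] += 1

      print("cells crossed in more than half of the realisations:", int((hits > 50).sum()))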

  7. Stochastic Discrete Equation Method (sDEM) for two-phase flows

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.; Rodio, M.G.

    2015-10-15

    A new scheme for the numerical approximation of a five-equation model taking into account Uncertainty Quantification (UQ) is presented. In particular, the Discrete Equation Method (DEM) for the discretization of the five-equation model is modified for including a formulation based on the adaptive Semi-Intrusive (aSI) scheme, thus yielding a new intrusive scheme (sDEM) for simulating stochastic two-phase flows. Some reference test-cases are performed in order to demonstrate the convergence properties and the efficiency of the overall scheme. The propagation of initial conditions uncertainties is evaluated in terms of mean and variance of several thermodynamic properties of the two phases.

  8. Numerical Simulation of High Velocity Impact Phenomenon by the Distinct Element Method (DEM)

    NASA Astrophysics Data System (ADS)

    Tsukahara, Y.; Matsuo, A.; Tanaka, K.

    2007-12-01

    Continuous-DEM (Distinct Element Method) for impact analysis is proposed in this paper. Continuous-DEM is based on the DEM (Distinct Element Method) and the idea of continuum theory. Numerical simulations of impacts between a SUS 304 projectile and a concrete target have been performed using the proposed method. The results agreed quantitatively with the impedance matching method. Experimental elastic-plastic behavior with compression and rarefaction waves under plate impact was also qualitatively reproduced, matching the results from AUTODYN®.

  9. Evaluation of on-line DEMs for flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Sanders, Brett F.

    2007-08-01

    Recent and highly accurate topographic data should be used for flood inundation modeling, but this is not always feasible given time and budget constraints, so the utility of several on-line digital elevation models (DEMs) is examined here with a set of steady and unsteady test problems. DEMs are used to parameterize a 2D hydrodynamic flood simulation algorithm, and predictions are compared with published flood maps and observed flood conditions. DEMs based on airborne light detection and ranging (LiDAR) are preferred because of their horizontal resolution, vertical accuracy (˜0.1 m) and the ability to separate bare earth from built structures and vegetation. DEMs based on airborne interferometric synthetic aperture radar (IfSAR) have good horizontal resolution, but the gridded elevations reflect built structures and vegetation, so further processing may be required to permit flood modeling. IfSAR and Shuttle Radar Topography Mission (SRTM) DEMs suffer from radar speckle, or noise, so flood plains may appear with non-physical relief and predicted flood zones may include non-physical pools. DEMs based on national elevation data (NED) are remarkably smooth in comparison to IfSAR and SRTM, but with NED, flood predictions overestimate flood extent in comparison to all other DEMs including LiDAR, the most accurate. This study highlights the utility of SRTM as a global source of terrain data for flood modeling.

  10. A New DEM Generalization Method Based on Watershed and Tree Structure

    PubMed Central

    Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang

    2016-01-01

    DEM generalization is the basis of multi-scale terrain representation and analysis, and is also central to building multi-scale geographic databases; consequently, many researchers have studied both the theory and the methods of DEM generalization. This paper proposes a new terrain generalization method that extracts feature points based on a tree model which accounts for the nested relationship of watershed characteristics. Using the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data, feature points were extracted in every single watershed to reconstruct the DEM. Generalization from a 1:10,000 DEM to a 1:50,000 DEM was achieved by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparing it with some other classic methods, such as aggregation, resampling, and VIP, against the original 1:50,000 DEM. The outcome shows that the method performs well. The method can choose the best threshold according to the target generalization scale to decide the density of feature points in the watershed, and it preserves the skeleton of the terrain, meeting the needs of different levels of generalization. Additionally, through overlapped contour comparison, elevation statistical parameters and slope and aspect analysis, we found that the W8D algorithm performs well and effectively in terrain representation. PMID:27517296

  12. Evaluation of DEM generation accuracy from UAS imagery

    NASA Astrophysics Data System (ADS)

    Santise, M.; Fornari, M.; Forlani, G.; Roncella, R.

    2014-06-01

    The growing use of UAS platforms for aerial photogrammetry comes with a new family of highly automated Computer Vision processing software expressly built to manage the peculiar characteristics of these image blocks. It is therefore of interest to photogrammetrists and professionals to find out whether the image orientation and DSM generation methods implemented in such software are reliable and whether the DSMs and orthophotos are accurate. On a more general basis, it is interesting to figure out whether it is still worth applying the standard rules of aerial photogrammetry to the case of drones, achieving the same inner strength and the same accuracies as well. With such goals in mind, a test area has been set up at the University Campus in Parma. A large number of ground points have been measured on natural as well as signalized points, to provide a comprehensive test field for checking the accuracy performance of different UAS systems. In the test area, points at ground level as well as features on building roofs were measured, in order to provide well-distributed vertical control as well. Control points were set on different types of surfaces (buildings, asphalt, targets, grass fields and bumps); break lines were also employed. The paper presents the results of a comparison between two different surveys for DEM (Digital Elevation Model) generation, performed at 70 m and 140 m flying height, using a Falcon 8 UAS.

  13. Open-Source Digital Elevation Model (DEMs) Evaluation with GPS and LiDAR Data

    NASA Astrophysics Data System (ADS)

    Khalid, N. F.; Din, A. H. M.; Omar, K. M.; Khanan, M. F. A.; Omar, A. H.; Hamid, A. I. A.; Pa'suya, M. F.

    2016-09-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), the Shuttle Radar Topography Mission (SRTM), and the Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are freely available Digital Elevation Model (DEM) datasets for environmental modeling and studies. The spatial resolution and vertical accuracy of the DEM data source have a great influence on applications, particularly inundation mapping. Most coastal inundation risk studies use these publicly available DEMs to estimate the extent of coastal inundation and the associated damage, especially to human populations, under sea-level rise. In this study, ground truth data from Global Positioning System (GPS) observations are compared with each DEM to evaluate its accuracy. SRTM shows better vertical accuracy than ASTER GDEM and GMTED2010, with an RMSE of 6.054 m. In addition to the accuracy, the correlation with the ground truth is assessed; SRTM has the highest coefficient of determination, 0.912. For the coastal zone, a DEM based on an airborne light detection and ranging (LiDAR) dataset was used as ground truth for terrain height. In this case, the LiDAR DEM is compared against a new SRTM DEM obtained after applying a scale factor. The findings show that the accuracy of the new SRTM-based model can be improved by applying the scale factor, with the RMSE reduced to 0.503 m. Hence, this new model is the most suitable and meets the accuracy requirement for coastal inundation risk assessment using open-source data. Assessing the suitability of these datasets for further analysis in coastal management studies is vital for identifying areas potentially vulnerable to coastal inundation.
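
    The basic accuracy figures quoted above reduce to a short computation; the sketch below uses placeholder heights rather than the actual GPS observations and DEM samples.

      import numpy as np

      gps_h = np.array([12.3, 25.1, 8.7, 31.4, 19.9, 44.2, 5.6, 27.8])   # GPS heights [m] (illustrative)
      dem_h = np.array([14.0, 23.5, 10.2, 33.8, 18.1, 47.0, 7.9, 25.6])  # DEM heights at the same points [m]

      diff = dem_h - gps_h
      rmse = np.sqrt(np.mean(diff ** 2))
      r = np.corrcoef(gps_h, dem_h)[0, 1]   # coefficient of determination is r squared
      print(f"RMSE = {rmse:.3f} m, R^2 = {r**2:.3f}")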

  14. Development and Evaluation of Simple Measurement System Using the Oblique Photo and DEM

    NASA Astrophysics Data System (ADS)

    Nonaka, H.; Sasaki, H.; Fujimaki, S.; Naruke, S.; Kishimoto, H.

    2016-06-01

    When a disaster occurs, we must grasp and evaluate the damage as soon as possible, and we often try to estimate it from some kind of photograph, such as surveillance camera imagery, satellite imagery, photographs taken from a helicopter and so on. Especially in the initial stage, a quick, approximate estimate of the damage is more important than a lengthy, detailed investigation. One source of damage information is the imagery taken by surveillance cameras, satellite sensors and helicopters. If we can measure targets in this imagery, we can estimate the length of a lava flow, the reach of cinders, or the sediment volume of a volcanic eruption or landslide. Therefore, in order to measure such information quickly, we developed a simplified measurement system that uses these photographs. This system requires a DEM in addition to the photographs, but it is possible to use a previously acquired DEM. To measure an object, only two steps are required. One is the determination of the position and attitude from which the photograph was shot; these parameters are determined using the DEM. The other step is the measurement of the object in the photograph. In this paper, we describe this system and show experimental results to evaluate it. In this experiment we measured the summit of Mt. Usu using the two measurement methods of this system. The measurement took about one hour, and the differences between the measurement results and the airborne LiDAR data are less than 10 meters.

  15. Structural and Volumetric re-evaluation of the Vaiont landslide using DEM techniques

    NASA Astrophysics Data System (ADS)

    Superchi, Laura; Pedrazzini, Andrea; Floris, Mario; Genevois, Rinaldo; Ghirotti, Monica; Jaboyedoff, Michel

    2010-05-01

    On the 9th of October 1963 a catastrophic landslide occurred on the southern slope of the Vaiont dam reservoir. A mass of approximately 270 million m3 collapsed into the reservoir, generating a wave which overtopped the dam and hit the town of Longarone and other villages: almost 2000 people lost their lives. The large volume and high velocity of the landslide, combined with the great destruction and loss of life that occurred, make the Vaiont landslide a natural laboratory for investigating landslide failure mechanisms and propagation. Geological, structural, geomorphological, hydrogeological and geomechanical elements should therefore be re-analyzed using methods and techniques not available in the '60s. In order to better quantify the volume involved in the movement and to assess the failure mechanism, a structural study is a preliminary and necessary step. The structural features have been investigated based on a digital elevation model (DEM) of the pre- and post-landslide topography at a pixel size of 5 m and associated software (COLTOP-3D) used to create a colored shaded-relief map revealing the orientation of morphological features. The results allowed identification, on both the pre- and post-slide surfaces, of six main discontinuity sets, some of which directly influence the Vaiont landslide morphology. Recent and old field surveys allowed validation of the COLTOP-3D analysis results. To estimate the location and shape of the sliding surface and to evaluate the volume of the landslide, the SLBL (Sloping Local Base Level) method has been used, a simple and efficient tool that allows a geometric interpretation of the failure surface based on a DEM. The SLBL application required a geological interpretation to define the contours of the landslide and to estimate the possible curvature of the sliding surface, which is defined by interpolating between points considered as limits of the landslide. The SLBL surface of the Vaiont landslide was obtained from the DEM reconstruction

  16. DEM-based Watershed Delineation - Comparison of Different Methods and applications

    NASA Astrophysics Data System (ADS)

    Chu, X.; Zhang, J.; Tahmasebi Nasab, M.

    2015-12-01

    Digital elevation models (DEMs) are commonly used for large-scale watershed hydrologic and water quality modeling. With the aid of the latest LiDAR technology, sub-meter scale DEM data are often available for many areas in the United States. Precise characterization of the detailed variations in surface microtopography using such high-resolution DEMs is crucial to the related watershed modeling. Various methods have been developed to delineate a watershed, including determination of flow directions and accumulations, identification of subbasin boundaries, and calculation of the relevant topographic parameters. The objective of this study is to examine different DEM-based watershed delineation methods by comparing their unique features and the discrepancies in their results. This study covers not only the traditional watershed delineation methods but also a new puddle-based unit (PBU) delineation method. The specific topics and issues to be presented involve flow directions (the D8 single flow direction method vs. multi-direction methods), segmentation of stream channels, drainage systems (a single "depressionless" drainage network vs. a hierarchical depression-dominated drainage system), and hydrologic connectivity (static structural connectivity vs. dynamic functional connectivity). A variety of real topographic surfaces are selected and delineated using the selected methods. Comparison of the delineation results emphasizes the importance of method selection and highlights the methods' applicability and potential impacts on watershed modeling.
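
    As a small illustration of the first delineation step listed above, the sketch below computes D8 single flow directions on a toy grid: each cell drains to the steepest-descending of its eight neighbours (encoded 0-7 here; flat or pit cells get -1). Depression handling and flow accumulation would follow in a full workflow.

      import numpy as np

      def d8_flow_direction(dem, cellsize=1.0):
          # Neighbour offsets, clockwise from north; the list index is the direction code.
          offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
          rows, cols = dem.shape
          fdir = np.full(dem.shape, -1, dtype=int)
          for r in range(rows):
              for c in range(cols):
                  best_slope, best_k = 0.0, -1
                  for k, (dr, dc) in enumerate(offsets):
                      nr, nc = r + dr, c + dc
                      if 0 <= nr < rows and 0 <= nc < cols:
                          slope = (dem[r, c] - dem[nr, nc]) / (cellsize * np.hypot(dr, dc))
                          if slope > best_slope:
                              best_slope, best_k = slope, k
                  fdir[r, c] = best_k
          return fdir

      dem = np.array([[9., 8., 7.],
                      [8., 6., 5.],
                      [7., 5., 3.]])
      print(d8_flow_direction(dem))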

  17. A Review of Discrete Element Method (DEM) Particle Shapes and Size Distributions for Lunar Soil

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Metzger, Philip T.; Wilkinson, R. Allen

    2010-01-01

    As part of ongoing efforts to develop models of lunar soil mechanics, this report reviews two topics that are important to discrete element method (DEM) modeling of the behavior of soils (such as lunar soils): (1) methods of modeling particle shapes and (2) analytical representations of particle size distribution. The choice of particle shape complexity is driven primarily by opposing tradeoffs with the total number of particles, computer memory, and total simulation processing time. The choice is also dependent on available DEM software capabilities. For example, PFC2D/PFC3D and EDEM support clustering of spheres; MIMES incorporates superquadric particle shapes; and BLOKS3D provides polyhedral shapes. Most commercial and custom DEM software supports some type of complex particle shape beyond the standard sphere. Convex polyhedra, clusters of spheres and single parametric particle shapes such as the ellipsoid, polyellipsoid, and superquadric are all motivated by the desire to introduce asymmetry into the particle shape, as well as edges and corners, in order to better simulate actual granular particle shapes and behavior. An empirical particle size distribution (PSD) formula is shown to fit desert sand data from Bagnold. Particle size data for JSC-1a, obtained from a fine particle analyzer at the NASA Kennedy Space Center, are also fitted to a similar empirical PSD function.
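
    The report's exact PSD formula is not reproduced above, so the sketch below uses a Rosin-Rammler (Weibull) cumulative form as a stand-in and fits it to made-up sieve-style data; it only illustrates how such an empirical PSD fit is performed.

      import numpy as np
      from scipy.optimize import curve_fit

      def rosin_rammler(d, d63, n):
          # Cumulative mass fraction passing a sieve of size d (Weibull form).
          return 1.0 - np.exp(-((d / d63) ** n))

      # Illustrative sieve sizes [micrometres] and cumulative fractions passing.
      d = np.array([10, 20, 45, 75, 106, 150, 250, 425, 850], dtype=float)
      passing = np.array([0.04, 0.10, 0.26, 0.45, 0.60, 0.74, 0.90, 0.97, 1.00])

      (d63_fit, n_fit), _ = curve_fit(rosin_rammler, d, passing, p0=(100.0, 1.0))
      print(f"fitted d63 = {d63_fit:.1f} um, spread exponent n = {n_fit:.2f}")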

  18. DEM Extraction from Worldview-3 Stereo-Images and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Hu, F.; Gao, X. M.; Li, G. Y.; Li, M.

    2016-06-01

    This paper validates the potential of Worldview-3 satellite images for large-scale topographic mapping, using Worldview-3 along-track stereo-images of the Yi Mountain area in Shandong Province, China, for DEM extraction and accuracy evaluation. Firstly, eighteen accurate and evenly distributed GPS points are collected in the field and used as GCPs/check points, their image points are accurately measured, and tie points are extracted by image matching; then, RFM-based block adjustment is carried out to compensate the systematic error in image orientation, and the geo-positioning accuracy is calculated and analysed; next, for the two stereo-pairs of the block, DSMs are constructed separately and mosaicked into a single surface, and the corresponding DEM is subsequently generated; finally, by comparison with check points selected from a high-precision airborne LiDAR point cloud covering the same test area, the accuracy of the generated DEM with 2-meter grid spacing is evaluated using the maximum (max.), minimum (min.), mean and standard deviation (std.) of the elevation biases. For the Worldview-3 stereo-images used in our research, the planimetric accuracy without GCPs is about 2.16 m (mean error) with 0.55 m standard deviation, which is superior to the nominal value, while the vertical accuracy is about -1.61 m (mean error) with 0.49 m standard deviation; with a small number of GCPs located in the center and four corners of the test area, the systematic error can be well compensated. The standard deviation of the elevation biases between the generated DEM and the 7256 LiDAR check points is about 0.62 m. Considering the potential uncertainties in image point measurement, stereo matching and elevation editing, the accuracy of DEMs generated from Worldview-3 stereo-images should be even better. Judging from the results, Worldview-3 has the potential for 1:5000 or even larger scale mapping applications.

  19. High-resolution Pleiades DEMs and improved mapping methods for the E-Corinth marine terraces

    NASA Astrophysics Data System (ADS)

    de Gelder, Giovanni; Fernández-Blanco, David; Delorme, Arthur; Jara-Muñoz, Julius; Melnick, Daniel; Lacassin, Robin; Armijo, Rolando

    2016-04-01

    The newest generation of satellite imagery provides exciting new possibilities for highly detailed mapping, with ground resolution of sub-metric pixels and absolute accuracy within a few meters. This opens new avenues for the analysis of geologic and geomorphic landscape features, especially since photogrammetric methods allow the extraction of detailed topographic information from these satellite images. We used tri-stereo imagery from the Pleiades platform of the CNES in combination with Euclidium software for image orientation, and MicMac software for dense matching, to develop state-of-the-art, 2 m-resolution digital elevation models (DEMs) for eight areas in Greece. Here, we present our mapping results for an area in the eastern Gulf of Corinth, which contains one of the most extensive and well-preserved flights of marine terraces world-wide. The spatial extent of the terraces has been determined by an iterative combination of an automated surface classification model for terrain slope and roughness, and qualitative assessment of satellite imagery, DEM hillshade maps, slope maps, as well as detailed topographic analyses of profiles and contours. We determined marine terrace shoreline angles by means of swath profiles that run perpendicular to the paleo-seacliffs, using the graphical interface TerraceM. Our analysis provided minimum and maximum estimates of the paleoshoreline location on ~750 swath profiles, using the present-day cliff slope as an approximation for its paleo-cliff counterpart. After correlating the marine terraces laterally we obtained 16 different terrace levels, recording Quaternary sea-level highstands of both major interglacial and several interstadial periods. Our high-resolution Pleiades DEMs and improved method for paleoshoreline determination allowed us to produce a marine terrace map of unprecedented detail, containing more terrace sub-levels than hitherto. Our mapping demonstrates that we are no longer limited by the

  20. Sensitivity of watershed attributes to spatial resolution and interpolation method of LiDAR DEMs in three distinct landscapes

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.; Jamieson, R.; Sterling, S.

    2014-03-01

    This study investigates scaling relationships of watershed area and stream networks delineated from LiDAR DEMs. The delineations are tested against spatial resolution, including 1, 5, 10, 25, and 50 m, and interpolation method, including Inverse Distance Weighting (IDW), Moving Average (MA), Universal Kriging (UK), Natural Neighbor (NN), and Triangular Irregular Networks (TIN). Study sites include Mosquito Creek, Scotty Creek, and Thomas Brook, representing landscapes with high, low, and moderate change in elevation, respectively. Results show scale-dependent irregularities in watershed area due to spatial resolution at Thomas Brook and Mosquito Creek. The highest sensitivity of watershed area to spatial resolution occurred at Scotty Creek, due to high incidence of LiDAR sensor measurement error and subtle changes in elevation. Length of drainage networks did not show a scaling relationship with spatial resolution, due to algorithmic complications of the stream initiation threshold. Stream lengths of main channels at Thomas Brook and Mosquito Creek displayed systematic increases in length with increasing spatial resolution, described through an average fractal dimension of 1.059. The scaling relationship between stream length and DEM resolution allows estimation of stream lengths from low-resolution DEMs in the absence of high-resolution DEMs. Single stream validation at Thomas Brook showed the 1 m DEM produced the lowest length error and highest spatial accuracy, at 3.7% and 71.3%, respectively. Single stream validation at Mosquito Creek showed the 25 m DEM produced the lowest length error, and the 1 m DEM the highest spatial accuracy, at 0.6% and 61.0%, respectively.
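
    A fractal dimension such as the 1.059 quoted above can be recovered from a log-log regression of measured stream length against grid spacing; for a divider-type relation L ~ G**(1 - D) the regression slope equals (1 - D). The lengths below are invented solely to illustrate the computation.

      import numpy as np

      grid_spacing = np.array([1.0, 5.0, 10.0, 25.0, 50.0])            # DEM resolution [m]
      stream_length = np.array([10500., 9600., 9200., 8700., 8300.])   # measured channel length [m]

      slope, intercept = np.polyfit(np.log(grid_spacing), np.log(stream_length), 1)
      fractal_dimension = 1.0 - slope
      print(f"estimated fractal dimension D = {fractal_dimension:.3f}")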

  1. Mechanical behavior modeling of sand-rubber chips mixtures using discrete element method (DEM)

    NASA Astrophysics Data System (ADS)

    Eidgahee, Danial Rezazadeh; Hosseininia, Ehsan Seyedi

    2013-06-01

    Rubber shreds mixed with sandy soils are widely used for geotechnical purposes due to their controlled compressibility characteristics and light weight. Various studies have examined the sand and rubber chip contents needed to restrain the compressibility of the mass in different structures such as backfills, road embankments, etc. By considering different rubber contents, sand-rubber mixtures can be produced whose mechanical properties change with the blend. The aim of this paper is to study the effect of adding different rubber portions on the global engineering properties of the mixtures. This study is performed using the Discrete Element Method (DEM). The simulations showed that adding rubber up to a particular fraction can improve the maximum bearing stress characteristics compared to sand-only masses. Taking the difference between sand and rubber stiffness into account, the interpretation of the results can be extended to other mixtures of soft and rigid particles, such as powders or polymers.

  2. Evaluating SRTM and ASTER DEM accuracy for the broader area of Sparti, Greece

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Tsombos, Panagiotis I.; Zervakou, Alexandra

    2007-10-01

    One of the major projects of the Institute of Geology & Mineral Exploration (IGME) is called "Urban Geology". In the frame of that project there is a need for a high-accuracy DEM covering the whole country. The DEM would be used for the orthorectification of high-resolution images and other applications such as slope map creation, environmental planning, etc. ASTER and SRTM are two possible sources of DEMs covering the whole country. According to the specifications, the vertical accuracy of the ASTER DEM is about 20 m with 95% confidence, while the horizontal geolocation accuracy appears to be better than 50 m. More recent studies have shown that the use of GCPs results in a planimetric accuracy of 15 m and a near-pixel-size vertical accuracy. The Shuttle Radar Topography Mission (SRTM) used an Interferometric Synthetic Aperture Radar (IFSAR) instrument to produce a near-global digital elevation map of the earth's land surface with 16 m absolute vertical height accuracy at 30-meter postings. An SRTM 3-arc-second product (90 m resolution) is available for the entire world. In this paper we examine the accuracy of SRTM and ASTER DEMs in comparison to the accuracy of the 1:5000 topographic maps. The area of study is the broader area of Sparti, Greece. After an initial check for random or systematic errors, a statistical analysis was done. A DEM derived from digitized contours of the 1:5000 topographic maps was created and compared with the ASTER and SRTM derived DEMs. Fifty-five points of known elevation were used to estimate the accuracy of these three DEMs. Slope and aspect maps were created and compared, and the elevation differences between the three DEMs were calculated. 2D RMSE, correlation and percentile values were also computed. The three DEMs were finally used for the orthorectification of very high resolution data and the resulting orthophotos were compared.

  3. Evaluation of TanDEM-X elevation data for geomorphological mapping and interpretation in high mountain environments - A case study from SE Tibet, China

    NASA Astrophysics Data System (ADS)

    Pipaud, Isabel; Loibl, David; Lehmkuhl, Frank

    2015-10-01

    Digital elevation models (DEMs) are a prerequisite for many different applications in the field of geomorphology. In this context, the two near-global medium resolution DEMs originating from the SRTM and ASTER missions are widely used. For detailed geomorphological studies, particularly in high mountain environments, these datasets are, however, known to have substantial disadvantages beyond their posting, i.e., data gaps and miscellaneous artifacts. The upcoming TanDEM-X DEM is a promising candidate to improve this situation by application of state-of-the-art radar technology, exhibiting a posting of 12 m and less proneness to errors. In this study, we present a DEM processed from a single TanDEM-X CoSSC scene, covering a study area in the extreme relief of the eastern Nyainqêntanglha Range, southeastern Tibet. The potential of the resulting experimental TanDEM-X DEM for geomorphological applications was evaluated by geomorphometric analyses and an assessment of landform cognoscibility and artifacts in comparison to the ASTER GDEM and the recently released SRTM 1″ DEM. Detailed geomorphological mapping was conducted for four selected core study areas in a manual approach, based exclusively on the TanDEM-X DEM and its basic derivates. The results show that the self-processed TanDEM-X DEM yields a detailed and widely consistent landscape representation. It thus fosters geomorphological analysis by visual and quantitative means, allowing delineation of landforms down to footprints of ~ 30 m. Even in this premature state, the TanDEM-X elevation data are widely superior to the ASTER and SRTM datasets, primarily owing to its significantly higher resolution and its lower susceptibility to artifacts that hamper landform interpretation. Conversely, challenges toward interferometric DEM generation were identified, including (i) triangulation facets and missing topographic information resulting from radar layover on steep slopes facing toward the radar sensor, (ii) low

  4. Development of a coupled discrete element (DEM)-smoothed particle hydrodynamics (SPH) simulation method for polyhedral particles

    NASA Astrophysics Data System (ADS)

    Nassauer, Benjamin; Liedke, Thomas; Kuna, Meinhard

    2016-03-01

    In the present paper, the direct coupling of a discrete element method (DEM) with polyhedral particles and smoothed particle hydrodynamics (SPH) is presented. The two simulation techniques are fully coupled in both ways through interaction forces between the solid DEM particles and the fluid SPH particles. Thus this simulation method provides the possibility to simulate the individual movement of polyhedral, sharp-edged particles as well as the flow field around these particles in fluid-saturated granular matter which occurs in many technical processes e.g. wire sawing, grinding or lapping. The coupled method is exemplified and validated by the simulation of a particle in a shear flow, which shows good agreement with analytical solutions.

  5. The Discrete Equation Method (DEM) for Fully Compressible Two-Phase Flows in Ducts of Spatially Varying Cross-Section

    SciTech Connect

    Ray A. Berry; Richard Saurel; Tamara Grimmett

    2009-07-01

    Typically, multiphase modeling begins with an averaged (or homogenized) system of partial differential equations (traditionally ill-posed) and then discretizes this system to form a numerical scheme. Assuming that the ill-posedness problem is avoided by using a well-posed formulation such as the seven-equation model, this still presents problems for the numerical approximation of non-conservative terms at discontinuities (interfaces, shocks) as well as an unwieldy treatment of fluxes with seven waves. To solve interface problems without conservation errors and to avoid this questionable determination of average variables and the numerical approximation of the non-conservative terms associated with two-velocity mixture flows, we employ a new homogenization method known as the Discrete Equations Method (DEM). Contrary to conventional methods, the averaged equations for the mixture are not used; this method directly obtains a (well-posed) discrete equation system from the single-phase system to produce a numerical scheme which accurately computes fluxes for arbitrary numbers of phases and handles non-conservative products. The method effectively uses a sequence of single-phase Riemann problem solutions, with phase interactions accounted for by Riemann solvers at each interface. Flow topology can change with changing expressions for the fluxes, non-conservative terms are correctly approximated, and some of the closure relations missing from the traditional approach are obtained automatically. Lastly, we can often identify the continuous equation system, resulting from taking the continuous limit with weak-wave assumptions, of the discrete equations, which can be very useful from a theoretical standpoint. As a first step toward implicit integration of the DEM method in multiple dimensions, in this paper we construct a DEM model for the flow of two compressible phases in 1-D ducts of spatially varying cross-section to test this approach. To relieve time step size restrictions due to

  6. The influence of accuracy, grid size, and interpolation method on the hydrological analysis of LiDAR derived DEMs: Seneca Nation of Indians, Irving NY

    NASA Astrophysics Data System (ADS)

    Clarkson, Brian W.

    Light Detection and Ranging (LiDAR) derived Digital Elevation Models (DEMs) provide accurate, high-resolution digital surfaces for precise topographic analysis. The following study investigates the accuracy of LiDAR derived DEMs by calculating the Root Mean Square Error (RMSE) of multiple interpolation methods with grid cells ranging from 0.5 to 10 meters. A raster cell with smaller dimensions drastically increases the amount of detail represented in the DEM by increasing the number of elevation values across the study area. Increased horizontal resolution raises the accuracy of the interpolated surfaces and of the contours generated from the digitized landscapes. As the raster grid cells decrease in size, the level of detail of hydrological processes significantly improves compared to coarser resolutions, including the publicly available National Elevation Datasets (NEDs). Utilizing the LiDAR derived DEM with the lowest RMSE as the 'ground truth', watershed boundaries were delineated for a sub-basin of the Clear Creek Watershed within the territory of the Seneca Nation of Indians located in Southern Erie County, NY. An investigation of the watershed area and boundary location revealed considerable differences when comparing the results of applying different interpolation methods to DEM datasets of different horizontal resolutions. Stream networks coupled with watersheds were used to calculate peak flow values for the 10-meter NEDs and the LiDAR derived DEMs.

  7. Effective Thermal Property Estimation of Unitary Pebble Beds Based on a CFD-DEM Coupled Method for a Fusion Blanket

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Youhua; Huang, Kai; Liu, Songlin

    2015-12-01

    Lithium ceramic pebble beds have been considered in the solid blanket design for fusion reactors. To characterize the thermal performance of the fusion solid blanket, studies of the effective thermal properties of the pebble beds, i.e. the effective thermal conductivity and the heat transfer coefficient, are necessary. In this paper, a 3D computational fluid dynamics-discrete element method (CFD-DEM) coupled numerical model is proposed to simulate heat transfer and thereby estimate the effective thermal properties. The DEM is applied to produce a geometric topology of a prototypical blanket pebble bed by directly simulating the contact state of each individual particle using basic interaction laws. Based on this geometric topology, a CFD model is built to analyze the temperature distribution and obtain the effective thermal properties. The current numerical model is shown to be in good agreement with the existing experimental data for effective thermal conductivity available in the literature. This work was supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy of China (Nos. 2013GB108004, 2015GB108002, 2014GB122000 and 2014GB119000), and the National Natural Science Foundation of China (No. 11175207).

  8. ASTER DEM performance

    USGS Publications Warehouse

    Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.

    2005-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability, using its near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one for nadir viewing and another for backward viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out separately by the Japanese and U.S. science teams using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m; this appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when GCPs are used to generate the stereo data. © 2005 IEEE.

  9. A hierarchical pyramid method for managing large-scale high-resolution drainage networks extracted from DEM

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin

    2015-12-01

    The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management, in particular for the data management required by multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed which iteratively generates coarsened vector drainage networks from the original one. The method is based on the Horton-Strahler (H-S) ordering scheme. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas presented in this study. The method was applied to the drainage network of a watershed in the Huangfuchuan River basin extracted from a 1-m-resolution airborne LiDAR DEM, and to the full Yangtze River basin in China extracted from the 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, whose data were extracted from the 30-m-resolution ASTER GDEM. The results indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
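
    A toy version of the coarsening step described above (not the paper's implementation): Horton-Strahler orders are computed on a small reach tree, and all reaches of the lowest order are pruned to produce the next pyramid level; inheritance of topographical parameters is omitted.

      def strahler_order(children, reach):
          # Horton-Strahler order of a reach from its upstream children.
          if not children.get(reach):
              return 1
          orders = [strahler_order(children, c) for c in children[reach]]
          top = max(orders)
          return top + 1 if orders.count(top) >= 2 else top

      def prune_lowest_order(children):
          # Remove all first-order reaches, yielding the next pyramid level.
          return {reach: [c for c in subs if strahler_order(children, c) > 1]
                  for reach, subs in children.items()
                  if strahler_order(children, reach) > 1}

      # Toy network: 'outlet' is fed by 'a' and 'b'; 'a' is fed by two headwater reaches.
      network = {"outlet": ["a", "b"], "a": ["h1", "h2"], "b": [], "h1": [], "h2": []}
      print(strahler_order(network, "outlet"))   # 2
      print(prune_lowest_order(network))         # {'outlet': ['a'], 'a': []}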

  10. Evaluating DEM conditioning techniques, elevation source data, and grid resolution for field-scale hydrological parameter extraction

    NASA Astrophysics Data System (ADS)

    Woodrow, Kathryn; Lindsay, John B.; Berg, Aaron A.

    2016-09-01

    Although digital elevation models (DEMs) prove useful for a number of hydrological applications, they are often the end result of numerous processing steps, each of which contains uncertainty. These uncertainties have the potential to greatly influence DEM quality and to further propagate to DEM-derived attributes, including derived surface and near-surface drainage patterns. This research examines the impacts of DEM grid resolution, elevation source data, and conditioning techniques on the spatial and statistical distribution of field-scale hydrological attributes for a 12,000 ha watershed of an agricultural area within southwestern Ontario, Canada. Three conditioning techniques were examined: depression filling (DF), depression breaching (DB), and stream burning (SB). The catchments draining to each boundary of 7933 agricultural fields were delineated using the surface drainage patterns modeled from LiDAR data interpolated to 1 m, 5 m, and 10 m resolution DEMs, and from a 10 m resolution photogrammetric DEM. The results showed that variation in DEM grid resolution resulted in significant differences in the spatial and statistical distributions of contributing areas and the distributions of downslope flowpath length. Degrading the grid resolution of the LiDAR data from 1 m to 10 m resulted in a disagreement in mapped contributing areas of between 29.4% and 37.3% of the study area, depending on the DEM conditioning technique. The disagreements among the field-scale contributing areas mapped from the 10 m LiDAR DEM and the photogrammetric DEM were large, with nearly half of the study area draining to alternate field boundaries. Differences in derived contributing areas and flowpaths among the various conditioning techniques increased substantially at finer grid resolutions, with the largest disagreement among mapped contributing areas occurring between the 1 m resolution DB DEM and the SB DEM (37% disagreement) and the DB-DF comparison (36.5% disagreement in mapped
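
    Of the three conditioning techniques compared above, stream burning is the simplest to sketch: cells under a rasterised stream network are lowered by a fixed amount so that derived flow paths follow the mapped channels. The burn depth, DEM and stream mask below are illustrative only.

      import numpy as np

      dem = np.array([[10., 10., 10., 10.],
                      [10.,  9.,  9., 10.],
                      [10., 10.,  9., 10.],
                      [10., 10.,  9.,  9.]])
      stream_mask = np.array([[0, 0, 0, 0],
                              [0, 1, 1, 0],
                              [0, 0, 1, 0],
                              [0, 0, 1, 1]], dtype=bool)

      burn_depth = 2.0                     # assumed burn depth [m]
      burned = dem.copy()
      burned[stream_mask] -= burn_depth    # lower the channel cells
      print(burned)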

  11. The evaluation of unmanned aerial system-based photogrammetry and terrestrial laser scanning to generate DEMs of agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Ouédraogo, Mohamar Moussa; Degré, Aurore; Debouche, Charles; Lisein, Jonathan

    2014-06-01

    Agricultural watersheds tend to be places of intensive farming activity that permanently modifies their microtopography. The surface characteristics of the soil vary depending on the crops that are cultivated in these areas. Agricultural soil microtopography plays an important role in the quantification of runoff and sediment transport because the presence of crops, crop residues, furrows and ridges may affect the direction of water flow. To better assess such phenomena, 3-D reconstructions of high-resolution agricultural watershed topography are essential. Fine-resolution topographic data collection technologies can be used to discern highly detailed elevation variability in these areas. Knowledge of the strengths and weaknesses of the existing data collection technologies for agricultural watersheds is helpful in choosing an appropriate one. This study assesses the suitability of terrestrial laser scanning (TLS) and unmanned aerial system (UAS) photogrammetry for collecting the fine-resolution topographic data required to generate accurate, high-resolution digital elevation models (DEMs) in a small watershed area (12 ha). Because of farming activity, 14 TLS scans (≈ 25 points per m2) were collected without using the high-definition surveying (HDS) targets that are generally used to mesh adjacent scans. To evaluate the accuracy of the DEMs created from the TLS scan data, 1098 ground control points (GCPs) were surveyed using a real-time kinematic global positioning system (RTK-GPS). Linear regressions were then applied to remove vertical errors from the TLS point elevations: errors caused by the non-perpendicularity of the scanner's vertical axis to the local horizontal plane, and errors correlated with the distance to the scanner's position. The scans were then meshed to generate a DEM_TLS with a 1 × 1 m spatial resolution. The Agisoft PhotoScan and MicMac software packages were used to process the aerial photographs and generate a DEM_PSC

  12. Numerical slope stability simulations of chasma walls in Valles Marineris/Mars using a distinct element method (DEM).

    NASA Astrophysics Data System (ADS)

    Imre, B.

    2003-04-01

    The 8- to 10-km depths of Valles Marineris (VM) offer excellent views into the upper Martian crust. Layering, fracturing, lithology, stratigraphy and the content of volatiles have influenced the evolution of the Valles Marineris wall slopes, but these parameters also reflect the development of VM and its wall slopes. The scope of this work is to gain understanding of these parameters by back-simulating the development of the wall slopes. For that purpose, the two-dimensional Particle Flow Code PFC2D has been chosen (ITASCA, version 2.00-103). PFC2D is a distinct element code for numerical modelling of the movements and interactions of assemblies of arbitrarily sized circular particles. Particles may be bonded together to represent a solid material, and movements of particles are unlimited. That is of importance because results of open systems with numerous unknown variables are non-unique and therefore highly path dependent. This DEM allows the simulation of whole development paths of VM walls, which makes confirmation of the model more complete (e.g. Oreskes et al., Science 263, 1994). To reduce the number of unknown variables, a suitable (that is, as simple as possible) field site had to be selected: the northern wall of eastern Candor Chasma. This wall is up to 8 km high and represents a significant outcrop of the upper Martian crust; it is quite uncomplex, well-aligned and of simple morphology. Currently the work on the model is at the stage of performing the parameter study. Results will be presented as a poster at the EGS Meeting.

  13. A Comparison of Elevation Between InSAR DEM and Reference DEMs

    NASA Astrophysics Data System (ADS)

    Yun, Ye; Zeng, Qiming; Jiao, Jian; Yan, Dapeng; Liang, Cunren; Wang, Qing; Zhou, Xiao

    2013-01-01

    Introduction. (1) DEM generation: spaceborne SAR interferometry is one of the methods for the generation of digital elevation models (DEMs). (2) Common methods to generate DEMs: repeat-pass interferometry with the same antenna (e.g. ERS1/2); single-pass interferometry (e.g. SRTM); the geometry of stereo pairs (e.g. SPOT and ASTER); and the combination of air photographs, satellite images, topographic maps and field measurements (e.g. NGCC, the National Geomatics Center of China, which has completed the establishment of 1:50,000 topographic databases of China). (3) Purpose of this study: to compare DEMs derived from ERS1/2 tandem interferometry with those from the common methods, by comparing the tandem DEM with reference DEMs, namely the SRTM DEM, the ASTER GDEM and the NGCC DEM. Qualitative and quantitative assessments of the elevations were used to estimate the differences.

  14. An efficient and comprehensive method for drainage network extraction from DEM with billions of pixels using a size-balanced binary search tree

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Li, Tiejian; Huang, Yuefei; Li, Jiaye; Wang, Guangqian

    2015-06-01

    With the increasing resolution of digital elevation models (DEMs), computational efficiency problems have been encountered when extracting the drainage network of a large river basin at billion-pixel scales. The efficiency of the most time-consuming step, the depression-filling pretreatment, has been improved by using the O(N log N)-complexity least-cost path search method, but the complete extraction steps following this method have not previously been proposed and tested. In this paper, an improved O(N log N) algorithm is proposed that introduces a size-balanced binary search tree (BST) to further improve the efficiency of the depression-filling pretreatment. The subsequent extraction steps, including flow direction determination and upslope area accumulation, were also redesigned to benefit from this improvement, yielding an efficient and comprehensive method. The method was tested by extracting the drainage networks of 31 river basins with areas greater than 500,000 km2 from the 30-m-resolution ASTER GDEM and of two sub-basins with areas of approximately 1000 km2 from a 1-m-resolution airborne LiDAR DEM. Complete drainage networks with both vector features and topographic parameters were obtained with time consumption of O(N log N) complexity. The results indicate that the developed method can extract entire drainage networks from DEMs with billions of pixels with high efficiency.
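
    The depression-filling pretreatment described above can be illustrated with a priority-flood sketch: border cells seed a priority queue, and interior cells are raised to the spill level of the cheapest path that reaches them. The sketch below uses Python's binary-heap `heapq` in place of the size-balanced BST proposed in the paper, so it shows the O(N log N) idea rather than the authors' exact data structure; the grid is a toy NumPy array.

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling: grow inward from the border,
    raising each newly visited cell to at least the level of the cell it
    was reached from. O(N log N) with a binary heap; the paper uses a
    size-balanced binary search tree in this role."""
    nrows, ncols = dem.shape
    filled = dem.astype(float).copy()
    visited = np.zeros(dem.shape, dtype=bool)
    heap = []
    # Seed the queue with all border cells.
    for r in range(nrows):
        for c in range(ncols):
            if r in (0, nrows - 1) or c in (0, ncols - 1):
                heapq.heappush(heap, (filled[r, c], r, c))
                visited[r, c] = True
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in neighbours:
            rr, cc = r + dr, c + dc
            if 0 <= rr < nrows and 0 <= cc < ncols and not visited[rr, cc]:
                visited[rr, cc] = True
                filled[rr, cc] = max(filled[rr, cc], level)  # raise pits to the spill level
                heapq.heappush(heap, (filled[rr, cc], rr, cc))
    return filled

dem = np.array([[5., 5., 5., 5.],
                [5., 1., 2., 5.],
                [5., 1., 1., 5.],
                [5., 5., 4., 5.]])
print(fill_depressions(dem))  # the interior pit is raised to the 4.0 spill point
```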

  15. The role of method of production and resolution of the DEM on slope-units delineation for landslide susceptibility assessment - Ubaye Valley, French Alps case study

    NASA Astrophysics Data System (ADS)

    Schlögel, Romy; Marchesini, Ivan; Alvioli, Massimiliano; Reichenbach, Paola; Rossi, Mauro; Malet, Jean-Philippe

    2016-04-01

    Landslide susceptibility assessment forms the basis of hazard mapping, which is one of the essential parts of quantitative risk mapping. For the same study area, different susceptibility maps can be obtained depending on the susceptibility mapping method, the mapping unit, and the scale. In the Ubaye Valley (South French Alps), we investigate the effect of the resolution and the method of production of the DEM on the delineation of slope units for landslide susceptibility mapping. Slope-unit delineation was carried out using multiple combinations of circular variance and minimum area size values, which are the input parameters of new terrain-partitioning software. The method takes into account the homogeneity of aspect direction inside each unit and the inhomogeneity between different units. We computed slope-unit delineations for 5, 10 and 25 m resolution DEMs and investigated the statistical distributions of morphometric variables within the resulting polygons. Then, for each slope-unit partitioning, we calibrated a landslide susceptibility model, considering landslide bodies and scarps as the dependent variable (binary response). This work aims to analyse the role of DEM resolution on slope-unit delineation for landslide susceptibility assessment. The Area Under the Curve of the Receiver Operating Characteristic is used to evaluate the susceptibility model calculations. In addition, we further analysed the performance of the logistic regression model by looking at the percentage of significant variables in the statistical analyses. Results show that smaller slope units have a better chance of containing a smaller number of thematic and morphometric variables, allowing for an easier classification. The reliability of the models according to the DEM resolution considered, as well as to the use of scarp areas or landslide bodies presence/absence as the dependent variable, is discussed.
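
    The calibration and validation step described above can be sketched as follows: each slope unit carries a vector of morphometric or thematic predictors and a binary presence/absence response, a logistic regression model is fitted, and the ROC AUC is computed on held-out units. This is a generic scikit-learn illustration with synthetic data, not the study's own variable set or software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def calibrate_susceptibility(X, y):
    """Fit a logistic regression susceptibility model on slope-unit
    predictors X (n_units x n_variables) and a binary landslide
    presence/absence response y, and report the ROC AUC."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]   # susceptibility scores
    return model, roc_auc_score(y_test, probs)

# Hypothetical example: 500 slope units, 4 morphometric variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
model, auc = calibrate_susceptibility(X, y)
print(f"AUC = {auc:.2f}")
```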

  16. On the investigation of the performances of a DEM-based hydrogeomorphic floodplain identification method in a large urbanized river basin: the Tiber river case study in Italy

    NASA Astrophysics Data System (ADS)

    Nardi, Fernando; Biscarini, Chiara; Di Francesco, Silvia; Manciola, Piergiorgio

    2013-04-01

    The floodplain is consequently identified as those river buffers, draining towards the channel, whose elevation is less than the maximum flow depth of the corresponding outlet. Keeping in mind that the performance of this hydrogeomorphic model is strictly related to the quality and properties of the input DEM, and that the intent of this kind of methodology is not to substitute standard flood modeling and mapping methods, in this work the performance of the approach is qualitatively evaluated by comparing its results with standard flood maps. The Tiber river basin, one of the main river basins in Italy covering a drainage area of approximately 17,000 km2, was selected as the case study. This comparison is interesting for understanding the performance of the model in a large and complex domain where the impact of the urbanization matrix is significant. The results of this investigation confirm the potential of such DEM-based floodplain mapping models for providing a fast, homogeneous and continuous inundation scenario to urban planners and decision makers, but also the drawbacks of using such a methodology where humans are significantly and rapidly modifying the surface properties.
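
    The buffer-threshold rule at the heart of this hydrogeomorphic approach can be written in a few lines: a cell is flagged as floodplain when its height above the channel cell it drains to is below the maximum flow depth estimated at the corresponding outlet. The sketch below is a schematic NumPy illustration with made-up arrays; how `channel_elev` and `max_flow_depth` are actually derived (flow directions, scaling with contributing area) is specific to the original method.

```python
import numpy as np

def hydrogeomorphic_floodplain(dem, channel_elev, max_flow_depth):
    """Sketch of a DEM-based floodplain delineation.

    dem            -- elevation of each cell
    channel_elev   -- elevation of the channel cell each cell drains to
    max_flow_depth -- estimated maximum flow depth at that channel cell

    A cell is flagged as floodplain if its height above the draining
    channel is below the maximum flow depth of that channel cell."""
    height_above_channel = dem - channel_elev
    return height_above_channel <= max_flow_depth

# Hypothetical 3x3 example where every cell drains to a single outlet.
dem = np.array([[12.0, 11.0, 10.0],
                [11.0, 10.5, 10.0],
                [10.5, 10.0,  9.5]])
channel_elev = np.full_like(dem, 9.5)     # elevation of the outlet cell
max_flow_depth = np.full_like(dem, 1.2)   # depth estimated at that outlet
print(hydrogeomorphic_floodplain(dem, channel_elev, max_flow_depth))
```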

  17. Shading-based DEM refinement under a comprehensive imaging model

    NASA Astrophysics Data System (ADS)

    Peng, Jianwei; Zhang, Yi; Shan, Jie

    2015-12-01

    This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effects of atmosphere, reflectance, and shading. To solve this intrinsically ill-posed problem, the least squares method and a subsequent optimization procedure are applied to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with higher fidelity and, at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than conventional interpolation methods. Furthermore, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.
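
    The final step of such a pipeline, turning the recovered terrain gradients into an elevation surface, can be illustrated with the classic FFT-based Frankot-Chellappa projection. This is a generic least-squares gradient integrator, not necessarily the integration scheme used by the authors; the synthetic round trip below only demonstrates the mechanics.

```python
import numpy as np

def integrate_gradients(p, q):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a surface
    with the Frankot-Chellappa FFT projection, one way to perform the
    gradient-integration step of an SfS refinement pipeline."""
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2.0 * np.pi        # angular frequency along x
    wy = np.fft.fftfreq(rows) * 2.0 * np.pi        # angular frequency along y
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                              # avoid division by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                                  # mean height is unconstrained
    return np.real(np.fft.ifft2(Z))

# Round trip on a synthetic surface: differentiate, then re-integrate.
y, x = np.mgrid[0:128, 0:128]
z = np.sin(x / 10.0) + np.cos(y / 15.0)
p = np.gradient(z, axis=1)                         # dz/dx
q = np.gradient(z, axis=0)                         # dz/dy
z_rec = integrate_gradients(p, q)
print(np.abs((z_rec - z_rec.mean()) - (z - z.mean())).max())
```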

  18. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been widely applied in groundwater potential mapping research, but in data-scarce areas it encounters many problems because of the limited data. The Digital Elevation Model (DEM) is a digital representation of the topography and has many applications in various fields. Previous studies have shown that much of the information relevant to groundwater potential mapping (such as geological, terrain and hydrological features) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most easily accessed datasets in GIS, was used to extract information for groundwater potential mapping in the Batter River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were selected based on previous studies: lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI). Methods for extracting the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out in ArcGIS to delineate the groundwater potential map. The final groundwater potential map was divided into five categories, viz., non-potential, poor, moderate, good, and excellent zones. Finally, the success rate curve was drawn and the area under the curve (AUC) was calculated for validation. The validation result showed that the success rate of the model was 79%, confirming the method's feasibility. The method offers a new approach for groundwater management research in areas that suffer from data scarcity and also broadens the range of applications of DEM data.
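
    The weighted overlay at the core of an MCE workflow can be sketched as a weighted linear sum of factor rasters followed by reclassification into the five potential classes. The weights and class breaks below are illustrative placeholders, not the values derived from the study's cumulative effects matrix.

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine normalised factor rasters (same shape, 0-1 scores) into a
    groundwater potential index with a weighted linear sum."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise the weights
    stack = np.stack(layers, axis=0)
    return np.tensordot(weights, stack, axes=1)

def classify_potential(index):
    """Split the index into five classes: 0 = non-potential ... 4 = excellent."""
    breaks = [0.2, 0.4, 0.6, 0.8]                # illustrative class breaks
    return np.digitize(index, breaks)

# Hypothetical factors: lineament density, drainage density, TWI, relief, CI.
rng = np.random.default_rng(1)
factors = [rng.random((100, 100)) for _ in range(5)]
weights = [0.30, 0.25, 0.20, 0.15, 0.10]         # e.g. from a cumulative effects matrix
potential = classify_potential(weighted_overlay(factors, weights))
print(np.bincount(potential.ravel(), minlength=5))
```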

  19. Convolutional Neural Network Based DEM Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples, and a nonlocal algorithm was introduced to perform the task; many experiments showed that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, because this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain, yet this may cause problems of incompatibility and a lack of robustness. To overcome them, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, a set of training DEMs is used to train the network; specifically, the network is optimized by minimizing the error between its output and the expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super-resolution result is obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
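
    A minimal PyTorch sketch of the three-layer structure described above (feature detection, feature compression, reconstruction) is shown below, trained by minimising the error between the network output and the expected high resolution DEM. The kernel sizes, channel counts and the use of PyTorch are assumptions made for illustration, not details prescribed by the paper.

```python
import torch
import torch.nn as nn

class DemSRNet(nn.Module):
    """Three-layer CNN for DEM super resolution: feature extraction,
    feature compression, and reconstruction (SRCNN-style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # detect features
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),            # compress features
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruct the DEM
        )

    def forward(self, x):
        return self.net(x)

# Training sketch: low-res DEM tiles are first upsampled to the target grid,
# and the network is optimised against the matching high-res reference tiles.
model = DemSRNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

low_res_batch = torch.randn(8, 1, 64, 64)    # placeholder training tiles
high_res_batch = torch.randn(8, 1, 64, 64)   # matching reference tiles
for _ in range(10):                          # a few illustrative iterations
    optimiser.zero_grad()
    loss = loss_fn(model(low_res_batch), high_res_batch)
    loss.backward()
    optimiser.step()
```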

  20. Statistic Tests Aided Multi-Source DEM Fusion

    NASA Astrophysics Data System (ADS)

    Fu, C. Y.; Tsay, J. R.

    2016-06-01

    Since the land surface changes continually, whether naturally or through human activity, DEMs have to be updated regularly so that applications can use the latest DEM. However, the cost of wide-area DEM production is high. DEMs that cover the same area but differ in quality, grid size, generation time or production method are called multi-source DEMs, and fusing them provides a solution for low-cost DEM updating. The coverage of the DEM has to be classified according to slope and visibility in advance, because the precisions of DEM grid points differ between areas with different slopes and visibilities. Next, a difference DEM (dDEM) is computed by subtracting the two DEMs. It is assumed that the dDEM, which should contain only random error, follows a normal distribution; therefore, a Student's t-test is implemented for blunder detection, and three kinds of rejected grid points are generated. The first kind are blunders and have to be eliminated. The second kind are points in change areas, where the latest data are taken as the fusion result. Finally, the grid points rejected as type I errors are actually correct data and have to be retained for fusion. The experimental result shows that using DEMs with terrain classification leads to better blunder detection. A proper setting of the significance level (α) can detect real blunders without creating too many type I errors. Weighted averaging is chosen as the DEM fusion algorithm, with the a priori precisions estimated from our national DEM production guideline used to define the weights. Fisher's test is implemented to verify that the a priori precisions correspond to the RMSEs of the blunder detection result.
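
    The two core steps, blunder detection on the difference DEM and precision-weighted fusion, can be sketched as below. The significance level, the a priori precisions and the collapsing of the three kinds of rejected points into a single mask are simplifications for illustration, not the paper's full procedure.

```python
import numpy as np
from scipy import stats

def detect_blunders(ddem, alpha=0.05):
    """Flag grid points of the difference DEM (dDEM = DEM_new - DEM_old)
    whose standardised residual exceeds the two-sided critical value of a
    Student t distribution (blunders or change areas)."""
    resid = ddem - np.nanmean(ddem)
    t = resid / np.nanstd(ddem, ddof=1)
    dof = np.count_nonzero(~np.isnan(ddem)) - 1
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=dof)
    return np.abs(t) > t_crit                 # True where a point is rejected

def fuse(dem_a, dem_b, sigma_a, sigma_b):
    """Weighted-average fusion using a priori precisions as weights."""
    w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    return (w_a * dem_a + w_b * dem_b) / (w_a + w_b)

# Hypothetical example with two co-registered DEMs of the same area.
rng = np.random.default_rng(2)
dem_old = rng.normal(500.0, 50.0, size=(200, 200))
dem_new = dem_old + rng.normal(0.0, 1.5, size=dem_old.shape)
rejected = detect_blunders(dem_new - dem_old, alpha=0.01)
fused = np.where(rejected, dem_new,            # rejected cells keep the latest data here
                 fuse(dem_old, dem_new, sigma_a=2.0, sigma_b=1.0))
```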

  1. Morphometric evaluation of the Afşin-Elbistan lignite basin using kernel density estimation and Getis-Ord's statistics of DEM derived indices, SE Turkey

    NASA Astrophysics Data System (ADS)

    Sarp, Gulcan; Duzgun, Sebnem

    2015-11-01

    A morphometric analysis of the river network, basins and relief using geomorphic indices and geostatistical analyses of a Digital Elevation Model (DEM) is a useful tool for discussing the morphometric evolution of a basin area. In this study, three different indices, the valley floor width to height ratio (Vf), the stream gradient (SL), and stream sinuosity, were applied to the Afşin-Elbistan lignite basin to test for the imprints of tectonic activity. Perturbations of these indices are usually indicative of differences in the resistance of outcropping lithological units to erosion and of active faulting. To map clusters of high and low index values, kernel density estimation (K) and the Getis-Ord Gi∗ statistic were applied to the DEM-derived indices. The K method and the Gi∗ statistic, by highlighting hot spots and cold spots of the SL index, the stream sinuosity and the Vf index values, helped to identify the relative tectonic activity of the basin area. The results indicated that the estimations by K and Gi∗, including three conceptualizations of spatial relationships (CSR) for hot spots (percent volume contours of 50 and 95 categorized as high and low, respectively), yielded very similar results in regions of high and of low tectonic activity. According to the K and Getis-Ord Gi∗ statistics, the northern, northwestern and southern parts of the basin indicate high tectonic activity, whereas the low-elevation plain in the central part of the basin area shows relatively low tectonic activity.

  2. Application of Positron Emission Particle Tracking (PEPT) to validate a Discrete Element Method (DEM) model of granular flow and mixing in the Turbula mixer.

    PubMed

    Marigo, M; Davies, M; Leadbeater, T; Cairns, D L; Ingram, A; Stitt, E H

    2013-03-25

    The laboratory-scale Turbula mixer comprises a simple cylindrical vessel that moves with a complex yet periodic 3D motion comprising rotation, translation and inversion. Owing to this complexity, relatively few studies aiming at a fundamental understanding of particle motion and mixing mechanisms have been reported. Particle motion within the cylindrical vessel of a Turbula mixer has been measured for 2 mm glass spheres using Positron Emission Particle Tracking (PEPT) in a 2 l mixing vessel at a 50% fill level. These data are compared with results from Discrete Element Method (DEM) simulations previously published by the authors. PEPT mixing experiments, using a single particle tracer, gave qualitatively similar trends to the DEM predictions for axial and radial dispersion as well as for the axial displacement statistics at different operational speeds. Both experimental and simulation results indicate a minimum mixing efficiency at ca. 46 rpm. The occupancy plots also show a non-linear relationship with the operating speed. These results add further evidence for a transition between two flow and mixing regimes. Despite the similarity in the overall flow and mixing behaviour measured and predicted, including the mixing speed at which the flow behaviour transition occurs, a systematic offset between measured and predicted results is observed. PMID:23376506

  3. Quality Test Various Existing DEM in Indonesia Toward 10 Meter National DEM

    NASA Astrophysics Data System (ADS)

    Amhar, Fahmi

    2016-06-01

    Indonesia has various DEMs from many sources, with acquisition dates spread over the past two decades. There are DEMs from spaceborne systems (Radarsat, TerraSAR-X, ALOS, ASTER-GDEM, SRTM), airborne systems (IFSAR, Lidar, aerial photos) and also terrestrial ones. The research objective is to test the quality of these DEMs and to determine how to extract the best DEM for a particular area. The method uses differential GPS levelling with geodetic GPS equipment at locations that have verifiably not changed during the past 20 years. The result shows that the DEMs from TerraSAR-X and SRTM30 have the best quality (RMSE of 3.1 m and 3.5 m, respectively). Based on this research, it was inferred that the results remain consistent with the basic expectation that the coarser the spatial resolution of a DEM, the less precise the resulting vertical heights.
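
    The accuracy test itself reduces to sampling each DEM at the GPS-levelled check points and computing the vertical RMSE. A small sketch with bilinear sampling on a synthetic grid is given below; the array layout, cell indexing and synthetic numbers are assumptions, not the study's data.

```python
import numpy as np

def bilinear_sample(dem, rows, cols):
    """Bilinearly interpolate DEM values at fractional (row, col) positions."""
    r0, c0 = np.floor(rows).astype(int), np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, dem.shape[0] - 1)
    c1 = np.minimum(c0 + 1, dem.shape[1] - 1)
    fr, fc = rows - r0, cols - c0
    top = dem[r0, c0] * (1 - fc) + dem[r0, c1] * fc
    bot = dem[r1, c0] * (1 - fc) + dem[r1, c1] * fc
    return top * (1 - fr) + bot * fr

def vertical_rmse(dem, check_rows, check_cols, check_heights):
    """RMSE of DEM heights against GPS-levelled check points."""
    diff = bilinear_sample(dem, check_rows, check_cols) - check_heights
    return float(np.sqrt(np.mean(diff**2)))

# Hypothetical example: a smooth synthetic DEM and 100 noisy check points.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:100, 0:100]
dem = 200.0 + 0.1 * x + 0.05 * y
rows, cols = rng.uniform(0, 99, 100), rng.uniform(0, 99, 100)
heights = 200.0 + 0.1 * cols + 0.05 * rows + rng.normal(0, 3.0, 100)
print(f"RMSE = {vertical_rmse(dem, rows, cols, heights):.2f} m")
```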

  4. Nonlocal similarity based DEM super resolution

    NASA Astrophysics Data System (ADS)

    Xu, Zekai; Wang, Xuewen; Chen, Zixuan; Xiong, Dongping; Ding, Mingyue; Hou, Wenguang

    2015-12-01

    This paper discusses a new topic, DEM super resolution, which improves the resolution of an original DEM based on partial new measurements obtained at high resolution. A nonlocal algorithm is introduced to perform this task. The original DEM was first divided into overlapping patches, which were classified as either "test" or "learning" data depending on whether or not they are related to high resolution measurements. For each test patch, the similar patches in the learning dataset were identified via template matching. Finally, the high resolution DEM of the test patch was restored by the weighted sum of similar patches, under the condition that the reconstruction weights are the same in the different resolution cases. A key assumption of this strategy is that there are some repeated or similar modes in the original DEM, which is quite common. Experiments demonstrate that a DEM can be restored in this way, preserving details without introducing artifacts. Statistical analysis also shows that this method can obtain higher accuracy than traditional interpolation methods.

  5. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.

  6. Skeletonizing a DEM into a drainage network

    NASA Astrophysics Data System (ADS)

    Meisels, Amnon; Raizman, Sonia; Karnieli, Arnon

    1995-02-01

    A new method for extracting drainage systems from Digital Elevation Models (DEMs) is presented. The main algorithm of the proposed method performs a skeletonization of the set of elevations in the DEM and produces a skeleton of flow paths. An enumeration algorithm removes loops from the initial flow paths. A preprocess for filling depressions is described, as is the necessary postprocessing for determining the drainage network through depressions. The new method does not suffer from any of the maladies of former methods described in the literature, such as flow cutoffs, loops of flow, and basin flooding. The new method was tested on several real-world DEMs and produced connected, complete, and loopless networks.

  7. Evaluation of the influence of metabolic processes and body composition on cognitive functions: Nutrition and Dementia Project (NutrDem Project).

    PubMed

    Magierski, R; Kłoszewska, I; Sobow, T

    2014-11-01

    The global increase in the prevalence of dementia and its associated comorbidities and consequences has stimulated intensive research focused on better understanding of the basic mechanisms and the possibilities to prevent and/or treat cognitive decline or dementia. The etiology of cognitive decline and dementia is very complex and is based upon the interplay of genetic and environmental factors. A growing body of epidemiological evidence has suggested that metabolic syndrome and its components may be important in the development of cognitive decline. Furthermore, an abnormal body mass index in middle age has been considered as a predictor for the development of dementia. The Nutrition and Dementia Project (NutrDem Project) was started at the Department of Old Age Psychiatry and Psychotic Disorders in close cooperation with the Department of Medical Psychology. The aim of this study is to determine the effect of dietary patterns, nutritional status, body composition (with evaluation of visceral fat) and basic regulatory mechanisms of metabolism in elderly patients on cognitive functions and the risk of cognitive impairment (mild cognitive impairment and/or dementia). PMID:25139556

  9. Evaluation Methods Sourcebook.

    ERIC Educational Resources Information Center

    Love, Arnold J., Ed.

    The chapters commissioned for this book describe key aspects of evaluation methodology as they are practiced in a Canadian context, providing representative illustrations of recent developments in evaluation methodology as it is currently applied. The following chapters are included: (1) "Program Evaluation with Limited Fiscal and Human Resources"…

  10. Snow Cover Mapping in the Northern Area of Pakistan and Jammu Kashmir (Hindu Kush Himalayas) Using NDSI, Unmixing Method and SRTM DEM Data

    NASA Astrophysics Data System (ADS)

    Kim, H.; Din, A. U.; Oki, K.; Takeuchi, W.; Oki, T.

    2015-12-01

    Snow area measurement is very important for hydrologists, glaciologists and climate change researchers. Field measurement is very difficult in steep and complex terrain such as the Himalayas, so we rely on remote sensing (both active and passive) data. Usually snow area is calculated from reflectance data using a snow index, e.g. the Normalized Difference Snow Index (NDSI), and then translated into snow area. However, in most cases what is actually calculated is the planimetric (grid) area of every pixel. The actual snow lies along the surface of the terrain, and a proper estimate can only be obtained if the actual surface area is calculated along the slope within each pixel. In the past, some researchers have introduced methodologies and optimized older mechanisms, but the orographic impact on the calculation of snow area (fraction), especially in steep mountainous regions, still poses many problems and is often ignored, which leads to underestimation of the total snow amount. In this study we calculated the actual surface area from SRTM version 4.1 90 m (at the equator) processed DEM data provided by CGIAR-CSI. MODIS reflectance (MOD09A1 L3 product) composite data at 500 m resolution for 2010 and 2011 over the northern areas of Pakistan and the Jammu & Kashmir region, where the great Himalayas stretch, were used to calculate snow cover with the NDSI index. A threshold of NDSI > 0.4 was set to classify snow or no snow for the clear pixels, and for further classification an unmixing method (subjective pixel method only) was used to calculate the snow fraction within each pixel. Results show that in complex terrain such as the Himalayas, the ratio of surface to planimetric snow area is more than 50%. This means that it should be taken into consideration for more realistic snow amount estimation. The seasonal snow fraction histogram from the unmixing method indicates that NDSI measures the snow cover area by 1.86 times more in the cold season (maximum snow area) and 1
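
    Two of the computations described above, the NDSI threshold and the conversion of planimetric pixel area to surface area along the slope, can be sketched as follows. The cosine-of-slope correction and the band combination are the standard textbook forms and the arrays are synthetic; this is not claimed to reproduce the authors' exact processing chain.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectance."""
    return (green - swir) / (green + swir + 1e-12)

def surface_area(dem, cellsize):
    """Approximate per-pixel surface area: planimetric cell area divided by
    the cosine of the local slope derived from the DEM."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    return (cellsize ** 2) / np.cos(slope)

# Hypothetical scene: snow where NDSI > 0.4, summed along the terrain surface.
rng = np.random.default_rng(4)
green = rng.uniform(0.2, 0.9, (50, 50))
swir = rng.uniform(0.05, 0.5, (50, 50))
dem = np.cumsum(rng.normal(0, 5.0, (50, 50)), axis=1) + 3000.0
snow = ndsi(green, swir) > 0.4
planimetric = snow.sum() * 500.0 ** 2                  # 500 m pixels
along_surface = surface_area(dem, 500.0)[snow].sum()
print(planimetric, along_surface)
```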

  11. TanDEM-X calibrated Raw DEM generation

    NASA Astrophysics Data System (ADS)

    Rossi, Cristian; Rodriguez Gonzalez, Fernando; Fritz, Thomas; Yague-Martinez, Nestor; Eineder, Michael

    2012-09-01

    The TanDEM-X mission successfully started on June 21st 2010 with the launch of the German radar satellite TDX, placed in orbit in close formation with the TerraSAR-X (TSX) satellite, and establishing the first spaceborne bistatic interferometer. The processing of SAR raw data to the Raw DEM is performed by one single processor, the Integrated TanDEM-X Processor (ITP). The quality of the Raw DEM is a fundamental parameter for the mission planning. In this paper, a novel quality indicator is derived. It is based on the comparison of the interferometric measure, the unwrapped phase, and the stereo-radargrammetric measure, the geometrical shifts computed in the coregistration stage. By stating the accuracy of the unwrapped phase, it constitutes a useful parameter for the determination of problematic scenes, which will be resubmitted to the dual baseline phase unwrapping processing chain for the mitigation of phase unwrapping errors. The stereo-radargrammetric measure is also operationally used for the Raw DEM absolute calibration through an accurate estimation of the absolute phase offset. This paper examines the interferometric algorithms implemented for the operational TanDEM-X Raw DEM generation, focusing particularly on its quality assessment and its calibration.

  12. Satellite-derived Digital Elevation Model (DEM) selection, preparation and correction for hydrodynamic modelling in large, low-gradient and data-sparse catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Abdollah A.; Callow, John N.; McVicar, Tim R.; Van Niel, Thomas G.; Larsen, Joshua R.

    2015-05-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Topographic accuracy, methods of preparation and grid size are all important for hydrodynamic models to efficiently replicate flow processes. In remote and data-scarce regions, high resolution DEMs are often not available and it is therefore necessary to evaluate lower resolution data such as the Shuttle Radar Topography Mission (SRTM) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) products for use within hydrodynamic models. This paper does this in three ways: (i) assessing the point accuracy and geometric co-registration error of the original DEMs; (ii) quantifying the effects of DEM preparation methods (vegetation smoothing and hydrological correction) on hydrodynamic modelling relative accuracy; and (iii) quantifying the effect of the hydrodynamic model grid size (30-2000 m) and the associated relative computational costs (run time) on relative accuracy in model outputs. We initially evaluated the accuracy of the original SRTM (∼30 m) seamless C-band DEM (SRTM DEM) and second generation products from ASTER (ASTER GDEM) against registered survey marks and altimetry data points from the Ice, Cloud, and land Elevation Satellite (ICESat). The SRTM DEM (RMSE = 3.25 m) had higher accuracy than the ASTER GDEM (RMSE = 7.43 m). Based on these results, the original SRTM DEM and ASTER GDEM, along with vegetation-smoothed and hydrologically corrected versions, were prepared and used to simulate three flood events along a 200 km stretch of the low-gradient Thompson River in arid Australia (using five metrics: peak discharge, peak height, travel time, terminal water storage and flood extent). The hydrologically corrected DEMs performed best across these metrics in simulating floods, compared with the vegetation-smoothed and original DEMs. The response of model performance to grid size was non

  13. Hydrologic enforcement of lidar DEMs

    USGS Publications Warehouse

    Poppenga, Sandra K.; Worstell, Bruce B.; Danielson, Jeffrey J.; Brock, John C.; Evans, Gayla A.; Heidemann, H. Karl

    2014-01-01

    Hydrologic-enforcement (hydro-enforcement) of light detection and ranging (lidar)-derived digital elevation models (DEMs) modifies the elevations of artificial impediments (such as road fills or railroad grades) to simulate how man-made drainage structures such as culverts or bridges allow continuous downslope flow. Lidar-derived DEMs contain an extremely high level of topographic detail; thus, hydro-enforced lidar-derived DEMs are essential to the U.S. Geological Survey (USGS) for complex modeling of riverine flow. The USGS Coastal and Marine Geology Program (CMGP) is integrating hydro-enforced lidar-derived DEMs (land elevation) and lidar-derived bathymetry (water depth) to enhance storm surge modeling in vulnerable coastal zones.

  14. Separability of soils in a tallgrass prairie using SPOT and DEM data

    NASA Technical Reports Server (NTRS)

    Su, Haiping; Ransom, Michel D.; Yang, Shie-Shien; Kanemasu, Edward T.

    1990-01-01

    An investigation is conducted which uses a canonical transformation technique to reduce the features from SPOT and DEM data and evaluates the statistical separability of several prairie soils from the canonically transformed variables. Both SPOT and DEM data was gathered for a tallgrass prairie near Manhattan, Kansas, and high resolution SPOT satellite images were integrated with DEM data. Two canonical variables derived from training samples were selected and it is suggested that canonically transformed data were superior to combined SPOT and DEM data. High resolution SPOT images and DEM data can be used to aid second-order soil surveys in grasslands.

  15. EMDataBank unified data resource for 3DEM

    PubMed Central

    Lawson, Catherine L.; Patwardhan, Ardan; Baker, Matthew L.; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P.; Lagerstedt, Ingvar; Ludtke, Steven J.; Pintilie, Grigore; Sala, Raul; Westbrook, John D.; Berman, Helen M.; Kleywegt, Gerard J.; Chiu, Wah

    2016-01-01

    Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. PMID:26578576

  16. The Double Hierarchy Method. A parallel 3D contact method for the interaction of spherical particles with rigid FE boundaries using the DEM

    NASA Astrophysics Data System (ADS)

    Santasusana, Miquel; Irazábal, Joaquín; Oñate, Eugenio; Carbonell, Josep Maria

    2016-07-01

    In this work, we present a new methodology for the treatment of the contact interaction between rigid boundaries and spherical discrete elements (DE). Rigid body parts are present in most large-scale simulations, and their surfaces are commonly meshed with a finite element-like (FE) discretization. The contact detection and calculation between those DEs and the discretized boundaries is not straightforward and has been addressed by different approaches. The algorithm presented in this paper considers the contact of the DEs with the geometric primitives of an FE mesh, i.e. facet, edge or vertex. To do so, the original hierarchical method presented by Horner et al. (J Eng Mech 127(10):1027-1032, 2001) is extended with new insight, leading to a robust, fast and accurate 3D contact algorithm which is fully parallelizable. The implementation of the method has been developed to deal ideally with triangles and quadrilaterals; if the boundaries are discretized with other types of geometries, the method can easily be extended to higher order planar convex polyhedra. A detailed description of the procedure followed to treat a wide range of cases is presented. The developed algorithm is described and validated with several practical examples, and the parallelization capabilities and the obtained performance are presented with the study of an industrial application example.
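
    The basic geometric test behind contact with a single FE facet can be sketched as a closest-point-on-triangle query followed by a distance check against the particle radius; deciding whether that closest point lies on the facet interior, an edge or a vertex is what the double hierarchy in the paper organises across neighbouring primitives. The routine below is the standard barycentric region test, not the authors' implementation.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c) via the standard
    barycentric region test (facet interior, edge or vertex)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                            # vertex region A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                            # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + ab * (d1 / (d1 - d3))                    # edge AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                            # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + ac * (d2 / (d2 - d6))                    # edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)        # facet interior

def sphere_facet_contact(centre, radius, a, b, c):
    """Return (in_contact, penetration, contact_point) for a spherical DE
    against a triangular facet of the FE boundary mesh."""
    centre = np.asarray(centre, dtype=float)
    q = closest_point_on_triangle(centre, np.asarray(a, float),
                                  np.asarray(b, float), np.asarray(c, float))
    dist = np.linalg.norm(centre - q)
    return dist <= radius, radius - dist, q
```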

  17. Methodologies for watershed modeling with GIS and DEMs for the parameterization of the WEPP model

    NASA Astrophysics Data System (ADS)

    Cochrane, Thomas Arey

    Two methods called the Hillslope and Flowpath methods were developed that use geographical information systems (GIS) and digital elevation models (DEMs) to assess water erosion in small watersheds with the Water Erosion Prediction Project (WEPP) model. The Hillslope method is an automated method for the application of WEPP through the extraction of hillslopes and channels from DEMs. Each hillslope is represented as a rectangular area with a representative slope profile that drains to the top or sides of a single channel. The Hillslope method was further divided into the Calcleng and Chanleng methods, which are similar in every way except on how the hillslope lengths are calculated. The Calcleng method calculates a representative length of hillslope based on the weighted lengths of all flowpaths in a hillslope as identified through a DEM. The Chanleng method calculates the length of hillslopes adjacent to channels by matching the width of the hillslope to the length of adjacent channel. The Flowpath method works by applying the WEPP model to all possible flowpaths within a watershed as identified from a DEM. However, this method does not currently have a channel routing component, which limits its use to predicting spatially variable erosion on hillslopes within the watershed or from watersheds whose channels are not in a depositional or erodible mode. These methods were evaluated with six research watersheds from across the U.S., one from Treynor, Iowa, two from Watkinsville, Georgia, and three from Holly Springs, Mississippi. The effects of using different-sized DEM resolutions on simulations and the ability to accurately predict sediment yield and runoff from different event sizes were studied. Statistical analyses for all methods, resolutions, and event sizes were performed by comparing predicted vs. measured runoff and sediment yield from the watershed outlets on an event by event basis. Comparisons to manual applications by expert users and comparisons of

  18. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as, for example, overland flow modelling or viewshed analysis. DEMs generated from digitising contour lines or obtained by LiDAR or satellite data are now widely available. However, in some cases the area of study is covered by more than one of the available elevation data sets, and the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability in merging DEMs and, based on the evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements when compared to DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, also ensuring smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g. visibility analysis or spotting irregularities in landforms, and to modelling terrain phenomena such as overland flow.
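
    One simple way to obtain the smooth transitions that the paper aims for is to blend the two DEMs in their overlap with weights that ramp with distance from each footprint's edge. The sketch below illustrates that blending idea only; it is not the MBlend algorithm, and it assumes both inputs are co-registered on the same grid with NaN outside their footprints.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_overlap(dem_a, dem_b):
    """Merge two co-registered DEMs on the same grid, each with NaN outside
    its footprint. In the overlap, weights ramp with the distance to each
    DEM's valid-data edge, giving smooth elevation transitions."""
    valid_a, valid_b = ~np.isnan(dem_a), ~np.isnan(dem_b)
    dist_a = distance_transform_edt(valid_a)   # cells to the edge of footprint A
    dist_b = distance_transform_edt(valid_b)   # cells to the edge of footprint B
    merged = np.where(valid_a, dem_a, dem_b)   # start from whichever DEM is valid
    overlap = valid_a & valid_b
    w = dist_a[overlap] / (dist_a[overlap] + dist_b[overlap])
    merged[overlap] = w * dem_a[overlap] + (1.0 - w) * dem_b[overlap]
    return merged
```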

  19. Incorporating DEM Uncertainty in Coastal Inundation Mapping

    PubMed Central

    Leon, Javier X.; Heuvelink, Gerard B. M.; Phinn, Stuart R.

    2014-01-01

    Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial resolution DEMs for mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation to a scenario combining a 1 in 100 year storm event over a 1 m SLR was calculated by counting the proportion of times from the 1,000 simulations that a location was inundated. This probabilistic approach can be used in a risk-aversive decision making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% probability exceedance, the inundated area was approximately 11% larger than mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey
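
    The probabilistic step can be sketched as a loop that adds a simulated error field to the DEM, applies the bathtub threshold and counts how often each cell floods. For brevity the error fields below are smoothed Gaussian noise and the hydrologic-connectivity check is omitted, whereas the study uses regression-kriging with sequential Gaussian simulation and a hydrologically correct bathtub; all numbers are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inundation_probability(dem, water_level, n_sim=1000, error_sd=0.15,
                           corr_cells=5, seed=0):
    """Probability that each cell lies below the water level, given spatially
    correlated DEM errors (smoothed Gaussian noise here; the paper uses
    regression-kriging / sequential Gaussian simulation)."""
    rng = np.random.default_rng(seed)
    count = np.zeros(dem.shape)
    for _ in range(n_sim):
        noise = gaussian_filter(rng.normal(size=dem.shape), corr_cells)
        noise *= error_sd / noise.std()          # rescale to the target error SD
        count += (dem + noise) <= water_level    # simple bathtub threshold
    return count / n_sim

# Hypothetical coastal strip: 1 m SLR plus a storm surge gives a 2.5 m level.
y, x = np.mgrid[0:200, 0:200]
dem = 0.02 * x + 0.1                             # gentle slope away from the sea
prob = inundation_probability(dem, water_level=2.5, n_sim=200)
print((prob >= 0.01).sum(), (prob >= 0.5).sum())  # cells flooded at 1% vs 50% probability
```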

  20. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration's use in evaluating safety plans submitted by the railroad industry.

  1. Effect of DEM mesh size on AnnAGNPS simulation and slope correction.

    PubMed

    Wang, Xiaoyan; Lin, Q

    2011-08-01

    The objective of this paper is to study the impact of the mesh size of the digital elevation model (DEM) on terrain attributes within an Annualized AGricultural NonPoint Source pollution (AnnAGNPS) model simulation at the watershed scale, and to provide a correction of slope gradient for low resolution DEMs. The effect of different grid sizes was examined by comparing eight DEMs (30, 40, 50, 60, 70, 80, 90, and 100 m), and the accuracy of the AnnAGNPS simulation of runoff, sediment, and nutrient loads was evaluated. The results are as follows: (1) Runoff does not vary much with decreasing DEM resolution, whereas soil erosion and total nitrogen (TN) load change prominently. There is little effect on the runoff simulation of AnnAGNPS when the amended slope from an adjusted 50 m DEM is used. (2) A decrease of sediment yield and TN load is observed as the DEM mesh size increases from 30 to 60 m, with a slight further decrease of sediment and TN load for DEM mesh sizes larger than 60 m. There is a similar trend for total phosphorus (TP), but with a smaller range of variation. With the amended slope, the simulated sediment, TN, and TP loads increase, sediment by up to 1.75 times compared to the model using the unadjusted 50 m DEM. Overall, the amended simulation still differs considerably from the results using the 30 m DEM, and AnnAGNPS is less reliable for sediment loading prediction in a small hilly watershed. (3) The resolution of the DEM has a significant impact on slope gradient. The average, minimum, and maximum slope from the various DEMs decrease markedly as DEM precision decreases. For slope grades of 0∼15°, the slopes from lower resolution DEMs are generally larger than those from higher resolution DEMs, but for grades above 15°, the slopes from lower resolution DEMs are generally smaller. It is therefore necessary to adjust the slope with a fitting equation. A cubic model is used for correction of slope gradient from lower resolution to
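
    The slope correction mentioned at the end can be sketched as fitting a cubic polynomial that maps coarse-DEM slope to reference (30 m) slope and applying it to the coarse slopes. The paired samples below are synthetic placeholders standing in for co-located slope values from the two DEMs.

```python
import numpy as np

# Hypothetical paired samples: slope of the same cells from a 100 m DEM (x)
# and from the 30 m reference DEM (y), in degrees.
rng = np.random.default_rng(5)
slope_coarse = rng.uniform(0.0, 30.0, 2000)
slope_fine = slope_coarse * 1.15 + 0.002 * slope_coarse**2 + rng.normal(0, 1.0, 2000)

# Fit a cubic correction model y = a3*x^3 + a2*x^2 + a1*x + a0.
coeffs = np.polyfit(slope_coarse, slope_fine, deg=3)
correct = np.poly1d(coeffs)

# Apply the correction to slopes derived from the coarse DEM.
adjusted = correct(slope_coarse)
print(coeffs)
```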

  2. DEM simulation of granular flow in a Couette device

    NASA Astrophysics Data System (ADS)

    Vidyapati, Vidyapati; Kheripour Langrudi, M.; Tardos, Gabriel; Sun, Jin; Sundaresan, Sankaran; Subramaniam, Shankar

    2009-11-01

    We study the shear motion of granular material in an annular shear cell operated in batch and continuous modes. In order to quantitatively simulate the shear behavior of granular material composed of spherical grains, a 3D discrete element method (DEM) is used. The ultimate goal of the present work is to compare DEM results for the normal and shear stresses in stationary and moving granular beds confined in a Couette device with experimental results. The DEM captures the experimentally observed transition from the quasi-static regime (in batch mode operation) to the rapid flow regime (in continuous mode operation) of granular flow. Although there are quantitative differences between the DEM model predictions and the experiments, the qualitative features are reproduced well. It is observed (both in experiments and in simulations) that the intermediate regime is broad enough to require a critical assessment of continuum models for granular flows.

  3. Evaluation of turbulence mitigation methods

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; Huebner, Claudia S.; Dijk, Judith; Schutte, Klamer; Schwering, Piet B. W.

    2014-05-01

    Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. Many different methods exist to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium-turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation and deconvolution. The second method consists of local motion compensation, fore-/background segmentation and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is done on some representative sequences captured during a NATO SET 165 trial in Dayton. The amounts of blurring and tilt in the imagery appear to be relevant measures for such an evaluation. It is shown that both methods improve the imagery by reducing the blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented that show that turbulence mitigation using the first method improves the recognition range by up to 25% for an operational optical system.

  4. Radar and Lidar Radar DEM

    NASA Technical Reports Server (NTRS)

    Liskovich, Diana; Simard, Marc

    2011-01-01

    Using radar and lidar data, the aim is to improve 3D rendering of terrain, including digital elevation models (DEM) and estimates of vegetation height and biomass in a variety of forest types and terrains. The 3D mapping of vegetation structure and the analysis are useful to determine the role of forest in climate change (carbon cycle), in providing habitat and as a provider of socio-economic services. This in turn will lead to potential for development of more effective land-use management. The first part of the project was to characterize the Shuttle Radar Topography Mission DEM error with respect to ICESat/GLAS point estimates of elevation. We investigated potential trends with latitude, canopy height, signal to noise ratio (SNR), number of LiDAR waveform peaks, and maximum peak width. Scatter plots were produced for each variable and were fitted with 1st and 2nd degree polynomials. Higher order trends were visually inspected through filtering with a mean and median filter. We also assessed trends in the DEM error variance. Finally, a map showing how DEM error was geographically distributed globally was created.

  5. DEM Particle Fracture Model

    SciTech Connect

    Zhang, Boning; Herbold, Eric B.; Homel, Michael A.; Regueiro, Richard A.

    2015-12-01

    An adaptive particle fracture model within a poly-ellipsoidal Discrete Element Method is developed. A poly-ellipsoidal particle breaks into several sub-poly-ellipsoids according to the Hoek-Brown fracture criterion, based on the continuum stress and the maximum tensile stress in contacts. Weibull theory is also introduced to account for the statistics and size effects of particle strength. Finally, a high strain-rate split Hopkinson pressure bar experiment on silica sand is simulated using this newly developed model. Comparisons with experiments show that the particle fracture model can capture the mechanical behavior of this experiment very well, both in the stress-strain response and in the particle size redistribution. The effects of density and packing of the samples are also studied in numerical examples.
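
    The Weibull part can be sketched by sampling a strength for each particle from a Weibull distribution whose scale decreases with particle volume, which is the usual statistical size effect. The parameters below are illustrative placeholders, not values calibrated in the paper.

```python
import numpy as np

def weibull_strengths(volumes, sigma0, v0, m, rng=None):
    """Sample a strength for each particle from a Weibull distribution with
    modulus m, scaled so that larger particles are statistically weaker:
    P_f(sigma) = 1 - exp(-(V / V0) * (sigma / sigma0)**m)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=len(volumes))
    size_factor = (v0 / np.asarray(volumes)) ** (1.0 / m)
    return sigma0 * size_factor * (-np.log(1.0 - u)) ** (1.0 / m)

# Hypothetical silica-sand-like parameters: 0.3 mm reference grain, m = 3.
rng = np.random.default_rng(7)
diameters = rng.uniform(0.2e-3, 0.6e-3, 10000)       # particle diameters (m)
volumes = np.pi / 6.0 * diameters**3
v0 = np.pi / 6.0 * (0.3e-3)**3
strengths = weibull_strengths(volumes, sigma0=100e6, v0=v0, m=3.0, rng=rng)
print(strengths.mean() / 1e6, "MPa mean strength")
```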

  6. Impacts of DEM uncertainties on critical source areas identification for non-point source pollution control based on SWAT model

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Dong, Guangxia; Wang, Qingrui; Liu, Lumeng; Yu, Wenwen; Men, Cong; Liu, Ruimin

    2016-09-01

    The impacts of different digital elevation model (DEM) resolutions, sources and resampling techniques on nutrient simulations using the Soil and Water Assessment Tool (SWAT) model have not been well studied. The objective of this study was to evaluate the sensitivity of DEM resolutions (from 30 m to 1000 m), sources (ASTER GDEM2, SRTM and Topo-DEM) and resampling techniques (nearest neighbor, bilinear interpolation, cubic convolution and majority) for the identification of non-point source (NPS) critical source areas (CSAs) based on nutrient loads using the SWAT model. The Xiangxi River, one of the main tributaries of the Three Gorges Reservoir in China, was selected as the study area. The following findings were obtained: (1) Elevation and slope extracted from the DEMs were more sensitive to DEM resolution changes; compared with the results of the 30 m DEM, the 1000 m DEM underestimated the elevation and slope by 104 m and 41.57°, respectively. (2) The numbers of subwatersheds and hydrologic response units (HRUs) were considerably influenced by DEM resolution, but the total nitrogen (TN) and total phosphorus (TP) loads of each subwatershed showed higher correlations with the different DEM sources. (3) DEM resolutions and sources had larger effects on CSA identification, and TN and TP CSAs showed different responses to DEM uncertainties: TN CSAs were more sensitive to resolution changes, exhibiting six distribution patterns across all DEM resolutions, whereas TP CSAs were sensitive to source and resampling technique changes, exhibiting three distribution patterns for DEM sources and two for DEM resampling techniques. DEM resolution and source are the two most sensitive SWAT model DEM parameters and must be considered when nutrient CSAs are identified.

  7. Processing, validating, and comparing DEMs for geomorphic application on the Puna de Atacama Plateau, northwest Argentina

    NASA Astrophysics Data System (ADS)

    Purinton, Benjamin; Bookhagen, Bodo

    2016-04-01

    This study analyzes multiple topographic datasets derived by various remote-sensing methods for the Pocitos Basin of the central Puna Plateau in northwest Argentina, at the border with Chile. Here, the arid climate, clear atmospheric conditions and lack of vegetation provide ideal conditions for remote sensing and Digital Elevation Model (DEM) comparison. We compare the following freely available DEMs: SRTM-X (spatial resolution of ~30 m), SRTM-C v4.1 (90 m), and ASTER GDEM2 (30 m). Additional DEMs for comparison are generated from optical and radar datasets acquired freely (ASTER Level 1B stereo pairs and Sentinel-1A radar), through research agreements (RapidEye Level 1B scenes, ALOS radar, and ENVISAT radar), and through commercial sources (TerraSAR-X / TanDEM-X radar). DEMs from the ASTER (spatial resolution of 15 m) and RapidEye (~5-10 m) optical datasets are produced by standard photogrammetric techniques and have been post-processed for validation and alignment purposes. Because RapidEye scenes are captured at a low incidence angle (<20°) and stereo pairs are unavailable, merging and averaging methods for two to four overlapping scenes are explored for effective DEM generation. Sentinel-1A, TerraSAR-X / TanDEM-X, ALOS, and ENVISAT radar data are processed through interferometry, resulting in DEMs with spatial resolutions ranging from 5 to 30 meters. The SRTM-X dataset serves as a control in the creation of the further DEMs, as it is widely used in the geosciences and represents the highest-quality DEM currently available. All DEMs are validated against over 400,000 differential GPS (dGPS) measurements gathered during four field campaigns in 2012 and 2014 to 2016. Of these points, more than 250,000 lie within the Pocitos Basin, with average vertical and horizontal accuracies of 0.95 m and 0.69 m, respectively. Dataset accuracy is judged by the lowest standard deviations of elevation compared with the dGPS data and with the SRTM-X control DEM. Of particular interest in

  8. Generating DEM from LIDAR data - comparison of available software tools

    NASA Astrophysics Data System (ADS)

    Korzeniowska, K.; Lacka, M.

    2011-12-01

    In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods motivated the aim of this study: to assess the algorithms available in various software tools used to classify LIDAR "point cloud" data, through a careful examination of the Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study, each covering 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset generated using manual methods to eliminate non-ground points. The surfaces were analysed using raster analysis: minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error (RMSE). Differences between the DEMs were also examined visually using transects along the grid axes in the test sites.
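
    The raster comparison itself reduces to per-cell differences between the reference DEM and each tool-generated DEM, summarised by the minimum, maximum, mean and RMSE over valid cells, as in the small sketch below with placeholder arrays.

```python
import numpy as np

def dem_difference_stats(reference, candidate):
    """Min, max, mean difference and RMSE between two co-registered DEM
    rasters of identical shape, ignoring NoData (NaN) cells."""
    diff = candidate - reference
    valid = diff[~np.isnan(diff)]
    return {
        "min": float(valid.min()),
        "max": float(valid.max()),
        "mean": float(valid.mean()),
        "rmse": float(np.sqrt(np.mean(valid**2))),
    }

# Hypothetical example: compare one tool's ground-classified DEM to the reference.
rng = np.random.default_rng(6)
reference = rng.normal(800.0, 30.0, (500, 500))
candidate = reference + rng.normal(0.1, 0.4, reference.shape)
print(dem_difference_stats(reference, candidate))
```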

  9. Monte Carlo Markov chain DEM reconstruction of isothermal plasmas

    NASA Astrophysics Data System (ADS)

    Landi, E.; Reale, F.; Testa, P.

    2012-02-01

    Context. Recent studies carried out with SOHO and Hinode high-resolution spectrometers have shown that the plasma in the off-disk solar corona is close to isothermal. If confirmed, these findings may have significant consequences for theoretical models of coronal heating. However, these studies have been carried out with diagnostic techniques whose ability to reconstruct the plasma distribution with temperature has not been thoroughly tested. Aims: In this paper, we carry out tests on the Monte Carlo Markov chain (MCMC) technique with the aim of determining: 1) its ability to retrieve isothermal plasmas from a set of spectral line intensities, with and without random noise; 2) to what extent it can discriminate between an isothermal solution and a narrow multithermal distribution; and 3) how well it can detect multiple isothermal components along the line of sight. We also test the effects of 4) atomic data uncertainties on the results, and 5) the number of ions whose lines are available for the DEM reconstruction. Methods: We first use the CHIANTI database to calculate synthetic spectra from different thermal distributions: single isothermal plasmas, multithermal plasmas made of multiple isothermal components, and multithermal plasmas with a Gaussian DEM distribution with variable width. We then apply the MCMC technique to each of these synthetic spectra, so that the ability of the MCMC technique to reconstruct the original thermal distribution can be evaluated. Next, we add random noise to the synthetic spectra and repeat the exercise, in order to determine the effects of random errors on the results. We also repeat the exercise using a different set of atomic data from those used to calculate the synthetic line intensities, to understand the robustness of the results against atomic physics uncertainties. The size of the temperature bin of the MCMC reconstruction is varied in all cases, in order to determine the optimal width. Results: We find that the MCMC

  10. Visualising DEM-related flood-map uncertainties using a disparity-distance equation algorithm

    NASA Astrophysics Data System (ADS)

    Brandt, S. Anders; Lim, Nancy J.

    2016-05-01

    The apparent absoluteness of information presented by crisp-delineated flood boundaries can lead to misconceptions among planners about the inherent uncertainties associated with generated flood maps. Even maps based on hydraulic modelling using the highest-resolution digital elevation models (DEMs), and calibrated with optimal Manning's roughness (n) coefficients, are susceptible to errors when compared to actual flood boundaries, specifically in flat areas. Therefore, the inaccuracies in inundation extents, brought about by the characteristics of the slope perpendicular to the flow direction of the river, have to be accounted for. Instead of using the typical Monte Carlo simulation and probabilistic methods for uncertainty quantification, an empirically based disparity-distance equation that considers the effects of both the DEM resolution and slope was used to create prediction-uncertainty zones around the resulting inundation extents of a one-dimensional (1-D) hydraulic model. The equation was originally derived for the Eskilstuna River, where flood maps based on DEM data of different resolutions were evaluated for the slope-disparity relationship. To assess whether the equation is applicable to another river with different characteristics, modelled inundation extents from the Testebo River were utilised and tested with the equation. By using the cross-sectional locations, water surface elevations, and DEM, uncertainty zones around the original inundation boundary line can be produced for different confidences. The results show that (1) the proposed method is useful both for estimating and directly visualising model inaccuracies caused by the combined effects of slope and DEM resolution, and (2) the DEM-related uncertainties alone do not account for the total inaccuracy of the derived flood map. Decision-makers can apply it to already existing flood maps, thereby recapitulating and re-analysing the inundation boundaries and the areas that are uncertain

  11. The Importance of Precise Digital Elevation Models (DEM) in Modelling Floods

    NASA Astrophysics Data System (ADS)

    Demir, Gokben; Akyurek, Zuhal

    2016-04-01

    Digital Elevation Models (DEMs) are important topographic inputs for the accurate modelling of floodplain hydrodynamics. Floodplains play a key role as natural retarding pools which attenuate flood waves and suppress flood peaks. GPS, LIDAR and bathymetric surveys are well-known methods of acquiring topographic data. Obtaining topographic data through surveying is not only time consuming and expensive but sometimes impossible for remote areas. This study aims to show the importance of accurately modelled topography for flood modelling, using the Samsun-Terme river in the Black Sea region of Turkey. One DEM is obtained from point observations retrieved from 1/5000-scale orthophotos and 1/1000-scale point elevation data from field surveys at cross-sections; the river banks are corrected using the orthophotos and elevation values. This DEM is named the scaled DEM. The other DEM is obtained from bathymetric surveys: a total of 296,538 points and the left/right bank slopes were used to construct a DEM with 1 m spatial resolution, named the base DEM. The two DEMs were compared at 27 cross-sections. Between the two DEMs, the maximum difference at the thalweg of the river bed is 2 m and the minimum difference is 20 cm. The channel conveyance capacity in the base DEM is larger than in the scaled DEM, and the floodplain is modelled in more detail in the base DEM. MIKE21 with a flexible grid is used for two-dimensional shallow-water flow modelling. The models based on the two DEMs were calibrated for a flood event (July 9, 2012), with roughness as the calibration parameter. Comparing the input hydrograph at the upstream end of the river with the output hydrograph at the downstream end, the attenuation is 91% and 84% for the base DEM and the scaled DEM, respectively. The time lag in the hydrographs, 3 hours, does not differ between the two DEMs. Maximum flood extents differ for the two DEMs
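    For illustration, the following sketch computes peak attenuation and lag time from an upstream and a downstream hydrograph, with attenuation taken as the fractional reduction of the peak discharge. The hydrograph series and this exact definition are assumptions for demonstration, not the study's data.

```python
import numpy as np

# Hedged sketch: peak attenuation and lag time between an upstream and a
# downstream hydrograph. The Gaussian-shaped series below are made-up placeholders.

t = np.arange(0, 48, 1.0)                                   # hours
q_in = 50 + 400 * np.exp(-0.5 * ((t - 12) / 3.0) ** 2)      # upstream hydrograph (m^3/s)
q_out = 50 + 60 * np.exp(-0.5 * ((t - 15) / 6.0) ** 2)      # downstream hydrograph (m^3/s)

peak_in, peak_out = q_in.max(), q_out.max()
attenuation = (peak_in - peak_out) / peak_in                # fraction of the peak lost in the floodplain
lag_hours = t[q_out.argmax()] - t[q_in.argmax()]

print(f"peak attenuation: {attenuation:.0%}, lag: {lag_hours:.0f} h")
```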

  12. Pragmatism, Evidence, and Mixed Methods Evaluation

    ERIC Educational Resources Information Center

    Hall, Jori N.

    2013-01-01

    Mixed methods evaluation has a long-standing history of enhancing the credibility of evaluation findings. However, using mixed methods in a utilitarian way implicitly emphasizes convenience over engaging with its philosophical underpinnings (Denscombe, 2008). Because of this, some mixed methods evaluators and social science researchers have been…

  13. A photogrammetric DEM of Greenland based on 1978-1987 aerial photos: validation and integration with laser altimetry and satellite-derived DEMs

    NASA Astrophysics Data System (ADS)

    Korsgaard, N. J.; Kjaer, K. H.; Nuth, C.; Khan, S. A.

    2014-12-01

    Here we present a DEM of Greenland covering all ice-free terrain and the margins of the GrIS and local glaciers and ice caps. The DEM is based on the 3534 photos used in the aero-triangulation, which were recorded by the Danish Geodata Agency (then the Geodetic Institute) in survey campaigns spanning the period 1978-1987. The GrIS is covered tens of kilometers into the interior due to the large footprints of the photos (30 x 30 km) and the control provided by the aero-triangulation. Thus, the data are ideal for providing information for analysis of ice marginal elevation change and also control for satellite-derived DEMs. The results of the validation, error assessments and predicted uncertainties are presented. We test the DEM using Airborne Topographic Mapper (IceBridge ATM) data as reference; evaluate the a posteriori covariance matrix from the aero-triangulation; and co-register DEM blocks of 50 x 50 km to ICESat laser altimetry in order to evaluate the coherency. We complement the aero-photogrammetric DEM with modern laser altimetry and DEMs derived from stereoscopic satellite imagery (AST14DMO) to examine the mass variability of the Northeast Greenland Ice Stream (NEGIS). Our analysis suggests that dynamically-induced mass loss started around 2003 and continued throughout 2014.

  14. High-precise DEM Generation Using Envisat/ERS-2 Cross-interferometry

    NASA Astrophysics Data System (ADS)

    Lee, W.; Jung, H.; Lu, Z.; Zhang, L.

    2010-12-01

    Cross-interferometric synthetic aperture radar (CInSAR) processing of ERS-2 and Envisat images is capable of generating a submeter-accuracy digital elevation model (DEM). However, it is very difficult to produce a high-quality CInSAR-derived DEM because of the difference in azimuth and range pixel size between ERS-2 and Envisat images as well as the small height ambiguity of the CInSAR interferogram. In this study, we have proposed an efficient method to overcome these problems, produced a high-quality DEM over northern Alaska, and assessed the accuracy of the CInSAR-derived DEM against an airborne InSAR-derived DEM from the U.S. Geological Survey with a spatial resolution of 5 meters. In the proposed method, azimuth common-band filtering in the radar raw data processing and DEM-assisted SAR coregistration are applied to mitigate the misregistration caused by the difference in azimuth and range pixel size and by the large baseline, and a differential SAR interferogram (DInSAR) created using a low-quality DEM is used to reduce the unwrapping errors caused by the high fringe rate of the CInSAR interferogram. The accuracy assessment shows that, in flat areas, the precision of the CInSAR-derived DEM was approximately 4.2 m in the horizontal direction and 70 cm in the vertical direction, and the ground resolution estimated by wavenumber analysis was about 15 m. Most of the remaining errors occurred around water bodies (such as lakes); note also that the acquisition dates of the airborne DEM (July 2002) and the CInSAR DEM (January 2008) differ. Over land areas away from water, the vertical accuracy improves to the submeter level. Our results indicate that a high-precision DEM with submeter accuracy can be generated by the proposed method.
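    The small height ambiguity mentioned above can be illustrated with the standard repeat-pass height-of-ambiguity relation. The sketch below evaluates it for a few assumed geometries; the wavelength, slant range, incidence angle and baselines are illustrative values, not parameters of this study.

```python
import math

# Hedged sketch: the standard repeat-pass height-of-ambiguity relation,
# h_amb = (lambda * R * sin(theta)) / (2 * B_perp),
# which illustrates why a large perpendicular baseline yields a very small
# height ambiguity and hence a dense fringe pattern. All numbers are illustrative.

wavelength = 0.056               # C-band wavelength (m), approximate
slant_range = 850e3              # slant range R (m), assumed
incidence = math.radians(23.0)   # incidence angle, assumed

for b_perp in (150.0, 1000.0, 2000.0):   # perpendicular baseline (m)
    h_amb = wavelength * slant_range * math.sin(incidence) / (2.0 * b_perp)
    print(f"B_perp = {b_perp:6.0f} m  ->  height ambiguity ~ {h_amb:6.1f} m")
```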

  15. A New Beginning after the Second World War

    NASA Astrophysics Data System (ADS)

    Strecker, Heinrich; Bassenge-Strecker, Rosemarie

    This chapter first describes the starting situation for statistics in Germany after the Second World War: the statistical services in the occupation zones partly had to be built up from scratch, and the teaching of statistics at the universities had to be restarted. In this situation the President of the Bavarian State Statistical Office, Karl Wagner, energetically supported by Gerhard Fürst, the later President of the Federal Statistical Office, took the initiative to re-found the German Statistical Society (Deutsche Statistische Gesellschaft, DStatG). The founding assembly in Munich in 1948 became a milestone in the history of the DStatG. The aim was to encourage all statisticians to cooperate, to bring their qualifications up to the international level, and to promote the application of newer statistical methods in practice. There followed 24 years of fruitful work under Karl Wagner (1948-1960) and Gerhard Fürst (1960-1972). The contribution outlines the Statistical Weeks, the work of the committees and the publications of this period.

  16. Evaluation of DNA and RNA extraction methods.

    PubMed

    Edwin Shiaw, C S; Shiran, M S; Cheah, Y K; Tan, G C; Sabariah, A R

    2010-06-01

    This study was done to evaluate various DNA and RNA extraction methods for archival FFPE tissues. A total of 30 FFPE blocks from 2004 to 2006 were assessed with each modified and adapted method. Extraction protocols evaluated include a modified enzymatic extraction method (Method A), a Chelex-100 extraction method (Method B), heat-induced retrieval in alkaline solution extraction methods (Methods C and D) and one commercial FFPE DNA extraction kit (Qiagen, Crawley, UK). For RNA extraction, two extraction protocols were evaluated: an enzymatic extraction method (Method 1) and a Chelex-100 RNA extraction method (Method 2). Results show that the modified enzymatic extraction method (Method A) is an efficient DNA extraction protocol, while for RNA extraction, the enzymatic method (Method 1) and the Chelex-100 RNA extraction method (Method 2) are equally efficient RNA extraction protocols.

  17. TecDEM: A MATLAB based toolbox for tectonic geomorphology, Part 1: Drainage network preprocessing and stream profile analysis

    NASA Astrophysics Data System (ADS)

    Shahzad, Faisal; Gloaguen, Richard

    2011-02-01

    We present TecDEM, a software shell implemented in MATLAB that applies tectonic geomorphology tasks to digital elevation models (DEMs). The first part of this paper series describes drainage partitioning schemes and stream profile analysis. The graphical user interface of TecDEM provides several options: determining flow directions, stream vectorization, watershed delineation, Strahler order labeling, stream profile generation, knickpoint selection, and calculation of concavity, steepness and Hack indices. The knickpoints along selected streams, the stream profile analysis, and the Hack index per stream profile are computed using a semi-automatic method. TecDEM was used to extract and investigate the stream profiles in the Kaghan Valley (Northern Pakistan). Our interpretations of the TecDEM results correlate well with previous tectonic evolution models for this region. TecDEM is designed to assist geoscientists in applying complex tectonic geomorphology tasks to global DEM data.
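    As an illustration of the flow-direction step such a toolbox performs, the following is a minimal D8 (steepest-descent) re-implementation in Python/NumPy. It is not TecDEM code (TecDEM is a MATLAB shell); the toy DEM is an assumed example.

```python
import numpy as np

# Minimal sketch of D8 flow-direction assignment: each interior cell drains to the
# neighbour with the steepest downslope gradient, or gets -1 if it is a pit/flat.

def d8_flow_direction(dem, cellsize=1.0):
    """Return, for each interior cell, the index (0-7) of the steepest downslope
    neighbour, or -1 for pits, flats and border cells."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    dists = [cellsize * (2 ** 0.5 if dr and dc else 1.0) for dr, dc in offsets]
    rows, cols = dem.shape
    fdir = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            slopes = [(dem[r, c] - dem[r + dr, c + dc]) / d
                      for (dr, dc), d in zip(offsets, dists)]
            best = int(np.argmax(slopes))
            if slopes[best] > 0:      # assign a direction only if a downslope neighbour exists
                fdir[r, c] = best
    return fdir

dem = np.array([[5., 5., 5., 5.],
                [5., 4., 3., 5.],
                [5., 3., 2., 5.],
                [5., 5., 1., 5.]])
print(d8_flow_direction(dem))
```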

  18. Shuttle radar DEM hydrological correction for erosion modelling in small catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Ben; Sidle, Roy; Bartley, Rebecca

    2016-04-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Catchment- and hillslope-scale runoff and sediment processes (i.e., patterns of overland flow, infiltration, subsurface stormflow and erosion) are all topographically mediated. In remote and data-scarce regions, high-resolution DEMs (LiDAR) are often not available, and moderate- to coarse-resolution digital elevation models (e.g., SRTM) have difficulty replicating detailed hydrological patterns, especially in relatively flat landscapes. Several surface reconditioning algorithms (e.g., smoothing) and "stream burning" techniques (e.g., Agree or ANUDEM), in conjunction with representation of the known stream networks, have been used to improve DEM performance in replicating known hydrology. Detailed stream network data are not available at regional and national scales, but can be derived at local scales from remotely sensed data. This research explores the implications of using high-resolution stream network data derived from Google Earth images for DEM hydrological correction, instead of coarse-resolution stream networks derived from topographic maps. The accuracy of the implemented method in producing hydrologically efficient DEMs was assessed by comparing the hydrological parameters derived from the modified DEMs and from limited high-resolution airborne LiDAR DEMs. The degree of modification is dominated by the method used and the availability of stream network data. Although stream burning techniques improve DEMs hydrologically, they alter DEM characteristics in ways that may affect catchment boundaries, stream position and length, as well as secondary terrain derivatives (e.g., slope, aspect). Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.
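    A minimal sketch of the stream-burning idea discussed above: DEM cells under a mapped stream network are lowered by a fixed depth so that derived flow paths follow the known channels. This is a crude stand-in for Agree/ANUDEM-style reconditioning; the burn depth, DEM and stream mask are illustrative.

```python
import numpy as np

# Hedged sketch of simple "stream burning": lower DEM cells under a mapped stream
# network by a fixed depth. Real reconditioning schemes (Agree, ANUDEM) use
# smoother, distance-dependent adjustments; this only shows the basic idea.

def burn_streams(dem, stream_mask, burn_depth=5.0):
    """Return a copy of the DEM with stream cells lowered by burn_depth (DEM units)."""
    burned = dem.astype(float).copy()
    burned[stream_mask] -= burn_depth
    return burned

dem = np.array([[10., 10., 10.],
                [10.,  9., 10.],
                [10.,  9.,  9.]])
streams = np.array([[False, False, False],
                    [False, True,  False],
                    [False, True,  True ]])
print(burn_streams(dem, streams, burn_depth=2.0))
```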

  19. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights some evaluation methods to assess the quality characteristics of the text entities. The main concepts used in building and evaluation processes of the text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The evaluation process for automatic evaluation of the text entities is…

  20. TanDEM-X high resolution DEMs and their applications to flow modeling

    NASA Astrophysics Data System (ADS)

    Wooten, Kelly M.

    Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high resolution DEMs on Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory to that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines in areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.

  1. Evaluation of Rhenium Joining Methods

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.; Morren, Sybil H.

    1995-01-01

    Coupons of rhenium-to-C-103 flat plate joints, formed by explosive and diffusion bonding, were evaluated in a series of shear tests. Shear testing was conducted on as-received, thermally-cycled (100 cycles, from 21 to 1100 C), and thermally-aged (3 and 6 hrs at 1100 C) joint coupons. Shear tests were also conducted on joint coupons with rhenium and/or C-103 electron beam welded tabs to simulate the joint's incorporation into a structure. Ultimate shear strength was used as a figure of merit to assess the effects of the thermal treatment and the electron beam welding of tabs on the joint coupons. All of the coupons survived thermal testing intact and without any visible degradation. Two different lots of as-received, explosively-bonded joint coupons had ultimate shear strengths of 281 and 310 MPa and 162 and 223 MPa, respectively. As-received, diffusion-bonded coupons had ultimate shear strengths of 199 and 348 MPa. For the most part, the thermally-treated and rhenium weld tab coupons had shear strengths slightly reduced or within the range of the as-received values. Coupons with C-103 weld tabs experienced a significant reduction in shear strength. The degradation of strength appeared to be the result of a poor heat sink provided during the electron beam welding. The C-103 base material could not dissipate heat as effectively as rhenium, leading to the formation of a brittle rhenium-niobium intermetallic.

  2. Methods of Generating and Evaluating Hypertext.

    ERIC Educational Resources Information Center

    Blustein, James; Staveley, Mark S.

    2001-01-01

    Focuses on methods of generating and evaluating hypertext. Highlights include historical landmarks; nonlinearity; literary hypertext; models of hypertext; manual, automatic, and semi-automatic generation of hypertext; mathematical models for hypertext evaluation, including computing coverage and correlation; human factors in evaluation; and…

  3. Evaluation Methods for Intelligent Tutoring Systems Revisited

    ERIC Educational Resources Information Center

    Greer, Jim; Mark, Mary

    2016-01-01

    The 1993 paper in "IJAIED" on evaluation methods for Intelligent Tutoring Systems (ITS) still holds up well today. Basic evaluation techniques described in that paper remain in use. Approaches such as kappa scores, simulated learners and learning curves are refinements on past evaluation techniques. New approaches have also arisen, in…

  4. Icesat Validation of Tandem-X I-Dems Over the UK

    NASA Astrophysics Data System (ADS)

    Feng, L.; Muller, J.-P.

    2016-06-01

    From the latest TanDEM-X mission (bistatic X-band interferometric SAR), globally consistent Digital Elevation Models (DEMs) will be available from 2017, but their accuracy has not yet been fully characterised. This paper presents the methods and implementation of statistical procedures for the validation of the vertical accuracy of TanDEM-X iDEMs at grid-spacings of approximately 12.5 m, 30 m and 90 m, based on processed ICESat data over the UK, in order to assess their potential extrapolation across the globe. The accuracy of the TanDEM-X iDEM over the UK, assessed against ICESat GLA14 elevation data, is -0.028 ± 3.654 m over England and Wales and 0.316 ± 5.286 m over Scotland for the 12.5 m product, -0.073 ± 6.575 m for 30 m, and 0.0225 ± 9.251 m at 90 m. Moreover, 90% of all results at the three resolutions of TanDEM-X iDEM data (expressed as a linear error at the 90% confidence level) are below 16.2 m. These validation results also indicate that derivative topographic parameters (slope, aspect and relief) have a strong effect on the vertical accuracy of the TanDEM-X iDEMs. In high-relief and steep-slope terrain, large errors and data voids are frequent, and their location is strongly influenced by topography, whilst in low- to medium-relief and low-slope sites, errors are smaller. ICESat-derived elevations are heavily influenced by surface slope within the 70 m footprint, and there are also slope-dependent errors in the TanDEM-X iDEMs.
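    The validation statistics quoted above (mean ± standard deviation of height differences and a linear error at 90% confidence) can be reproduced with a few lines of code. In the hedged sketch below, LE90 is taken simply as the 90th percentile of the absolute DEM-minus-ICESat differences; the authors' exact estimator and the synthetic error sample are assumptions.

```python
import numpy as np

# Hedged sketch of DEM validation statistics: mean bias, standard deviation and
# LE90 of height differences. The Gaussian error sample below is synthetic.

rng = np.random.default_rng(0)
dh = rng.normal(loc=-0.03, scale=3.7, size=10_000)   # synthetic (DEM - ICESat) differences (m)

mean_bias = dh.mean()
std_dev = dh.std(ddof=1)
le90 = np.percentile(np.abs(dh), 90)                 # empirical linear error at 90% confidence

print(f"bias = {mean_bias:+.3f} m, std = {std_dev:.3f} m, LE90 = {le90:.2f} m")
```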

  5. Evaluation Measures and Methods: Some Intersections.

    ERIC Educational Resources Information Center

    Elliott, John

    The literature is reviewed for four combinations of evaluation measures and methods: traditional methods with traditional measures (T-Meth/T-Mea), nontraditional methods with traditional measures (N-Meth/T-Mea), traditional measures with nontraditional measures (T-Meth/N-Mea), and nontraditional methods with nontraditional measures (N-Meth/N-Mea).…

  6. Urban DEM generation, analysis and enhancements using TanDEM-X

    NASA Astrophysics Data System (ADS)

    Rossi, Cristian; Gernhardt, Stefan

    2013-11-01

    This paper analyzes the potential of the TanDEM-X mission for the generation of urban Digital Elevation Models (DEMs). The high resolution of the sensors and the absence of temporal decorrelation are exploited. The interferometric chain and the problems encountered in correctly mapping urban areas are analyzed first. The operational Integrated TanDEM-X Processor (ITP) algorithms are taken as reference. The main ITP product is called the raw DEM. Whereas the ITP coregistration stage is demonstrated to be robust enough, large improvements in the raw DEM, such as a lower percentage of phase unwrapping errors, can be obtained by using adaptive fringe filters instead of the conventional ones in the interferogram generation stage. The shape of the raw DEM in the layover area is also shown and determined to be regular for buildings with vertical walls. Generally, in the presence of layover, the raw DEM exhibits a height ramp, resulting in a height underestimation for the affected structure. Examples provided confirm the theoretical background. The focus is centered on high-resolution DEMs produced using spotlight acquisitions. In particular, a raw DEM over Berlin (Germany) with a 2.5 m raster is generated and validated. For this purpose, ITP is modified in its interferogram generation stage by adopting the Intensity Driven Adaptive Neighbourhood (IDAN) algorithm. The height Root Mean Square Error (RMSE) between the raw DEM and a reference is about 8 m for the two classes defining the urban DEM: structures and non-structures. The result can be further improved for the structure class using a DEM generated with Persistent Scatterer Interferometry. A DEM fusion is thus proposed, and a drop of about 20% in the RMSE is reported.

  7. Mapping debris-flow hazard in Honolulu using a DEM

    USGS Publications Warehouse

    Ellen, Stephen D.; Mark, Robert K.

    1993-01-01

    A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.

  8. Bathymetric survey of water reservoirs in north-eastern Brazil based on TanDEM-X satellite data.

    PubMed

    Zhang, Shuping; Foerster, Saskia; Medeiros, Pedro; de Araújo, José Carlos; Motagh, Mahdi; Waske, Bjoern

    2016-11-15

    Water scarcity in the dry season is a vital problem in dryland regions such as northeastern Brazil. Water supplies in these areas often come from numerous reservoirs of various sizes. However, inventory data for these reservoirs is often limited due to the expense and time required for their acquisition via field surveys, particularly in remote areas. Remote sensing techniques provide a valuable alternative to conventional reservoir bathymetric surveys for water resource management. In this study, single-pass TanDEM-X data acquired in bistatic mode were used to generate digital elevation models (DEMs) in the Madalena catchment, northeastern Brazil. Validation with differential global positioning system (DGPS) data from field measurements indicated an absolute elevation accuracy of approximately 1 m for the TanDEM-X derived DEMs (TDX DEMs). The DEMs derived from TanDEM-X data acquired at low water levels show significant advantages over bathymetric maps derived from field surveys, particularly with regard to coverage, evenly distributed measurements and replication of reservoir shape. Furthermore, by mapping the dry reservoir bottoms with TanDEM-X data, TDX DEMs are free of emergent and submerged macrophytes, independent of water depth (e.g. >10 m), water quality and even weather conditions. Thus, the method is superior to other existing bathymetric mapping approaches, particularly for inland water bodies. The proposed approach relies on (nearly) dry reservoir conditions at the times of image acquisition and is thus restricted to areas that show considerable water level variations. However, comparisons between the TDX DEM and the bathymetric map derived from field surveys show that the amount of water retained during the dry phase has only a marginal impact on the total water volume derived from the TDX DEM. Overall, DEMs generated from bistatic TanDEM-X data acquired in low-water periods constitute a useful and efficient data source for deriving reservoir bathymetry and show

  9. Application of the PROMETHEE technique to determine depression outlet location and flow direction in DEM

    NASA Astrophysics Data System (ADS)

    Chou, Tien-Yin; Lin, Wen-Tzu; Lin, Chao-Yuan; Chou, Wen-Chieh; Huang, Pi-Hui

    2004-02-01

    With the fast-growing progress of computer technologies, spatial information on watersheds such as flow direction, watershed boundaries and the drainage network can be automatically calculated or extracted from a digital elevation model (DEM). The persistent problem of depressions in DEMs is frequently encountered when extracting such terrain information. Several depression-filling methods have been proposed, but they are poorly suited to large flat areas. This study proposes a depression-watershed method coupled with the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) to determine the optimal outlet and calculate the flow direction in depressions. Three processing procedures are used to derive the depressionless flow direction: (1) calculating the incipient flow direction; (2) establishing the depression watershed by tracing the upstream drainage area and determining the depression outlet using PROMETHEE; (3) calculating the depressionless flow direction. The developed method was used to delineate the Shihmen Reservoir watershed located in Northern Taiwan. The results show that the depression-watershed method can effectively resolve shortcomings such as ambiguous depression outlets and looped flow directions between depressions. The suitability of the proposed approach was verified.
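    For readers unfamiliar with PROMETHEE, the sketch below computes PROMETHEE II net outranking flows for a few candidate outlet cells. The criteria, weights, scores and the simple "usual" preference function are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Hedged sketch of PROMETHEE II net outranking flows for ranking candidates.
# The "usual" preference function (1 if the difference is positive, else 0) is
# assumed; criteria, weights and scores below are illustrative only.

scores = np.array([            # rows: candidate outlet cells, cols: criteria (higher = better)
    [3.0, 0.8, 10.0],
    [2.5, 0.9, 12.0],
    [3.2, 0.6,  8.0],
])
weights = np.array([0.5, 0.3, 0.2])   # assumed criterion weights, summing to 1

n = scores.shape[0]
pi = np.zeros((n, n))                 # aggregated preference of candidate a over candidate b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        pref = (scores[a] > scores[b]).astype(float)   # usual-criterion preference per criterion
        pi[a, b] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (n - 1)   # how strongly each candidate outranks the others
phi_minus = pi.sum(axis=0) / (n - 1)  # how strongly it is outranked
net_flow = phi_plus - phi_minus
print("net flows:", net_flow, "-> best candidate index:", int(net_flow.argmax()))
```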

  10. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyse the model system on which the spatial weights depend. The negative-weight problem of DEM interpolation is addressed by building a maximum-entropy model and adding non-negativity and first- and second-order moment constraints, which eliminates the negative weights. The correctness and accuracy of the method were validated with a genetic algorithm implemented in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the maximum-entropy weights better reflect spatial relations, and the accuracy is superior to that of the latter two methods.

  11. Hair Evaluation Methods: Merits and Demerits

    PubMed Central

    Dhurat, Rachita; Saraogi, Punit

    2009-01-01

    Various methods are available for the evaluation (for diagnosis and/or quantification) of a patient presenting with hair loss. Hair evaluation methods are grouped into three main categories: non-invasive methods (e.g., questionnaire, daily hair counts, standardized wash test, 60-s hair count, global photographs, dermoscopy, hair weight, contrasting felt examination, phototrichogram, TrichoScan and polarizing and surface electron microscopy), semi-invasive methods (e.g., trichogram and unit area trichogram) and invasive methods (e.g., scalp biopsy). No single method is 'ideal' or universally feasible. However, when interpreted with caution, these are valuable tools for patient diagnosis and monitoring. Daily hair counts, the wash test, etc. are good methods for the primary evaluation of the patient and for obtaining an approximate assessment of the amount of shedding. Some methods, like global photography, form an important part of any hair clinic. Analytical methods like the phototrichogram are usually possible only in the setting of a clinical trial. Many of these methods (like the scalp biopsy) require expertise for both processing and interpretation. We reviewed the available literature in detail in light of the merits and demerits of each method. A plethora of newer methods is being introduced, which are relevant to the cosmetic industry and research. Such methods, as well as metabolic/hormonal evaluation, are not included in this review. PMID:20927232

  12. Electromagnetic Radiation. Information from Space.

    NASA Astrophysics Data System (ADS)

    Schäfer, H.

    Contents: Information from space. New and future instruments. Important and interesting aspects of positional astronomy. The brightness of stars and other astronomical objects. Spectroscopy and spectral analysis. Observations outside the optical range.

  13. Creating High Quality DEMs of Large Scale Fluvial Environments Using Structure-from-Motion

    NASA Astrophysics Data System (ADS)

    Javernick, L. A.; Brasington, J.; Caruso, B. S.; Hicks, M.; Davies, T. R.

    2012-12-01

    During the past decade, advances in survey and sensor technology have generated new opportunities to investigate the structure and dynamics of fluvial systems. Key geomatic technologies include the Global Positioning System (GPS), digital photogrammetry, LiDAR, and terrestrial laser scanning (TLS). Their application has resulted in a profound increase in the dimensionality of topographic surveys - from cross-sections to distributed 3d point clouds and digital elevation models (DEMs). Each of these technologies has been used successfully to derive high-quality DEMs of fluvial environments; however, they often require specialized and expensive equipment, such as a TLS or large-format camera, or bespoke platforms such as survey aircraft, and consequently make data acquisition prohibitively expensive or highly labour intensive, thus restricting the extent and frequency of surveys. Recently, advances in computer vision and image analysis have led to the development of a novel photogrammetric approach that is fully automated and suitable for use with simple compact (non-metric) cameras. In this paper, we evaluate a new photogrammetric method, Structure-from-Motion (SfM), and demonstrate how it can be used to generate DEMs of comparable quality to airborne LiDAR, using consumer-grade cameras at low cost. Using the SfM software PhotoScan (version 0.8.5), high-quality DEMs were produced for a 1.6 km reach and a 3.3 km reach of the braided Ahuriri River, New Zealand. Photographs used for DEM creation were acquired from a helicopter flying at 600 m and 800 m above ground level using a consumer-grade 10.1 megapixel, non-metric digital camera, resulting in object-space image resolutions of 0.12 m and 0.16 m respectively. Point clouds for the two study reaches were generated using 147 and 224 photographs respectively, and were extracted automatically in an arbitrary coordinate system; RTK-GPS located ground control points (GCPs) were used to define a 3d non

  14. Simulation method for evaluating progressive addition lenses.

    PubMed

    Qin, Linling; Qian, Lin; Yu, Jingchi

    2013-06-20

    Since progressive addition lenses (PALs) are currently the state of the art in multifocal correction for presbyopia, it is important to study methods for evaluating PALs. A non-optical simulation method used to accurately characterize PALs during the design and optimization process is proposed in this paper. It involves the direct calculation of each surface of the lens according to the lens heights of the front and rear surfaces. The validity of this simulation method for the evaluation of PALs is verified by good agreement with the Rotlex method. In particular, simulation with a "correction action" included in the design process is potentially a useful method, with advantages of time saving, convenience, and accuracy. Based on an eye-plus-lens model, which is established through an accurate ray-tracing calculation along the gaze direction, the method can be applied to evaluating wearer performance for the optimal design of more comfortable, satisfactory, and personalized PALs. PMID:23842170

  15. An Operator Method for Evaluating Laplace Transforms

    ERIC Educational Resources Information Center

    Lanoue, B. G.; Yurekli, O.

    2005-01-01

    This note discusses a simple operator technique based on the differentiation and shifting properties of the Laplace transform to find Laplace transforms for various elementary functions. The method is simpler than known integration techniques to evaluate Laplace transforms.
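    As one worked example of the kind of operator manipulation described (not necessarily the authors' own example), combining the elementary transform of the exponential with the differentiation-in-s property gives:

```latex
% Worked example: shifting plus differentiation properties of the Laplace transform.
\[
  \mathcal{L}\{e^{at}\} = \frac{1}{s-a},
  \qquad
  \mathcal{L}\{t\,f(t)\} = -\frac{d}{ds}F(s)
  \;\Longrightarrow\;
  \mathcal{L}\{t\,e^{at}\} = -\frac{d}{ds}\!\left(\frac{1}{s-a}\right) = \frac{1}{(s-a)^{2}} .
\]
```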

  16. Assessment of Uncertainty Propagation from DEM's on Small Scale Typologically-Differentiated Landslide Susceptibility in Romania

    NASA Astrophysics Data System (ADS)

    Cosmin Sandric, Ionut; Chitu, Zenaida; Jurchescu, Marta; Malet, Jean-Philippe; Ciprian Margarint, Mihai; Micu, Mihai

    2015-04-01

    An increasing number of free and open-access global digital elevation models has become available in the past 15 years, and these DEMs have been widely used for the assessment of landslide susceptibility at medium and small scales. Even though the global vertical and horizontal accuracies of each DEM are known, what is still unknown is the uncertainty that propagates from the first and second derivatives of the DEMs, such as slope gradient, into the final landslide susceptibility map. For the present study we focused on the assessment of the uncertainty propagated from the following digital elevation models: SRTM at 90 m spatial resolution, ASTERDEM at 30 m, EUDEM at 30 m and the latest release of SRTM at 30 m. From each DEM dataset the slope gradient was generated and used in the landslide susceptibility analysis. A restricted number of spatial predictors are used for the landslide susceptibility assessment, represented by lithology, land cover and slope, where slope is the only predictor that changes with each DEM. The study makes use of the first national landslide inventory (Micu et al., 2014), obtained by compiling literature data and personal or institutional landslide inventories. The landslide inventory contains more than 27,900 cases classified into three main categories: slides, flows and falls. The results present landslide susceptibility maps obtained from each DEM and from combinations of the DEM datasets. Maps of uncertainty propagation at country level, differentiated by topographic regions of Romania and by landslide typology (slides, flows and falls), are obtained for each DEM dataset and for their combinations. An objective evaluation of each DEM dataset and a final map of landslide susceptibility and the associated uncertainty are provided

  17. Evaluation of Sight, Sound, Symbol Instructional Method.

    ERIC Educational Resources Information Center

    Massarotti, Michael C.; Slaichert, William M.

    Evaluated was the Sight-Sound-Symbol (S-S-S) method of teaching basic reading skills with four groups of 16 trainable mentally retarded children. The method involved use of a musical keyboard to teach children to identify numbers, letters, colors, and shapes. Groups either received individual S-S-S instruction for 10 minutes daily, received S-S-S…

  18. Assessment and Evaluation Methods for Access Services

    ERIC Educational Resources Information Center

    Long, Dallas

    2014-01-01

    This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…

  19. Scenario-Based Validation of Moderate Resolution DEMs Freely Available for Complex Himalayan Terrain

    NASA Astrophysics Data System (ADS)

    Singh, Mritunjay Kumar; Gupta, R. D.; Snehmani; Bhardwaj, Anshuman; Ganju, Ashwagosha

    2016-02-01

    Accuracy of the Digital Elevation Model (DEM) affects the accuracy of various geoscience and environmental modelling results. This study evaluates the accuracies of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM Version-2 (GDEM V2), the Shuttle Radar Topography Mission (SRTM) X-band DEM and the NRSC Cartosat-1 DEM V1 (CartoDEM). A high-resolution (1 m) photogrammetric DEM (ADS80 DEM), having a high absolute accuracy [1.60 m linear error at 90% confidence (LE90)], resampled to a 30 m cell size, was used as reference. The overall root mean square error (RMSE) in vertical accuracy was 23, 73, and 166 m and the LE90 was 36, 75, and 256 m for the ASTER GDEM V2, SRTM X-band DEM and CartoDEM, respectively. A detailed error analysis was performed for individual as well as combined classes of aspect, slope, land cover and elevation zones for the study area. For the ASTER GDEM V2, forest areas with North-facing slopes (0°-5°) in the 4th elevation zone (3773-4369 m) showed a minimum LE90 of 0.99 m, and barren areas with East-facing slopes (>60°) in the 2nd elevation zone (2581-3177 m) showed a maximum LE90 of 166 m. For the SRTM DEM, forest pixels with South-East-facing slopes of 0°-5° in the 4th elevation zone showed the lowest LE90 of 0.33 m, and a maximum LE90 of 521 m was observed in the barren area with North-East-facing slopes (>60°) in the 4th elevation zone. In the case of the CartoDEM, snow pixels in the 2nd elevation zone with South-East-facing slopes of 5°-15° showed the lowest LE90 of 0.71 m, and a maximum LE90 of 1266 m was observed for snow pixels in the 3rd elevation zone (3177-3773 m) on South-facing slopes of 45°-60°. These results can be highly useful for researchers using DEM products in various modelling exercises.

  20. A description of rotations for DEM models of particle systems

    NASA Astrophysics Data System (ADS)

    Campello, Eduardo M. B.

    2015-06-01

    In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
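    The basic rotation update behind such a scheme can be sketched as follows: build a rotation matrix from an axis-angle vector with the classic Rodrigues formula and apply it to a particle-fixed vector. This uses the familiar axis-angle form for illustration only; the paper's specific Rodrigues rotation-vector parameterization and its incremental DEM update are not reproduced here.

```python
import numpy as np

# Hedged sketch: rotation matrix from an axis-angle vector via the classic
# Rodrigues formula, R = I + sin(t) K + (1 - cos(t)) K^2, applied to a
# particle-fixed vector. Illustrative only; not the authors' full scheme.

def rotation_matrix(rot_vec):
    """Rotation matrix for a rotation of |rot_vec| radians about rot_vec/|rot_vec|."""
    theta = np.linalg.norm(rot_vec)
    if theta < 1e-12:
        return np.eye(3)
    k = rot_vec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])          # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

rot_vec = np.array([0.0, 0.0, np.pi / 2])        # 90 degree spin about the z axis
body_vector = np.array([1.0, 0.0, 0.0])          # e.g. a contact-point offset in the particle frame
print(rotation_matrix(rot_vec) @ body_vector)    # approximately [0, 1, 0]
```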

  1. Influence of the external DEM on PS-InSAR processing and results on Northern Appennine slopes

    NASA Astrophysics Data System (ADS)

    Bayer, B.; Schmidt, D. A.; Simoni, A.

    2014-12-01

    We present an InSAR analysis of slow-moving landslides in the Northern Appennines, Italy, and assess the dependence of the results on the choice of DEM. In recent years, advanced processing techniques for synthetic aperture radar interferometry (InSAR) have been applied to measure slope movements. The persistent scatterer (PS-InSAR) approach is probably the most widely used, and some codes are now available in the public domain. The Stanford Method of Persistent Scatterers (StaMPS) has been successfully used to analyze landslide areas. One problematic step in the processing chain is the choice of an external DEM that is used to model and remove the topographic phase in a series of interferograms in order to obtain the phase contribution caused by surface deformation. The choice is not trivial, because the PS-InSAR results differ significantly in terms of PS identification, positioning, and the resulting deformation signal. We use four different DEMs to process a set of 18 ASAR (Envisat) scenes over a mountain area (~350 km2) of the Northern Appennines of Italy, using StaMPS. Slow-moving landslides control the evolution of the landscape and cover approximately 30% of the territory. Our focus in this presentation is to evaluate the influence of DEM resolution and accuracy by comparing PS-InSAR results. On an areal basis, we perform a statistical analysis of displacement time series to make the comparison. We also consider two case studies to illustrate the differences in terms of PS identification, number and estimated displacements. It is clearly shown that DEM accuracy positively influences the number of PS, while line-of-sight rates differ from case to case and can result in deformation signals that are difficult to interpret. We also take advantage of statistical tools to analyze the obtained time-series datasets for the whole study area. Results indicate differences in the style and amount of displacement that can be related to the accuracy of the employed DEM.

  2. Glacier Volume Change Estimation Using Time Series of Improved Aster Dems

    NASA Astrophysics Data System (ADS)

    Girod, Luc; Nuth, Christopher; Kääb, Andreas

    2016-06-01

    Volume change data is critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system on board the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivalled, though not fully exploited for change analysis due to a lack of data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficient (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modelling does not require ground control points and thus potentially allows for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15 m. Our method directly measures and accounts for the cross-track component of jitter, so that the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated, and the extracted elevations are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components so that the latter seems not to be

  3. Automatic Detection and Boundary Extraction of Lunar Craters Based on LOLA DEM Data

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ling, ZongCheng; Zhang, Jiang; Wu, ZhongChen

    2015-07-01

    Impact-induced circular structures, known as craters, are the most obvious geographic and geomorphic features on the Moon. Studies of lunar craters' patterns and spatial distributions play an important role in understanding the geologic processes of the Moon. In this paper, we propose a method based on digital elevation model (DEM) data from the Lunar Orbiter Laser Altimeter (LOLA) to detect lunar craters automatically. Firstly, the DEM data of the study areas are converted to a series of spatial fields having different scales, in which all overlapping depressions are detected in order (larger depressions first, then the smaller ones). Then, every depression's true boundary is calculated by Fourier expansion and shape parameters are computed. Finally, we recognize the craters from training sets manually and build a binary decision tree to automatically classify the identified depressions into craters and non-craters. In addition, our crater-detection method can provide a fast and reliable evaluation of the ages of lunar geologic units, which is of great significance in lunar stratigraphy studies as well as global geologic mapping.
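    The boundary-by-Fourier-expansion step can be illustrated by treating a closed boundary as a complex sequence and keeping only its low-order Fourier coefficients. In the sketch below, the boundary points, the number of harmonics and the normalization are illustrative choices rather than the paper's actual settings.

```python
import numpy as np

# Hedged sketch: smooth a closed boundary by a truncated Fourier expansion of the
# complex sequence x + iy. The noisy circular "rim" below is a made-up example.

def fourier_boundary(x, y, n_harmonics=8):
    """Reconstruct the boundary from its first n_harmonics Fourier coefficients."""
    z = x + 1j * y
    coeffs = np.fft.fft(z)
    keep = np.zeros_like(coeffs)
    keep[:n_harmonics + 1] = coeffs[:n_harmonics + 1]   # low positive frequencies (and DC term)
    keep[-n_harmonics:] = coeffs[-n_harmonics:]         # matching negative frequencies
    z_smooth = np.fft.ifft(keep)
    return z_smooth.real, z_smooth.imag

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
rng = np.random.default_rng(1)
r = 10.0 + 0.5 * rng.normal(size=t.size)                # noisy, roughly circular boundary
xs, ys = fourier_boundary(r * np.cos(t), r * np.sin(t), n_harmonics=4)
print(f"mean reconstructed radius ~ {np.hypot(xs, ys).mean():.2f}")
```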

  4. Evaluating methods for estimating existential risks.

    PubMed

    Tonn, Bruce; Stiefel, Dorian

    2013-10-01

    Researchers and commissions contend that the risk of human extinction is high, but none of these estimates have been based upon a rigorous methodology suitable for estimating existential risks. This article evaluates several methods that could be used to estimate the probability of human extinction. Traditional methods evaluated include: simple elicitation; whole evidence Bayesian; evidential reasoning using imprecise probabilities; and Bayesian networks. Three innovative methods are also considered: influence modeling based on environmental scans; simple elicitation using extinction scenarios as anchors; and computationally intensive possible-worlds modeling. Evaluation criteria include: level of effort required by the probability assessors; level of effort needed to implement the method; ability of each method to model the human extinction event; ability to incorporate scientific estimates of contributory events; transparency of the inputs and outputs; acceptability to the academic community (e.g., with respect to intellectual soundness, familiarity, verisimilitude); credibility and utility of the outputs of the method to the policy community; difficulty of communicating the method's processes and outputs to nonexperts; and accuracy in other contexts. The article concludes by recommending that researchers assess the risks of human extinction by combining these methods. PMID:23551083

  5. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ∼0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ∼2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ∼0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.
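    The simplest ingredient of such co-registration, removing the vertical offset between a DEM and ground-control heights over stable terrain, can be sketched in a few lines. The ASP workflow uses an iterative closest point (ICP) tool that also solves for horizontal shifts; the sketch below, with made-up elevations, only illustrates the vertical part.

```python
import numpy as np

# Hedged sketch: remove the median vertical offset between DEM heights and
# ground-control heights over stable terrain. This is only the vertical piece
# of co-registration, not the full ICP alignment used in the ASP workflow.

def coregister_vertical(dem_heights, control_heights):
    """Shift DEM heights so their median difference to the control heights is zero."""
    dz = np.median(dem_heights - control_heights)
    return dem_heights - dz, dz

rng = np.random.default_rng(3)
control = rng.uniform(100, 300, size=500)            # control elevations (m), synthetic
dem = control + 1.8 + rng.normal(0, 0.3, size=500)   # DEM with a 1.8 m bias plus noise
adjusted, applied_shift = coregister_vertical(dem, control)
print(f"removed vertical offset: {applied_shift:.2f} m")
```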

  6. Hydrologic validation of a structure-from-motion DEM derived from low-altitude UAV imagery

    NASA Astrophysics Data System (ADS)

    Steiner, Florian; Marzolff, Irene; d'Oleire-Oltmanns, Sebastian

    2015-04-01

    The increasing ease of use of current Unmanned Aerial Vehicles (UAVs) and 3D image processing software has spurred the number of applications relying on high-resolution topographic datasets. Of particular significance in this field is "structure from motion" (SfM), a photogrammetric technique used to generate low-cost digital elevation models (DEMs) for erosion budgeting, measuring of glaciers and lava flows, archaeological applications and others. It was originally designed to generate 3D models of buildings based on unordered collections of images and has become increasingly common in geoscience applications during the last few years. Several studies on the accuracy of this technique already exist, in which the SfM data are mostly compared with LiDAR-generated terrain data. The results are mainly positive, indicating that the technique is suitable for such applications. This work aims at validating very high resolution SfM DEMs with a different approach: not the original elevation data are validated, but data on terrain-related hydrological and geomorphometric parameters derived from the DEM. The study site chosen for this analysis is an abandoned agricultural field near the city of Taroudant, in the semi-arid southern part of Morocco. The site is characterized by aggressive rill and gully erosion and is - apart from sparsely scattered shrub cover - mainly featureless. An area of 5.7 ha, equipped with 30 high-precision ground control points (GCPs), was covered with an unmanned aerial vehicle (UAV) at two different flying heights (85 and 170 m). A selection of 160 images was used to generate several high-resolution DEMs (2 and 5 cm resolution) of the area using the fully automated SfM software AGISOFT PhotoScan. For comparison purposes, a conventional photogrammetry-based workflow using the Leica Photogrammetry Suite was used to generate a DEM with a resolution of 5 cm (LPS DEM). The evaluation is done by comparison of the SfM DEM with the derived orthoimages and the LPS DEM

  7. Graphical methods for evaluating covering arrays

    SciTech Connect

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2015-08-10

    Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, not that all combinations of levels appear a balanced number of times. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage in order to compare and select between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.
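    The coverage quantity that such graphics summarize can be computed directly: for every pair of columns, the fraction of possible level pairs that actually appears in the array. The small two-level array below is an illustrative example, not one of the designs from the paper.

```python
from itertools import combinations

import numpy as np

# Hedged sketch: pairwise (strength-2) coverage of a two-level array. For each
# pair of columns, report the fraction of the levels x levels combinations
# that occur at least once among the rows.

def pairwise_coverage(array, levels=2):
    """Return {(col_i, col_j): fraction of level pairs covered} for all column pairs."""
    coverage = {}
    n_pairs = levels * levels
    for i, j in combinations(range(array.shape[1]), 2):
        seen = {tuple(row) for row in array[:, [i, j]]}
        coverage[(i, j)] = len(seen) / n_pairs
    return coverage

array = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 1],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [1, 1, 1, 0]])
for cols, frac in pairwise_coverage(array).items():
    print(f"columns {cols}: {frac:.0%} of level pairs covered")
```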

  8. Development of a 'bare-earth' SRTM DEM product

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Paiva, Rodrigo; Durand, Michael; Alsdorf, Douglas; Bates, Paul

    2015-04-01

    We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hydraulic modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hydrodynamic modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction based on information about canopy height and density. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas, with the overall mean bias reduced by between 75 and 92%, from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33%, from 7.12 m to 4.80 m. As expected, improvements are greater in areas with denser vegetation. The final 'bare-earth' SRTM dataset is available at 3 arc-seconds with lower vertical height errors and less noise than the original SRTM product.
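    A hedged sketch of a spatially varying vegetation correction of the kind described is given below: a fraction of the local canopy height, scaled by canopy density, is subtracted from the SRTM elevations. The functional form and the penetration factor are illustrative assumptions, not the calibrated parameters of the published product.

```python
import numpy as np

# Hedged sketch of a spatially varying vegetation correction: subtract a fraction
# of the local canopy height, scaled by canopy density (tree-cover fraction).
# The functional form and the 0.6 penetration factor are illustrative assumptions.

def bare_earth(srtm, canopy_height, canopy_density, penetration_factor=0.6):
    """Remove an assumed vegetation bias from SRTM elevations (heights in metres)."""
    vegetation_bias = penetration_factor * canopy_height * canopy_density
    return srtm - vegetation_bias

srtm = np.array([[120.0, 122.0], [118.0, 125.0]])
canopy_height = np.array([[30.0, 25.0], [0.0, 35.0]])     # canopy height (m)
canopy_density = np.array([[0.9, 0.7], [0.0, 0.8]])       # fraction of tree cover
print(bare_earth(srtm, canopy_height, canopy_density))
```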

  9. Efficient parallel CFD-DEM simulations using OpenMP

    NASA Astrophysics Data System (ADS)

    Amritkar, Amit; Deb, Surya; Tafti, Danesh

    2014-01-01

    The paper describes parallelization strategies for the Discrete Element Method (DEM) used for simulating dense particulate systems coupled to Computational Fluid Dynamics (CFD). While the field equations of CFD are best parallelized by spatial domain decomposition techniques, the N-body particulate phase is best parallelized over the number of particles. When the two are coupled together, both modes are needed for efficient parallelization. It is shown that under these requirements, OpenMP thread-based parallelization has advantages over MPI processes. Two representative examples, fairly typical of dense fluid-particulate systems, are investigated, including the validation of the DEM-CFD and thermal-DEM implementations against experiments. Fluidized bed calculations are performed on beds with uniform particle loading, parallelized with MPI and OpenMP. It is shown that as the number of processing cores and the number of particles increase, the communication overhead of building ghost particle lists at processor boundaries dominates the time to solution, and OpenMP, which does not require this step, is about twice as fast as MPI. In rotary kiln heat transfer calculations, which are characterized by spatially non-uniform particle distributions, the low overhead of switching the parallelization mode in OpenMP eliminates the load imbalances but introduces increased overheads in fetching non-local data. In spite of this, it is shown that OpenMP is between 50% and 90% faster than MPI.

  10. Validation of DEM prediction for granular avalanches on irregular terrain

    NASA Astrophysics Data System (ADS)

    Mead, Stuart R.; Cleary, Paul W.

    2015-09-01

    Accurate numerical simulation can provide crucial information useful for a greater understanding of destructive granular mass movements such as rock avalanches, landslides, and pyroclastic flows. It enables more informed and relatively low cost investigation of significant risk factors, mitigation strategy effectiveness, and sensitivity to initial conditions, material, or soil properties. In this paper, a granular avalanche experiment from the literature is reanalyzed and used as a basis to assess the accuracy of discrete element method (DEM) predictions of avalanche flow. Discrete granular approaches such as DEM simulate the motion and collisions of individual particles and are useful for identifying and investigating the controlling processes within an avalanche. Using a superquadric shape representation, DEM simulations were found to accurately reproduce transient and static features of the avalanche. The effect of material properties on the shape of the avalanche deposit was investigated. The simulated avalanche deposits were found to be sensitive to particle shape and friction, with the particle shape causing the sensitivity to friction to vary. The importance of particle shape, coupled with effect on the sensitivity to friction, highlights the importance of quantifying and including particle shape effects in numerical modeling of granular avalanches.

  11. DEM processing for the analysis of hydraulic hazards

    NASA Astrophysics Data System (ADS)

    Dresen, M.

    2003-04-01

    The digital analysis of hydrological processes and hydraulic hazards requires a high accuracy of the topographic data that cannot be ensured by standard digital elevation models (DEMs). For this reason, terrain analysis and the analysis of topographical factors are highly significant for the modelling of hydrological processes. Most common GIS packages do not fulfil these requirements and do not allow detailed, process-oriented analysis. As a result, neither the estimation of the hazard potential nor the derivation of the possible effects of catastrophic events is possible. Improving DEM creation and extending the applicable methods and functionalities therefore have high priority in hydraulic hazard assessment. We can demonstrate that the quality of DEMs can be clearly improved with the help of different extensions and adaptations. The comparison of different flood events in Europe reveals the improved accuracy of the topographical factors and the derived hydrological parameters. In this way the simulation of hydrological processes and hydraulic hazards can be improved.

  12. Energy efficiency assessment methods and tools evaluation

    SciTech Connect

    McMordie, K.L.; Richman, E.E.; Keller, J.M.; Dixon, D.R.

    1994-08-01

    Many different methods of assessing the energy savings potential at federal installations and identifying attractive projects for capital investment have been used by the different federal agencies. These methods range from high-level estimating tools to detailed design tools, both manual and software assisted. They serve different purposes and provide results that are used in different parts of the project identification and implementation process. Seven different assessment methods are evaluated in this study. These methods were selected by the program managers at the DoD Energy Policy Office and the DOE Federal Energy Management Program (FEMP). Each of the methods was applied to similar buildings at Bolling Air Force Base (AFB), unless this was inappropriate or the method was designed to make an installation-wide analysis rather than focusing on particular buildings. Staff at Bolling AFB controlled the collection of data.

  13. Veterinary and human vaccine evaluation methods.

    PubMed

    Knight-Jones, T J D; Edmond, K; Gubbins, S; Paton, D J

    2014-06-01

    Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines.

  14. ArcGeomorphometry: A toolbox for geomorphometric characterisation of DEMs in the ArcGIS environment

    NASA Astrophysics Data System (ADS)

    Rigol-Sanchez, Juan P.; Stuart, Neil; Pulido-Bosch, Antonio

    2015-12-01

    A software tool is described for the extraction of geomorphometric land surface variables and features from Digital Elevation Models (DEMs). The ArcGeomorphometry Toolbox consists of a series of Python/Numpy processing functions, presented through an easy-to-use graphical menu for the widely used ArcGIS package. Although many GIS provide some operations for analysing DEMs, the methods are often only partially implemented and can be difficult to find and use effectively. Since the results of automated characterisation of landscapes from DEMs are influenced by the extent being considered, the resolution of the source DEM and the size of the kernel (analysis window) used for processing, we have developed a tool to allow GIS users to flexibly apply several multi-scale analysis methods to parameterise and classify a DEM into discrete land surface units. Users can control the threshold values for land surface classifications. The size of the processing kernel can be used to identify land surface features across a range of landscape scales. The pattern of land surface units from each attempt at classification is displayed immediately and can then be processed in the GIS alongside additional data that can assist with a visual assessment and comparison of a series of results. The functionality of the ArcGeomorphometry toolbox is described using an example DEM.
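
    As a minimal sketch of the kind of kernel-based land-surface parameter such a toolbox derives, the Python/Numpy snippet below computes a simple roughness measure (standard deviation of elevation in a moving window) and thresholds it into surface units. The window size, the roughness definition and the threshold are illustrative assumptions, not the toolbox's actual implementation.

    ```python
    import numpy as np

    def focal_roughness(dem, kernel=3):
        """Standard deviation of elevation in a square moving window.

        dem is a 2-D array of elevations; kernel is an odd window size.
        Edge cells are returned as NaN. Illustrative only, not the
        ArcGeomorphometry implementation.
        """
        pad = kernel // 2
        out = np.full(dem.shape, np.nan)
        for i in range(pad, dem.shape[0] - pad):
            for j in range(pad, dem.shape[1] - pad):
                window = dem[i - pad:i + pad + 1, j - pad:j + pad + 1]
                out[i, j] = window.std()
        return out

    # Classify a synthetic DEM into "smooth" vs "rough" surface units;
    # the 4.0 m threshold is user-controlled, as in the toolbox description
    dem = np.random.default_rng(0).normal(500.0, 5.0, size=(50, 50))
    rough = focal_roughness(dem, kernel=5)
    units = np.where(rough > 4.0, 1, 0)
    ```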

  15. Ice volumes in the Himalayas and Karakoram: evaluating different assessment methods

    NASA Astrophysics Data System (ADS)

    Frey, Holger; Machguth, Horst; Huggel, Christian; Bajracharya, Samjwal; Bolch, Tobias; Kulkarni, Anil; Linsbauer, Andreas; Stoffel, Markus; Salzmann, Nadine

    2013-04-01

    Knowledge about the volumes and ice thickness distribution of Himalayan and Karakoram (HK) glaciers is required for assessing the future evolution and the sea-level rise potential of these ice bodies, as well as for predicting impacts on the hydrological cycle. As field measurements of glacier thicknesses are sparse and restricted to individual glaciers, ice thickness and volume assessments on a larger scale have to rely strongly on modeling approaches. Here, we estimate ice volumes of all glaciers in the HK region using three different approaches, compare the results, and examine related uncertainties and variability. The approaches used include volume-area relations with different scaling parameters, a slope-dependent thickness estimation, and a new approach to model the ice-thickness distribution based only on digital glacier outlines and a digital elevation model (DEM). By applying different combinations of model parameters and by altering glacier areas by ±5%, uncertainties related to the different methods are evaluated. Glacier outlines have been taken from the Randolph Glacier Inventory (RGI) and the International Centre for Integrated Mountain Development (ICIMOD), with minor changes and additions in some regions; topographic information has been obtained from the Shuttle Radar Topography Mission (SRTM) DEM for all methods. The volume-area scaling approach resulted in glacier volumes ranging from 3632 to 6455 km3, depending on the scaling parameters used. The slope-dependent thickness estimations generated a total ice volume of 3335 km3, and a total volume of 2955 km3 resulted from the modified ice-thickness distribution model. Results of the distributed ice thickness modeling are clearly at the lowermost bound of previous estimates, and possibly hint at an overestimation of the potential contribution from HK glaciers to sea-level rise. The range of results also indicates that volume estimations are subject to large uncertainties. Although they are
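
    The volume-area scaling referred to above follows the familiar power law V = c·A^γ. The sketch below shows how a glacier inventory would be converted to a total volume under that relation; the coefficient and exponent are placeholder values of the kind varied in the study, not the paper's actual parameter sets.

    ```python
    def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
        """Volume-area scaling V = c * A**gamma (V in km^3, A in km^2).

        c and gamma are placeholder values of the kind varied in the study,
        not the paper's actual parameter combinations.
        """
        return c * area_km2 ** gamma

    # Total volume for a hypothetical inventory of glacier areas (km^2)
    areas_km2 = [0.5, 2.3, 11.0, 47.8]
    total_km3 = sum(glacier_volume_km3(a) for a in areas_km2)
    print(total_km3)
    ```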

  16. Kinematic behaviour of a large earthflow defined by surface displacement monitoring, DEM differencing, and ERT imaging

    NASA Astrophysics Data System (ADS)

    Prokešová, Roberta; Kardoš, Miroslav; Tábořík, Petr; Medveďová, Alžbeta; Stacke, Václav; Chudý, František

    2014-11-01

    Large earthflow-type landslides are destructive mass movement phenomena with highly unpredictable behaviour. Knowledge of earthflow kinematics is essential for understanding the mechanisms that control its movements. The present paper characterises the kinematic behaviour of a large earthflow near the village of Ľubietová in Central Slovakia over a period of 35 years following its most recent reactivation in 1977. For this purpose, multi-temporal spatial data acquired by point-based in-situ monitoring and optical remote sensing methods have been used. Quantitative data analyses including strain modelling and DEM differencing techniques have enabled us to: (i) calculate the annual landslide movement rates; (ii) detect the trend of surface displacements; (iii) characterise spatial variability of movement rates; (iv) measure changes in the surface topography on a decadal scale; and (v) define areas with distinct kinematic behaviour. The results also integrate the qualitative characteristics of surface topography, in particular the distribution of surface structures as defined by a high-resolution DEM, and the landslide subsurface structure, as revealed by 2D resistivity imaging. Then, the ground surface kinematics of the landslide is evaluated with respect to the specific conditions encountered in the study area including slope morphology, landslide subsurface structure, and local geological and hydrometeorological conditions. Finally, the broader implications of the presented research are discussed with particular focus on the role that strain-related structures play in landslide kinematic behaviour.

  17. Evaluating Sleep Disturbance: A Review of Methods

    NASA Technical Reports Server (NTRS)

    Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, which uses a motion detector to provide an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus, or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.

  18. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    The rapid development of road construction imposes demands for fast, high-quality methods of earthwork evaluation. Dynamic methods are now adopted in numerous civil engineering sectors. Evaluation of earthwork quality, in particular, can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results can be obtained immediately after measurement, and they are thus suitable in cases where construction works have to be performed in a short period of time.

  19. DEM interpolation based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jiao, Limin; Liu, Yaolin

    2005-10-01

    This paper proposes a systematic scheme for Digital Elevation Model (DEM) interpolation based on Artificial Neural Networks (ANNs). We employ a BP network to fit the terrain surface and then detect and eliminate samples with gross errors. A Self-organizing Feature Map (SOFM) is used to cluster the elevation samples, dividing the study area into more homogeneous tiles. A BP model is then employed to interpolate the DEM within each cluster. Because error samples are eliminated and clusters are built, the interpolation result is improved. The case study indicates that the ANN interpolation scheme is feasible, and a comparison with polynomial and spline interpolation shows that the ANN achieves more accurate results. ANN interpolation does not require the interpolation function to be specified beforehand, so human influence is lessened; the interpolation is more automatic and intelligent. At the end of the paper, we propose the idea of constructing an ANN surface model, which can be used in multi-scale DEM visualization, DEM generalization, etc.
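
    As an illustration of the idea of fitting a terrain surface with a neural network, the sketch below trains a small feed-forward regressor on scattered elevation samples and predicts a regular grid. It uses scikit-learn's MLPRegressor as a stand-in for the paper's BP network, with synthetic sample points; the SOFM clustering and gross-error elimination steps are omitted.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Scattered elevation samples (x, y, z); synthetic here, survey points in practice
    xy = rng.uniform(0.0, 1000.0, size=(500, 2))
    z = 50.0 * np.sin(xy[:, 0] / 200.0) + 0.02 * xy[:, 1] + rng.normal(0.0, 0.5, 500)

    # Small feed-forward network standing in for the paper's BP model;
    # coordinates are rescaled to [0, 1] so that training converges reliably
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(xy / 1000.0, z)

    # Interpolate onto a regular grid to produce the DEM
    gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
    grid = np.column_stack([gx.ravel(), gy.ravel()]) / 1000.0
    dem = net.predict(grid).reshape(gx.shape)
    ```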

  20. Graphical methods for evaluating covering arrays

    DOE PAGES

    Kim, Youngil; Jang, Dae -Heung; Anderson-Cook, Christine M.

    2015-08-10

    Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, not that the appearances of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and to evaluate the degree of coverage when comparing and selecting between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with a few examples.
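
    One simple quantity underlying such graphics is, for every pair of columns, the fraction of possible level pairs that actually appear in the array. The sketch below computes that pairwise-coverage matrix for a small design; it is an assumed illustration of the general idea, not the authors' plotting code.

    ```python
    import itertools
    import numpy as np

    def pairwise_coverage(design):
        """For each pair of columns, the fraction of possible level pairs that appear.

        design is a (runs x factors) integer array; returns a factors x factors
        matrix with ones on the diagonal.
        """
        design = np.asarray(design)
        k = design.shape[1]
        cov = np.ones((k, k))
        for i, j in itertools.combinations(range(k), 2):
            seen = {tuple(row) for row in design[:, [i, j]]}
            possible = len(np.unique(design[:, i])) * len(np.unique(design[:, j]))
            cov[i, j] = cov[j, i] = len(seen) / possible
        return cov

    # A tiny 2-level design in which every pair of columns covers all four level
    # pairs, so the coverage matrix is all ones
    design = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
    print(pairwise_coverage(design))
    ```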

  1. [Agricultural products handling: methods of feasibility evaluation].

    PubMed

    Scott, G J; Herrera, J E

    1993-06-01

    Post-harvest problems are important constraints to the expansion of production of food in many Latin American countries. Besides problems of bulkiness, perishability and seasonal production patterns, the necessity of reducing transportation costs, increasing rural employment, and finding new markets for processed products, requires the development of processing technologies. Possible processed products include a vast range of alternatives. Given limited time and resources, it is not always feasible to carry out detailed studies. Hence a practical, low-cost methodology is needed to evaluate the available options. This paper presents a series of methods to evaluate different processing possibilities. It describes in detail each method including a rapid initial assessment, market and consumer research, farm-oriented research, costs and returns analysis and finally, some marketing and promotion strategies.

  2. 3D DEM analyses of the 1963 Vajont rock slide

    NASA Astrophysics Data System (ADS)

    Boon, Chia Weng; Houlsby, Guy; Utili, Stefano

    2013-04-01

    The 1963 Vajont rock slide has been modelled using the distinct element method (DEM). The open-source DEM code, YADE (Kozicki & Donzé, 2008), was used together with the contact detection algorithm proposed by Boon et al. (2012). The critical sliding friction angle at the slide surface was sought using a strength reduction approach. A shear-softening contact model was used to model the shear resistance of the clayey layer at the slide surface. The results suggest that the critical sliding friction angle can be conservative if stability analyses are calculated based on the peak friction angles. The water table was assumed to be horizontal and the pore pressure at the clay layer was assumed to be hydrostatic. The influence of reservoir filling was marginal, increasing the sliding friction angle by only 1.6˚. The results of the DEM calculations were found to be sensitive to the orientations of the bedding planes and cross-joints. Finally, the failure mechanism was investigated and arching was found to be present at the bend of the chair-shaped slope. References Boon C.W., Houlsby G.T., Utili S. (2012). A new algorithm for contact detection between convex polygonal and polyhedral particles in the discrete element method. Computers and Geotechnics, vol 44, 73-82, doi.org/10.1016/j.compgeo.2012.03.012. Kozicki, J., & Donzé, F. V. (2008). A new open-source software developed for numerical simulations using discrete modeling methods. Computer Methods in Applied Mechanics and Engineering, 197(49-50), 4429-4443.

  3. [Methods for evaluation of penile erection hardness].

    PubMed

    Yuan, Yi-Ming; Zhou, Su; Zhang, Kai

    2010-07-01

    Penile erection hardness is one of the key factors for successful sexual intercourse, as well as an important index in the diagnosis and treatment of erectile dysfunction (ED). This article gives an overview on the component and impact factors of erection hardness, summarizes some commonly used evaluation methods, including those for objective indexes, such as Rigiscan, axial buckling test and color Doppler ultrasonography, and those for subjective indexes of ED patients, such as IIEF, the Erectile Function Domain of IIEF (IIEF-EF), and Erection Hardness Score (EHS), and discusses the characteristics of these methods.

  4. Field evaluation of a VOST sampling method

    SciTech Connect

    Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.

    1994-12-31

    The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent) that is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes to avoid analytical difficulties associated with the analysis of the sequential-bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as the source of test compounds. Statistical tests of the comparability of the methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.

  5. Electromagnetic Imaging Methods for Nondestructive Evaluation Applications

    PubMed Central

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, hardware and software development dedicated to imaging and image processing, and material sciences have greatly expanded the application fields, made systems design more sophisticated, and made the potential of electromagnetic NDE imaging seemingly unlimited. This review provides a comprehensive summary of research work on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693

  6. Electromagnetic imaging methods for nondestructive evaluation applications.

    PubMed

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, hardware and software development dedicated to imaging and image processing, and material sciences have greatly expanded the application fields, made systems design more sophisticated, and made the potential of electromagnetic NDE imaging seemingly unlimited. This review provides a comprehensive summary of research work on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions.

  7. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials. PMID:22416879

  8. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials.

  9. ALOS DEM quality assessment in a rugged topography, A Lebanese watershed as a case study

    NASA Astrophysics Data System (ADS)

    Abdallah, Chadi; El Hage, Mohamad; Termos, Samah; Abboud, Mohammad

    2014-05-01

    Deriving the morphometric descriptors of the Earth's surface from satellite images is a continuing application in remote sensing, which has been distinctly advanced by the increasing availability of DEMs at different scales, specifically those derived from high to very high-resolution stereoscopic and triscopic image data. The extraction of the morphometric descriptors is affected by the errors of the DEM. This study presents a procedure for assessing the quality of an ALOS DEM in terms of position and morphometric indices. It involves evaluating the impact of the production parameters on the altimetric accuracy by checking height differences between Ground Control Points (GCPs) and the corresponding DEM points, on the planimetric accuracy by comparing extracted drainage lines with topographic maps, and on the morphometric indices by comparing profiles extracted from the DEM with those measured in the field. A set of twenty triplet-stereo images from the PRISM instrument on the ALOS satellite has been processed to acquire a 5 m DEM covering the whole Lebanese territory. The Lebanese topography is characterized by its ruggedness, with two parallel mountain chains embedding a depression (the Bekaa Valley). The DEM was extracted via PCI Geomatica 2013. Each of the images required 15 GCPs and around 50 tie points. Field measurements were carried out using differential GPS (Trimble GeoXH6000, ProXRT receiver and the LaserACE 1000 Rangefinder) on the Al Awali watershed (482 km2, about 5% of the Lebanese terrain). 3545 GPS points were collected at all ranges of elevation representing the diversity of the Lebanese terrain, ranging from cliffy, to steep and gently undulating terrain, along with narrow and wide flood plains, and including predetermined profiles. Moreover, definite points such as road intersections and river beds were also measured in order to assess the streams extracted from the DEM. ArcGIS 10.1 was also utilized to extract the drainage network. Preliminary results

  10. Cryptosporidiosis: multiattribute evaluation of six diagnostic methods.

    PubMed

    MacPherson, D W; McQueen, R

    1993-02-01

    Six diagnostic methods (Giemsa staining, Ziehl-Neelsen staining, auramine-rhodamine staining, Sheather's sugar flotation, an indirect immunofluorescence procedure, and a modified concentration-sugar flotation method) for the detection of Cryptosporidium spp. in stool specimens were compared on the following attributes: diagnostic yield, cost to perform each test, ease of handling, and ability to process large numbers of specimens for screening purposes by batching. A rank ordering from least desirable to most desirable was then established for each method by using the study attributes. The process of decision analysis with respect to the laboratory diagnosis of cryptosporidiosis is discussed through the application of multiattribute utility theory to the rank ordering of the study criteria. Within a specific health care setting, a diagnostic facility will be able to calculate its own utility scores for our study attributes. Multiattribute evaluation and analysis are potentially powerful tools in the allocation of resources in the laboratory.
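
    As a sketch of the multiattribute ranking idea, the snippet below combines per-attribute ranks into a single weighted utility score for each method. The ranks and weights are hypothetical; in practice each laboratory would supply its own, as the abstract notes.

    ```python
    def utility_scores(ranks, weights):
        """Weighted-sum multiattribute utility from per-attribute rankings.

        ranks[method][attribute] is a rank scaled so that higher is more desirable;
        weights[attribute] reflects a laboratory's own priorities. Both dictionaries
        here are hypothetical, not the study's numbers.
        """
        return {method: sum(weights[a] * r for a, r in attrs.items())
                for method, attrs in ranks.items()}

    ranks = {
        "Ziehl-Neelsen staining": {"yield": 3, "cost": 5, "handling": 4, "batching": 3},
        "immunofluorescence":     {"yield": 6, "cost": 1, "handling": 3, "batching": 5},
    }
    weights = {"yield": 0.4, "cost": 0.3, "handling": 0.2, "batching": 0.1}
    print(utility_scores(ranks, weights))
    ```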

  11. Evaluation of Alternate Surface Passivation Methods (U)

    SciTech Connect

    Clark, E

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of initially 50% H₂ / 50% D₂ gas with time using mass spectroscopy. Commercial passivation by electropolishing appears to result in surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service, and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may not result in surfaces that do not catalyze the isotope exchange reaction H₂ + D₂ → 2HD. These methods should not be considered to replace the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  12. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.

  13. CFD-DEM simulations of current-induced dune formation and morphological evolution

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-06-01

    Understanding the fundamental mechanisms of sediment transport, particularly those during the formation and evolution of bedforms, is of critical scientific importance and has engineering relevance. Traditional approaches to sediment transport simulation rely heavily on empirical models, which are not able to capture the physics-rich, regime-dependent behaviors of the process. With the increase of available computational resources in the past decade, CFD-DEM (computational fluid dynamics-discrete element method) has emerged as a viable high-fidelity method for the study of sediment transport. However, a comprehensive, quantitative study of the generation and migration of different sediment bed patterns using CFD-DEM is still lacking. In this work, current-induced sediment transport problems in a wide range of regimes are simulated, including 'flat bed in motion', 'small dune', 'vortex dune' and suspended transport. Simulations are performed using SediFoam, an open-source, massively parallel CFD-DEM solver developed by the authors. This is a general-purpose solver for particle-laden flows tailored to particle transport problems. Validation tests are performed to demonstrate the capability of CFD-DEM in the full range of sediment transport regimes. Comparison of simulation results with experimental and numerical benchmark data demonstrates the merits of the CFD-DEM approach. In addition, the improvements of the present simulations over existing studies using CFD-DEM are presented. The present solver gives a more accurate prediction of the sediment transport rate by properly accounting for the influence of particle volume fraction on the fluid flow. In summary, this work demonstrates that CFD-DEM is a promising particle-resolving approach for probing the physics of current-induced sediment transport.
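
    One common way CFD-DEM solvers account for the local particle volume fraction is by applying a voidage correction to a single-particle drag law. The sketch below uses a Di Felice-style correction with the Dalla Valle drag coefficient as a generic example; it is not necessarily the exact closure implemented in SediFoam.

    ```python
    import numpy as np

    def drag_force(d, rho_f, mu, eps, u_rel):
        """Single-particle drag with a Di Felice-style voidage correction.

        d: particle diameter [m]; rho_f: fluid density [kg/m^3]; mu: dynamic
        viscosity [Pa s]; eps: local fluid volume fraction (voidage); u_rel:
        fluid-minus-particle velocity [m/s]. A generic closure for illustration;
        an actual CFD-DEM solver may use different coefficients.
        """
        re = rho_f * eps * abs(u_rel) * d / mu                  # particle Reynolds number
        cd = (0.63 + 4.8 / np.sqrt(re)) ** 2                    # Dalla Valle drag coefficient
        chi = 3.7 - 0.65 * np.exp(-0.5 * (1.5 - np.log10(re)) ** 2)
        f0 = 0.5 * cd * rho_f * (np.pi * d ** 2 / 4.0) * abs(u_rel) * u_rel
        return f0 * eps ** (2.0 - chi)                          # voidage correction

    # Drag on a 1 mm particle in water at 60% voidage and 0.1 m/s slip velocity
    print(drag_force(d=1.0e-3, rho_f=1000.0, mu=1.0e-3, eps=0.6, u_rel=0.1))
    ```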

  14. Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods, developed for fixed-wing unprotected surface icing applications, might apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, no single scaling method has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method with the scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 µm, and with stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 0° and 5°. It was shown that good ice shape scaling was achieved for NACA 0012 airfoils with angles of attack up to 5°.
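
    Keeping the Weber number constant fixes the scale test velocity once the reference conditions and the scale length are chosen. The sketch below assumes the Weber number is written as We = ρ_w V² L / σ_w with water properties and the model chord as the characteristic length; it illustrates only the scaling relation, not the full modified Ruff procedure.

    ```python
    import math

    def scale_velocity(v_ref, length_ref, length_scale,
                       rho_w=1000.0, sigma_w=0.0756):
        """Scale velocity keeping We = rho_w * V**2 * L / sigma_w constant.

        The characteristic length L (model chord here, droplet MVD in other
        formulations) and the water properties are assumptions of this sketch,
        not the exact formulation of the modified Ruff method.
        """
        we_ref = rho_w * v_ref ** 2 * length_ref / sigma_w
        return math.sqrt(we_ref * sigma_w / (rho_w * length_scale))

    # Reference test at 39 m/s on a 0.914 m chord, scaled to a 0.356 m chord model
    print(scale_velocity(39.0, 0.914, 0.356))   # about 62 m/s
    ```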

  15. Geomorphic change detection using historic maps and DEM differencing: The temporal dimension of geospatial analysis

    NASA Astrophysics Data System (ADS)

    James, L. Allan; Hodgson, Michael E.; Ghoshal, Subhajit; Latiolais, Mary Megison

    2012-01-01

    The ability to develop spatially distributed models of topographic change is presenting new capabilities in geomorphic research. High resolution maps of elevation change indicate locations, processes, and rates of geomorphic change, and provide a means of calibrating temporal simulation models. Methods of geomorphic change detection (GCD), based on gridded models, may be applied to a wide range of time periods by utilizing cartometric, remote sensing, or ground-based topographic survey data to measure volumetric change. Advantages and limitations of historical DEM reconstruction methods are reviewed with a focus on coupling them with subsequent DEMs to construct DEMs of difference (DoD), which can be created by subtracting one elevation model from another, to map erosion, deposition, and volumetric change. The period of DoD analysis can be extended to several decades if accurate historical DEMs can be generated by extracting topographic data from historical data and selecting areas where geomorphic change has been substantial. The challenge is to recognize and minimize uncertainties in data that are particularly elusive with early topographic data. This paper reviews potential sources of error in digitized topographic maps and DEMs. Although the paper is primarily a review of methods, three brief examples are presented at the end to demonstrate GCD using DoDs constructed from data extending over periods ranging from 70 to 90 years.
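
    A DoD is simply cell-by-cell subtraction of co-registered DEMs, with changes smaller than a minimum level of detection treated as noise. The sketch below assumes independent, normally distributed cell errors, the simplest propagation case; the threshold multiplier and error values are illustrative.

    ```python
    import numpy as np

    def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, k=1.96):
        """DEM of Difference with a minimum level of detection.

        Cell changes smaller than k * sqrt(sigma_new**2 + sigma_old**2) are masked
        as indistinguishable from error; independent, normally distributed cell
        errors are assumed. Returns the thresholded difference in elevation units.
        """
        dod = dem_new - dem_old
        lod = k * np.sqrt(sigma_new ** 2 + sigma_old ** 2)
        return np.where(np.abs(dod) >= lod, dod, np.nan)

    # Net volumetric change (m^3) on a grid with 5 m cells; DEMs are synthetic here
    rng = np.random.default_rng(2)
    dem_old = rng.normal(100.0, 1.0, size=(200, 200))
    dem_new = dem_old + rng.normal(0.0, 0.5, size=(200, 200))
    dod = dem_of_difference(dem_new, dem_old, sigma_new=0.3, sigma_old=0.8)
    net_volume = np.nansum(dod) * 5.0 * 5.0
    ```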

  16. Spaceborne radar interferometry for coastal DEM construction

    USGS Publications Warehouse

    Hong, S.-H.; Lee, C.-W.; Won, J.-S.; Kwoun, Oh-Ig; Lu, Zhiming

    2005-01-01

    Topographic features in coastal regions, including tidal flats, change more significantly than those on land and are characterized by extremely low slopes. High-precision DEMs are required to monitor dynamic changes in coastal topography. It is difficult to obtain coherent interferometric SAR pairs over tidal flats, mainly because of variations in tidal conditions. Here we focus on i) the coherence of multi-pass ERS SAR interferometric pairs and ii) DEM construction from ERS-ENVISAT pairs. The coherence of multi-pass ERS interferograms was good enough to construct a DEM under favorable tidal conditions. Coherence in sand-dominant areas was generally higher than over muddy surfaces, so coarse-grained coastal areas are favorable for multi-pass interferometry. Utilization of ERS-ENVISAT interferometric pairs is attracting growing interest. We carried out an investigation using a cross-interferometric pair with a normal baseline of about 1.3 km, a 30-minute temporal separation and a height sensitivity of about 6 meters. Preliminary results of ERS-ENVISAT interferometry were not successful due to the baseline and unfavorable scattering conditions. © 2005 IEEE.

  17. Evaluation of Ponseti method in neglected clubfoot

    PubMed Central

    Sinha, Abhinav; Mehtani, Anil; Sud, Alok; Vijay, Vipul; Kumar, Nishikant; Prakash, Jatin

    2016-01-01

    Background: Gentle passive manipulation and casting by the Ponseti method have become the preferred treatment of clubfoot presenting at an early age. However, very few studies are available in the literature on the use of the Ponseti method in older children. We conducted this study to find the efficacy of the Ponseti method in treating neglected clubfoot, which is a major disabler of children in developing countries. Materials and Methods: 41 clubfeet in 30 patients, presenting after walking age, were evaluated to determine whether the Ponseti method is effective in treating neglected clubfoot. This is a prospective study. Pirani and Dimeglio scoring were done for all the feet before each casting to monitor the correction of the deformity. Quantitative variables were expressed as mean ± standard deviation and compared between preoperative and postoperative followup using the paired t-test. The relation between the Pirani and Dimeglio scores, and the age at presentation, with the number of casts required was evaluated using Pearson's correlation coefficient. No improvement in the Dimeglio or Pirani score after 3 successive casts was regarded as failure of conservative management in our study. Results: The mean age at presentation was 3.02 years (range 1.1 - 10.3 years). The mean followup was 2.6 years (range 2–3.9 years). The mean number of casts applied to achieve final correction was 12.8 (range 8 - 18 casts). The mean time of immobilization in cast was 3.6 months. The mean Dimeglio score was 15.9 before treatment and 2.07 after treatment. The mean Pirani score was 5.41 before treatment and 0.12 after treatment. All feet (100%) achieved a painless plantigrade position without any extensive soft tissue surgery. 7 feet (17%) recurred over our average followup of 2.6 years. Conclusions: Painless, supple, plantigrade, and cosmetically acceptable feet were achieved in neglected clubfeet without any extensive surgery. A fair trial of conservative Ponseti method should

  18. Economic methods for multipollutant analysis and evaluation

    SciTech Connect

    Baasel, W.D.

    1985-01-01

    Since 1572, when miners' lung problems were first linked to dust, man's industrial activity has been increasingly accused of causing disease in man and harm to the environment. Since that time each compound or stream thought to be damaging has been looked at independently. If a gas stream caused the problem, the offending compounds were reduced to an acceptable level and the problem was considered solved. What happened to substances after they were removed usually was not fully considered until the finding of an adverse effect required it. Until 1970, one usual way of getting rid of many toxic wastes was to place them in landfills and forget about them. The discovery of sickness caused by substances escaping from the Love Canal landfill has caused a total rethinking of that procedure. This and other incidents clearly showed that taking a substance out of one stream which is discharged to the environment and placing it in another may not be an adequate solution. What must be done is to look at all streams leaving an industrial plant and devise a way to reduce the potentially harmful emissions in those streams to an acceptable level, using methods that are inexpensive. To illustrate conceptually how the environmental assessment approach is a vast improvement over current methods, an example evaluating effluents from a coal-fired 500 MW power plant is presented. Initially only one substance in one stream is evaluated: the sulfur oxides leaving in the flue gas.

  19. Reanalysis of the DEMS nested case-control study of lung cancer and diesel exhaust: suitability for quantitative risk assessment.

    PubMed

    Crump, Kenny S; Van Landingham, Cynthia; Moolgavkar, Suresh H; McClellan, Roger

    2015-04-01

    The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to "carcinogenic to humans." The Diesel Exhaust in Miners Study (DEMS) cohort and nested case-control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC's determination. We conducted a reanalysis of the DEMS case-control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE.

  20. Volcanic geomorphology using TanDEM-X

    NASA Astrophysics Data System (ADS)

    Poland, Michael; Kubanek, Julia

    2016-04-01

    Topography is perhaps the most fundamental dataset for any volcano, yet is surprisingly difficult to collect, especially during the course of an eruption. For example, photogrammetry and lidar are time-intensive and often expensive, and they cannot be employed when the surface is obscured by clouds. Ground-based surveys can operate in poor weather but have poor spatial resolution and may expose personnel to hazardous conditions. Repeat passes of synthetic aperture radar (SAR) data provide excellent spatial resolution, but topography in areas of surface change (from vegetation swaying in the wind to physical changes in the landscape) between radar passes cannot be imaged. The German Space Agency's TanDEM-X satellite system, however, solves this issue by simultaneously acquiring SAR data of the surface using a pair of orbiting satellites, thereby removing temporal change as a complicating factor in SAR-based topographic mapping. TanDEM-X measurements have demonstrated exceptional value in mapping the topography of volcanic environments in as-yet limited applications. The data provide excellent resolution (down to ~3-m pixel size) and are useful for updating topographic data at volcanoes where surface change has occurred since the most recent topographic dataset was collected. Such data can be used for applications ranging from correcting radar interferograms for topography, to modeling flow pathways in support of hazards mitigation. The most valuable contributions, however, relate to calculating volume changes related to eruptive activity. For example, limited datasets have provided critical measurements of lava dome growth and collapse at volcanoes including Merapi (Indonesia), Colima (Mexico), and Soufriere Hills (Montserrat), and of basaltic lava flow emplacement at Tolbachik (Kamchatka), Etna (Italy), and Kīlauea (Hawai`i). With topographic data spanning an eruption, it is possible to calculate eruption rates - information that might not otherwise be available

  1. Optimizing grid computing configuration and scheduling for geospatial analysis: An example with interpolating DEM

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei

    2011-02-01

    Many geographic analyses are very time-consuming and do not scale well when large datasets are involved. For example, the interpolation of DEMs (digital elevation models) for large geographic areas can become a problem in practical applications, especially for web applications such as terrain visualization, where a fast response is required and computational demands exceed the capacity of a traditional single processing unit conducting serial processing. Therefore, high performance and parallel computing approaches, such as grid computing, were investigated to speed up geographic analysis algorithms such as DEM interpolation. The key for grid computing is to configure an optimized grid computing platform for the geospatial analysis and to optimally schedule the geospatial tasks within the platform; however, no previous research has focused on this. Using DEM interpolation as an example, we report our systematic research on configuring and scheduling a high performance grid computing platform to improve the performance of geographic analyses, through a study of how the number of cores, processors, grid nodes, different network connections and concurrent requests impact the speedup of geospatial analyses. Condor, a grid middleware, is used to schedule the DEM interpolation tasks for different grid configurations. A Kansas raster-based DEM is used for a case study, and an inverse distance weighting (IDW) algorithm is used in the interpolation experiments.
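
    For reference, IDW estimates each grid cell as a distance-weighted average of the sample elevations, z(t) = Σ w_i z_i / Σ w_i with w_i = 1/d_i^p. The sketch below is the serial kernel that each grid task would run on its assigned tile; the exponent p = 2 and the synthetic sample points are assumptions, not details from the paper.

    ```python
    import numpy as np

    def idw(xy_samples, z_samples, xy_targets, power=2.0, eps=1e-12):
        """Inverse distance weighting: z(t) = sum(w_i * z_i) / sum(w_i), w_i = 1/d_i**p.

        xy_samples: (n, 2) sample coordinates; z_samples: (n,) elevations;
        xy_targets: (m, 2) grid-cell centres. The exponent p = 2 is an assumption.
        """
        d = np.linalg.norm(xy_targets[:, None, :] - xy_samples[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)
        return (w @ z_samples) / w.sum(axis=1)

    # One tile of a grid job: interpolate a 10 x 10 block of cells from 50 samples
    rng = np.random.default_rng(4)
    samples = rng.uniform(0.0, 100.0, size=(50, 2))
    heights = 300.0 + 0.5 * samples[:, 0] - 0.2 * samples[:, 1]
    gx, gy = np.meshgrid(np.arange(0.0, 100.0, 10.0), np.arange(0.0, 100.0, 10.0))
    tile = idw(samples, heights, np.column_stack([gx.ravel(), gy.ravel()]))
    ```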

  2. A Near-Global Bare-Earth DEM from SRTM

    NASA Astrophysics Data System (ADS)

    Gallant, J. C.; Read, A. M.

    2016-06-01

    The near-global elevation product from NASA's Shuttle Radar Topographic Mission (SRTM) has been widely used since its release in 2005 at 3 arcsecond resolution and the release of the 1 arcsecond version in late 2014 means that the full potential of the SRTM DEM can now be realised. However the routine use of SRTM for analytical purposes such as catchment hydrology, flood inundation, habitat mapping and soil mapping is still seriously impeded by the presence of artefacts in the data, primarily the offsets due to tree cover and the random noise. This paper describes the algorithms being developed to remove those offsets, based on the methods developed to produce the Australian national elevation model from SRTM data. The offsets due to trees are estimated using the GlobeLand30 (National Geomatics Center of China) and Global Forest Change (University of Maryland) products derived from Landsat, along with the ALOS PALSAR radar image data (JAXA) and the global forest canopy height map (NASA). The offsets are estimated using several processes and combined to produce a single continuous tree offset layer that is subtracted from the SRTM data. The DEM products will be made freely available on completion of the first draft product, and the assessment of that product is expected to drive further improvements to the methods.
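
    The core of the correction is subtracting an estimated vegetation offset from the SRTM heights over forested cells. The sketch below scales a canopy-height layer by a forest-cover fraction and a radar penetration factor; this linear model and its parameters are schematic assumptions, not the production algorithm that combines the Landsat, PALSAR and canopy-height products.

    ```python
    import numpy as np

    def remove_tree_offset(srtm, canopy_height, forest_fraction, penetration=0.4):
        """Subtract an estimated vegetation offset from SRTM elevations.

        The offset is modelled as forest_fraction * (1 - penetration) * canopy_height,
        i.e. the radar phase centre is assumed to sit partway down the canopy.
        The penetration factor and the linear mixing are illustrative assumptions,
        not the production algorithm.
        """
        offset = forest_fraction * (1.0 - penetration) * canopy_height
        return srtm - offset

    srtm = np.array([[310.0, 312.0], [308.0, 330.0]])      # SRTM heights, m
    canopy = np.array([[0.0, 0.0], [0.0, 25.0]])           # canopy height map, m
    cover = np.array([[0.0, 0.0], [0.0, 0.9]])             # forested fraction per cell
    bare_earth = remove_tree_offset(srtm, canopy, cover)
    ```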

  3. A simple FEM-DEM technique for fracture prediction in materials and structures

    NASA Astrophysics Data System (ADS)

    Zárate, Francisco; Oñate, Eugenio

    2015-09-01

    This paper presents a new computational technique for predicting the onset and evolution of fracture in a continuum in a simple manner combining the finite element method (FEM) and the discrete element method (DEM). Onset of cracking at a point is governed by a simple damage model. Once a crack is detected at an element side in the FE mesh, discrete elements are generated at the nodes sharing the side and a simple DEM mechanism is considered to follow the evolution of the crack. The combination of the DEM with simple 3-noded linear triangular elements correctly captures the onset of fracture and its evolution, as shown in several examples of application in two and three dimensions.

  4. Monitoring lava dome changes by means of differential DEMs from TanDEM-X interferometry: Examples from Merapi, Indonesia and Volcán de Colima, Mexico

    NASA Astrophysics Data System (ADS)

    Kubanek, J.; Westerhaus, M.; Heck, B.

    2013-12-01

    derived by TanDEM-X interferometry taken before and after the eruption. Our results reveal that the eruption led to a topographic change of up to 200 m in the summit area of Merapi. We further show the ability of the TanDEM-X data to observe much smaller topographic changes using Volcán de Colima as a second test site. An explosion at the crater rim signaled the end of magma ascent in June 2011. The bistatic TanDEM-X data provide important information on this explosion, as we can observe topographic changes of up to 20 m and less in the summit area when comparing datasets taken before and after the event. We further analyzed datasets from the beginning of 2013, when Colima became active again after a dormant period. Our results indicate that repeated DEMs with great detail and good accuracy are obtainable, enabling a quantitative estimation of volume changes in the summit area of the volcano. As TanDEM-X is an innovative mission, the present study serves as a test of employing data from a new satellite mission in volcano research. An error analysis of the DEMs to evaluate the volume quantifications was therefore also conducted.

  5. A Method for Missile Autopilot Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation is that the performance of the autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. HWIL simulation, however, requires very expensive facilities, in which the target model generator is an indispensable subsystem. In this paper, one example of an HWIL simulation facility with a target model generator for RF seeker systems is introduced first. This generator, however, has a functional limitation on the line-of-sight angle, as do most other generators; a test method to overcome the line-of-sight angle limitation is therefore proposed.

  6. Shape and Albedo from Shading (SAfS) for Pixel-Level DEM Generation from Monocular Images Constrained by Low-Resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry; photogrammetric methods require multiple stereo images of an area, and DEMs generated from these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading) problem, and it is under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this is even more valid for the lunar surface due to its lower albedo complexity compared to the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with known light source; at the same time we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO

  7. Co-seismic landslide topographic analysis based on multi-temporal DEM-A case study of the Wenchuan earthquake.

    PubMed

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2013-01-01

    Hillslope instability has been thought to be one of the most important factors for landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data together with shake intensity analysis to evaluate the topographic characteristics of the landslide areas. There are several geomorphologic analysis methods, such as roughness and slope aspect, which are as useful as slope analysis. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness >1.2, hillslope >30°, and slope aspect between 90° and 270°. However, the intersection of the regions from the above three methods is more accurate than that derived by applying a single topographic analysis method. The ground motion data indicate that the co-seismic landslides mainly occurred on the hanging wall side of the Longmen Shan Thrust Belt, within the up-down and horizontal peak ground acceleration (PGA) contours of 150 and 200 gal, respectively. The comparisons of pre- and post-earthquake DEM data indicate that the medium roughness and slope increased, while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspects, however, did not change. Our results indicate that co-seismic landslides mainly occurred in specific regions of high roughness on steep, south-facing slopes under strong ground motion. Co-seismic landslides significantly modified the local topography, especially the hillslope and roughness. The roughest relief and steepest slopes were significantly smoothed, whereas the medium relief and slopes became rougher and steeper, respectively.
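
    Combining the three criteria amounts to a per-cell logical intersection of threshold masks on co-registered rasters. The sketch below assumes slope in degrees and aspect measured clockwise from north, matching the thresholds quoted above; the synthetic inputs are only for illustration.

    ```python
    import numpy as np

    def susceptible_mask(roughness, slope_deg, aspect_deg):
        """Cells meeting all three topographic criteria quoted in the abstract:
        roughness > 1.2, slope > 30 degrees, and aspect between 90 and 270 degrees
        (broadly south-facing). Inputs are co-registered 2-D arrays.
        """
        return ((roughness > 1.2) & (slope_deg > 30.0) &
                (aspect_deg >= 90.0) & (aspect_deg <= 270.0))

    # Fraction of a (synthetic) study area flagged by the intersection of the criteria
    rng = np.random.default_rng(5)
    rough = rng.uniform(1.0, 1.5, (100, 100))
    slope = rng.uniform(0.0, 60.0, (100, 100))
    aspect = rng.uniform(0.0, 360.0, (100, 100))
    print(susceptible_mask(rough, slope, aspect).mean())
    ```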

  8. Co-seismic landslide topographic analysis based on multi-temporal DEM-A case study of the Wenchuan earthquake.

    PubMed

    Ren, Zhikun; Zhang, Zhuqi; Dai, Fuchu; Yin, Jinhui; Zhang, Huiping

    2013-01-01

    Hillslope instability has been thought to be one of the most important factors for landslide susceptibility. In this study, we apply geomorphic analysis using multi-temporal DEM data together with shake intensity analysis to evaluate the topographic characteristics of the landslide areas. There are several geomorphologic analysis methods, such as roughness and slope aspect, which are as useful as slope analysis. The analyses indicate that most of the co-seismic landslides occurred in regions with roughness >1.2, hillslope >30°, and slope aspect between 90° and 270°. However, the intersection of the regions from the above three methods is more accurate than that derived by applying a single topographic analysis method. The ground motion data indicate that the co-seismic landslides mainly occurred on the hanging wall side of the Longmen Shan Thrust Belt, within the up-down and horizontal peak ground acceleration (PGA) contours of 150 and 200 gal, respectively. The comparisons of pre- and post-earthquake DEM data indicate that the medium roughness and slope increased, while the roughest and steepest regions decreased after the Wenchuan earthquake; slope aspects, however, did not change. Our results indicate that co-seismic landslides mainly occurred in specific regions of high roughness on steep, south-facing slopes under strong ground motion. Co-seismic landslides significantly modified the local topography, especially the hillslope and roughness. The roughest relief and steepest slopes were significantly smoothed, whereas the medium relief and slopes became rougher and steeper, respectively. PMID:24171155

  9. DEM modeling of flexible structures against granular material avalanches

    NASA Astrophysics Data System (ADS)

    Lambert, Stéphane; Albaba, Adel; Nicot, François; Chareyre, Bruno

    2016-04-01

    This article presents the numerical modeling of flexible structures intended to contain avalanches of granular, coarse material (e.g., a rock slide or a debris slide). The numerical model is based on a discrete element method code (YADE). The DEM modeling of both the flowing granular material and the flexible structure is detailed before presenting some results. The flowing material consists of a dry polydisperse granular material accounting for the non-sphericity of real materials. The flexible structure consists of a metallic net hung on main cables that are connected to the ground via anchors on both sides of the channel and include dissipators. All these components were modeled as flexible beams or wires, with mechanical parameters defined from literature data. The simulation results are presented with the aim of investigating the variability of the structure's response with parameters related to the structure (inclination of the fence, presence of brakes, mesh opening size) and to the channel (inclination). The results are then compared with existing recommendations in similar fields.

  10. DEM, tide and velocity over Sulzberger Ice Shelf, West Antarctica

    USGS Publications Warehouse

    Baek, S.; Shum, C.K.; Lee, H.; Yi, Y.; Kwoun, Oh-Ig; Lu, Zhiming; Braun, Andreas

    2005-01-01

    Arctic and Antarctic ice sheets preserve more than 77% of the global fresh water and could raise global sea level by several meters if completely melted. Ocean tides near and under ice shelves shift the grounding line position significantly and are one of the current limitations in studying glacier dynamics and mass balance. The Sulzberger ice shelf is an area of ice mass flux change in West Antarctica and has not yet been well studied. In this study, we use repeat-pass synthetic aperture radar (SAR) interferometry data from the ERS-1 and ERS-2 tandem missions to generate a high-resolution (60-m) Digital Elevation Model (DEM), detect tidal deformation, and derive the ice stream velocity of the Sulzberger Ice Shelf. Other satellite data, such as laser altimeter measurements with fine footprints (70-m) from NASA's ICESat, are used for validation and analyses. The resulting DEM has an accuracy of −0.57 ± 5.88 m and is demonstrated to be useful for grounding line detection and ice mass balance studies. The deformation observed by InSAR is found to be primarily due to ocean tides and atmospheric pressure. The 2-D ice stream velocities computed agree qualitatively with results from previous methods on part of the ice shelf based on passive microwave remote-sensing data (i.e., LANDSAT). © 2005 IEEE.
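
    A minimal sketch (with hypothetical, made-up elevations) of the kind of DEM-minus-ICESat comparison that yields a bias and standard deviation such as the figure quoted above:

        import numpy as np

        # Hypothetical co-located elevations: InSAR DEM sampled at ICESat footprints (m).
        dem_elev = np.array([112.3, 98.7, 105.1, 120.4, 99.8])
        icesat_elev = np.array([113.0, 99.5, 104.6, 121.9, 100.3])

        residuals = dem_elev - icesat_elev
        print(f"bias = {residuals.mean():.2f} m, sigma = {residuals.std(ddof=1):.2f} m")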

  11. Multilevel summation method for electrostatic force evaluation.

    PubMed

    Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus

    2015-02-10

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support for periodic systems as does PME but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation. PMID:25691833

  12. Multilevel Summation Method for Electrostatic Force Evaluation

    PubMed Central

    2015-01-01

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support for periodic systems as does PME but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation. PMID:25691833

  13. Digital image envelope: method and evaluation

    NASA Astrophysics Data System (ADS)

    Huang, H. K.; Cao, Fei; Zhou, Michael Z.; Mogel, Greg T.; Liu, Brent J.; Zhou, Xiaoqiang

    2003-05-01

    Health data security, characterized in terms of data privacy, authenticity, and integrity, is a vital issue when digital images and other patient information are transmitted through public networks in telehealth applications such as teleradiology. Mandates for ensuring health data security have been extensively discussed (for example, the Health Insurance Portability and Accountability Act, HIPAA), and health informatics guidelines (such as the DICOM standard) are beginning to focus on data security as well; such guidance continues to be published by organizing bodies in healthcare. However, no systematic method has been developed to ensure data security in medical imaging. Because data privacy and authenticity are often managed primarily with firewall and password protection, we have focused our research and development on data integrity. We have developed a systematic method of ensuring medical image data integrity across public networks using the concept of the digital envelope. When a medical image is generated, regardless of the modality, three processes are performed: the image signature is obtained, the DICOM image header is encrypted, and a digital envelope is formed by combining the signature and the encrypted header. The envelope is encrypted and embedded in the original image, which assures the security of both the image and the patient ID. The embedded image is encrypted again and transmitted across the network, and the reverse process is performed at the receiving site. The result is two digital signatures: one from the original image before transmission and a second from the image after transmission. If the signatures are identical, there has been no alteration of the image. This paper concentrates on the method and evaluation of the digital image envelope.
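
    The integrity check at the receiving site amounts to comparing two digests of the image data. A minimal sketch, assuming a SHA-256 digest stands in for the image signature (the abstract does not specify the hash function):

        import hashlib

        def image_signature(pixel_bytes: bytes) -> str:
            """Digest of the raw pixel data, standing in for the image signature."""
            return hashlib.sha256(pixel_bytes).hexdigest()

        # Hypothetical pixel payloads before and after transmission.
        original_pixels = b"\x00\x01\x02" * 1000
        received_pixels = b"\x00\x01\x02" * 1000

        sig_before = image_signature(original_pixels)
        sig_after = image_signature(received_pixels)
        print("image unaltered" if sig_before == sig_after else "integrity violation")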

  14. Evaluation of methods to assess physical activity

    NASA Astrophysics Data System (ADS)

    Leenders, Nicole Y. J. M.

    Epidemiological evidence has accumulated demonstrating that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions must be shown to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D, TT), a uniaxial activity monitor (Computer Science and Applications Inc., CSA), a Yamax Digiwalker-500 step counter (YX), heart rate responses (HR method), and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the "criterion method" of DLW over a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, the HR method and the TT, respectively. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared to 47% and 67% for the TT and the YX step counter. Approximately 56% of the variance in DLW-PAEE per kg is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE per kg was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of the variance in DLW-PAEE per kg is explained by the number of steps taken per day, because of its low cost and ease of use the Yamax step counter is useful in studies promoting daily walking. Thus, studies involving the

  15. GPS-Based Precision Baseline Reconstruction for the TanDEM-X SAR-Formation

    NASA Technical Reports Server (NTRS)

    Montenbruck, O.; vanBarneveld, P. W. L.; Yoon, Y.; Visser, P. N. A. M.

    2007-01-01

    The TanDEM-X formation employs two separate spacecraft to collect interferometric Synthetic Aperture Radar (SAR) measurements over baselines of about 1 km. These will allow the generation of a global Digital Elevation Model (DEM) with a relative vertical accuracy of 2-4 m and a 10 m ground resolution. As part of the ground processing, the separation of the SAR antennas at the time of each data take must be reconstructed with a 1 mm accuracy using measurements from two geodetic-grade GPS receivers. The paper discusses the TanDEM-X mission as well as the methods employed for determining the interferometric baseline with utmost precision. Measurements collected during the close fly-by of the two GRACE satellites serve as a reference case to illustrate the processing concept, expected accuracy and quality control strategies.

  16. Application of multi-temporal DEM data in calculating the Earth's surface deformation

    NASA Astrophysics Data System (ADS)

    Lan, Qiuping; Fei, Lifan; Liu, Yining; Zhang, Kun

    2009-10-01

    This paper suggests a method of calculating the elevation and volume change of the terrain based on multi-temporal digital elevation model (DEM) data sets for the same area. Two methods for calculating the surface change are introduced: one is based on regular square grids (RSG); the other uses a triangulated irregular network (TIN) generalized from the original source data by the 3D Douglas-Peucker algorithm, so that not only is the accuracy of the 3D Douglas-Peucker generalization verified, but the range of DEM data formats usable for this purpose is also expanded. Finally, the formulae used by the two methods are introduced, and the experimental results calculated from the same original DEM data, acquired in 1971 and 2000 respectively from the area of Bayanbulak in Xinjiang, are compared. The experiments show that the results of the two methods are nearly identical, even under a high degree of DEM generalization for the second method. Therefore, the second method can greatly improve the efficiency of the calculation while ensuring its accuracy.
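
    For the regular-square-grid variant, the elevation and volume change reduce to a per-cell difference summed over the grid. A minimal sketch with hypothetical co-registered DEMs and an assumed cell size:

        import numpy as np

        cell_size = 30.0  # assumed grid spacing in metres

        # Hypothetical co-registered regular-grid DEMs of the same area (m).
        dem_t1 = np.array([[100.0, 101.0], [102.0, 103.0]])
        dem_t2 = np.array([[ 99.5, 101.2], [102.4, 102.1]])

        dz = dem_t2 - dem_t1                       # elevation change per cell
        volume_change = dz.sum() * cell_size**2    # net volume change in m^3
        print(f"net volume change: {volume_change:.1f} m^3")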

  17. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total...

  18. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total...

  19. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total...

  20. 10 CFR 963.16 - Postclosure suitability evaluation method.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Postclosure suitability evaluation method. 963.16 Section... Determination, Methods, and Criteria § 963.16 Postclosure suitability evaluation method. (a) DOE will evaluate postclosure suitability using the total system performance assessment method. DOE will conduct a total...

  1. Method for evaluating performance of clinical pharmacists.

    PubMed

    Schumock, G T; Leister, K A; Edwards, D; Wareham, P S; Burkhart, V D

    1990-01-01

    A performance-evaluation process that satisfies Joint Commission on Accreditation of Healthcare Organizations criteria and state policies is described. A three-part, criteria-based, weighted performance-evaluation tool specific for clinical pharmacists was designed for use in two institutions affiliated with the University of Washington. The three parts are self-appraisal and goal setting, peer evaluation, and supervisory evaluation. Objective criteria within each section were weighted to reflect the relative importance of that characteristic to the job that the clinical pharmacist performs. The performance score for each criterion is multiplied by the weighted value to produce an outcome score. The peer evaluation and self-appraisal/goal-setting parts of the evaluation are completed before the formal performance-evaluation interview. The supervisory evaluation is completed during the interview. For this evaluation, supervisors use both the standard university employee performance evaluation form and a set of specific criteria applicable to the clinical pharmacists in these institutions. The first performance evaluations done under this new system were conducted in May 1989. Pharmacists believed that the new system was more objective and allowed more interchange between the manager and the pharmacist. The peer-evaluation part of the system was seen as extremely constructive. This three-part, criteria-based system for evaluation of the job performance of clinical pharmacists could easily be adopted by other pharmacy departments.
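
    The scoring arithmetic described above (performance score times weight, summed over criteria) can be illustrated with hypothetical criteria and weights:

        # Hypothetical criteria: (relative weight, performance score on a 1-5 scale).
        criteria = {
            "clinical interventions": (0.40, 4),
            "documentation":          (0.25, 5),
            "teaching":               (0.20, 3),
            "goal attainment":        (0.15, 4),
        }

        # Outcome score per criterion = weight x performance score; the sum gives the overall score.
        total = sum(weight * score for weight, score in criteria.values())
        print(f"overall performance score: {total:.2f}")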

  2. Some Methods for Evaluating Program Implementation.

    ERIC Educational Resources Information Center

    Hardy, Roy A.

    An approach to evaluating program implementation is described. This approach includes the development of a project description which includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…

  3. Multidisciplinary eHealth Survey Evaluation Methods

    ERIC Educational Resources Information Center

    Karras, Bryant T.; Tufano, James T.

    2006-01-01

    This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…

  4. A quantitative method for silica flux evaluation

    NASA Astrophysics Data System (ADS)

    Schonewille, R. H.; O'Connell, G. J.; Toguri, J. M.

    1993-02-01

    In the smelting of copper and copper/nickel concentrates, the role of silica flux is to aid in the removal of iron by forming a slag phase. Alternatively, the role of flux may be regarded as a means of controlling the formation of magnetite, which can severely hinder the operation of a furnace. To adequately control the magnetite level, the flux must react rapidly with all of the FeO within the bath. In the present study, a rapid method for silica flux evaluation that can be used directly in the smelter has been developed. Samples of flux are mixed with iron sulfide and magnetite and then smelted at a temperature of 1250 °C. Argon was swept over the reaction mixture and analyzed continuously for sulfur dioxide. The sulfur dioxide concentration with time was found to contain two peaks, the first one being independent of the flux content of the sample. A flux quality parameter has been defined as the height-to-time ratio of the second peak. The value of this parameter for pure silica is 5100 ppm/min. The effects of silica content, silica particle size, and silicate mineralogy were investigated. It was found that a limiting flux quality is achieved for particle sizes less than 0.1 mm in diameter and that fluxes containing feldspar are generally of a poorer quality. The relative importance of free silica and melting point was also studied using synthetic flux mixtures, with free silica displaying the strongest effect.
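
    A minimal sketch (with a synthetic, made-up SO2 trace) of how the flux quality parameter, defined above as the height-to-time ratio of the second peak, could be extracted:

        import numpy as np
        from scipy.signal import find_peaks

        # Synthetic SO2 trace: concentration (ppm) sampled once per minute.
        t = np.arange(0.0, 60.0, 1.0)
        so2 = 3000.0 * np.exp(-0.5 * ((t - 10.0) / 3.0) ** 2) \
            + 2500.0 * np.exp(-0.5 * ((t - 35.0) / 5.0) ** 2)

        peaks, _ = find_peaks(so2, height=100.0)
        second = peaks[1]                       # second peak depends on the flux
        quality = so2[second] / t[second]       # height-to-time ratio (ppm/min)
        print(f"flux quality parameter: {quality:.0f} ppm/min")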

  5. The Vulcan Project: Methods, Results, and Evaluation

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Mendoza, D.; Miller, C.; Ojima, D.; Knox, S.; Corbin, K.; Denning, S.; Fischer, M.; de La Rue Du Can, S.

    2008-12-01

    The Vulcan Project has quantified fossil fuel CO2 for the United States at the sub-county spatial scale, hourly, for the year 2002. It approached quantification of fossil fuel CO2 from a novel perspective: leveraging the information already contained within the National Emissions Inventory used for the assessment of nationally regulated air pollution. By utilizing the inventory emissions of carbon monoxide and nitrogen oxides combined with emission factors specific to combustion device technology, we have calculated CO2 emissions for industrial point sources, power plants, mobile sources, and the residential and commercial sectors, using information on fuel used and source classification. In this presentation, we provide an overview of the Vulcan inventory methods and results, together with an evaluation of the Vulcan inventory by comparison to state-level inventories and other independent estimates. The inventory has recently been placed onto Google Earth and we will provide a preview of this capability. Finally, we will present the fossil fuel CO2 concentrations obtained by transporting the emissions with an atmospheric transport model and a comparison to in situ CO2 observations.

  6. An Investigation of Transgressive Deposits in Late Pleistocene Lake Bonneville using GPR and UAV-produced DEMs.

    NASA Astrophysics Data System (ADS)

    Schide, K.; Jewell, P. W.; Oviatt, C. G.; Jol, H. M.

    2015-12-01

    Lake Bonneville was the largest of the Pleistocene pluvial lakes that once filled the Great Basin of the interior western United States. Its two most prominent shorelines, Bonneville and Provo, are well documented but many of the lake's intermediate shoreline features have yet to be studied. These transgressive barriers and embankments mark short-term changes in the regional water budget and thus represent a proxy for local climate change. The internal and external structures of these features are analyzed using the following methods: ground penetrating radar, 5 meter auto-correlated DEMs, 1-meter DEMs generated from LiDAR, high-accuracy handheld GPS, and 3D imagery collected with an unmanned aerial vehicle. These methods in mapping, surveying, and imaging provide a quantitative analysis of regional sediment availability, transportation, and deposition as well as changes in wave and wind energy. These controls help define climate thresholds and rates of landscape evolution in the Great Basin during the Pleistocene that are then evaluated in the context of global climate change.

  7. Sediment Transport Simulations Coupling DEM with RANS Fluid Solver in Multi- dimensions

    NASA Astrophysics Data System (ADS)

    Calantoni, J.; Torres-Freyermuth, A.; Hsu, T.

    2008-12-01

    Multiphase simulations of the sediment-water interface in a wave bottom boundary layer are accomplished by using a Reynolds-averaged Navier-Stokes (RANS) fluid solver for the water motions coupled to the discrete element method (DEM) for modeling the motions of individual sediment grains. Turbulence closure in the ensemble-averaged fluid-phase equations uses balance equations for the fluid turbulent kinetic energy and its dissipation rate. Both 1DV and 2DV implementations of the RANS fluid solver have been coupled to the DEM. In both cases, the DEM is fully three-dimensional: sediment particles are spherical, and point contacts between particle pairs are assumed, with the normal and tangential forces at the contact point modeled by springs and friction, respectively. Coupling between the sediment and water phases ranges from simple one-way coupling, in which the fluid drives sediment motion with no feedback from the sediment, up to full coupling of the continuity equations, the turbulence closure, and the fluid momentum equations, in which Newton's Third Law is strictly enforced at every fluid time step. Fluid-particle interaction forces include drag, added mass, and pressure gradient forces, with turbulent suspension implemented through an eddy-particle interaction model based on a random walk. The 1DV DEM-RANS coupled model was used to simulate sheet flow transport conditions under oscillatory flows. The 2DV DEM-RANS coupled model was used to simulate suspension and transport over small-scale sand ripples. For all cases, the DEM used coarse to fine (0.4 mm - 0.2 mm diameter) sediments, with grain-grain interactions modeling viscous dissipation through an effective coefficient of restitution expressed as a function of the collisional Stokes number and estimated from published laboratory measurements of particle-particle and particle-wall collisions. Initial comparisons were made with laboratory U-tube measurements for bulk transport rates and time-dependent concentration profiles for sheet flow
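
    A minimal sketch of the kind of spring-and-friction point-contact law mentioned above (parameter values are arbitrary, and the actual model also ties the dissipation to the collisional Stokes number):

        import numpy as np

        def contact_force(overlap, rel_vel_n, tang_disp,
                          kn=1.0e5, cn=50.0, kt=5.0e4, mu=0.5):
            """Simplified DEM point contact: linear spring-dashpot in the normal
            direction and a Coulomb-limited linear spring in the tangential direction."""
            fn = max(kn * overlap - cn * rel_vel_n, 0.0)      # repulsive normal force (N)
            ft = np.clip(kt * tang_disp, -mu * fn, mu * fn)   # friction-limited tangential force
            return fn, ft

        print(contact_force(overlap=1.0e-4, rel_vel_n=0.01, tang_disp=2.0e-5))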

  8. Method for evaluation of laboratory craters using crater detection algorithm for digital topography data

    NASA Astrophysics Data System (ADS)

    Salamunićcar, Goran; Vinković, Dejan; Lončarić, Sven; Vučina, Damir; Pehnec, Igor; Vojković, Marin; Gomerčić, Mladen; Hercigonja, Tomislav

    In our previous work the following has been done: (1) a crater detection algorithm (CDA) based on digital elevation models (DEMs) was developed and the GT-115225 catalog was assembled [GRS, 48 (5), in press, doi:10.1109/TGRS.2009.2037750]; and (2) the results of a comparison between explosion-induced laboratory craters in stone powder surfaces and GT-115225 were presented using depth/diameter measurements [41st LPSC, Abstract #1428]. The next step achievable with the available technology is to create 3D scans of such laboratory craters, in order to compare different properties with those of simple Martian craters. In this work, we propose a formal method for the evaluation of laboratory craters, in order to provide an objective, measurable and reproducible estimate of the level of similarity achieved between these laboratory craters and real impact craters. In the first step, a section of MOLA data for Mars (or SELENE LALT for the Moon) is replaced with one or several 3D scans of laboratory craters. Once the embedding is done, the CDA can be used to find out whether the laboratory crater is similar enough to real craters to be recognized as a crater by the CDA. The CDA evaluation using the ROC' curve represents how the true detection rate (TDR = TP/(TP+FN) = TP/GT) depends on the false detection rate (FDR = FP/(TP+FP)). Using this curve, it is now possible to define the measure of similarity between laboratory and real impact craters as a TDR or FDR value, or as a distance from the bottom-right origin of the ROC' curve. With such an approach, a reproducible (formally described) method for the evaluation of laboratory craters is provided.
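
    The two detection-rate measures used above are simple ratios of the detection counts; a minimal sketch with hypothetical counts:

        def cda_scores(tp: int, fp: int, fn: int):
            """True and false detection rates used to evaluate a crater detection algorithm."""
            tdr = tp / (tp + fn)   # TDR = TP / (TP + FN) = TP / GT
            fdr = fp / (tp + fp)   # FDR = FP / (TP + FP)
            return tdr, fdr

        # Hypothetical counts after embedding a scanned laboratory crater in MOLA data.
        tdr, fdr = cda_scores(tp=180, fp=20, fn=45)
        print(f"TDR = {tdr:.2f}, FDR = {fdr:.2f}")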

  9. DEM time series of an agricultural watershed

    NASA Astrophysics Data System (ADS)

    Pineux, Nathalie; Lisein, Jonathan; Swerts, Gilles; Degré, Aurore

    2014-05-01

    In agricultural landscapes, the soil surface evolves notably due to erosion and deposition. Even if most field data come from plot-scale studies, the watershed scale seems more appropriate for understanding these phenomena. Currently, small unmanned aircraft systems and image processing techniques are improving, and 3D models can be built from multiple overlapping shots. Where techniques for large areas would be too expensive for a watershed-scale study and techniques for small areas would be too time-consuming, an unmanned aerial system seems to be a promising solution to quantify erosion and deposition patterns. The continuing technical improvements in this growing field allow us to obtain very good data quality and very high spatial resolution with high vertical (Z) accuracy. In the centre of Belgium, we equipped an agricultural watershed of 124 ha. For three years (2011-2013), we have been monitoring weather (including rainfall erosivity using a spectropluviograph), discharge at three different locations, sediment in runoff water, and watershed microtopography through unmanned airborne imagery (Gatewing X100). We also collected all available historical data to try to capture the "long-term" changes in watershed morphology during the last decades: old topographic maps, historical soil descriptions, etc. An erosion model (LANDSOIL) is also used to assess the evolution of the relief. Short-term evolution of the surface is now observed through flights flown at 200 m height. The pictures are taken with a side overlap of 80%. To precisely georeference the DEMs produced, ground control points are placed on the study site and surveyed using a Leica GPS1200 (accuracy of 1 cm for the x and y coordinates and 1.5 cm for the z coordinate). Flights are carried out each year in December to have a ground surface as bare as possible. Specific treatments are developed to counteract the effect of vegetation, because it is known as a key source of error in DEMs produced by small unmanned aircraft

  10. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  11. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  12. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key problematic aspects of implementing the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat-bottomed cylindrical container onto a plate. In this case study, particle properties such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both the final angle of repose and the FR, followed by the inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitate their selection or calibration. It is concluded that shortening the process of input parameter selection and calibration can help in the implementation of DEM.

  13. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children

    PubMed Central

    Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the “vertical scores” were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical

  14. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children.

    PubMed

    Xie, Yachun; Shi, Chunmei; Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the "vertical scores" were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time

  15. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children.

    PubMed

    Xie, Yachun; Shi, Chunmei; Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the "vertical scores" were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time

  16. Extraction of Hydrological Proximity Measures from DEMs using Parallel Processing

    SciTech Connect

    Tesfa, Teklu K.; Tarboton, David G.; Watson, Daniel W.; Schreuders, Kimberly A.; Baker, Matthew M.; Wallace, Robert M.

    2011-12-01

    Land surface topography is one of the most important terrain properties impacting the hydrological, geomorphological, and ecological processes active on a landscape. In our previous efforts to develop a soil depth model based upon topographic and land cover variables, we extracted a set of hydrological proximity measures (HPMs) from a Digital Elevation Model (DEM) as potential explanatory variables for soil depth. These HPMs may also have other, more general modeling applicability in hydrology, geomorphology and ecology, and so are described here from a general perspective. The HPMs we derived are variations of the distance up to ridge points (cells with no incoming flow) and variations of the distance down to stream points (cells with a contributing area greater than a threshold), following the flow path. These HPMs were computed using the D-infinity flow model, which apportions flow between adjacent neighbors based on the direction of steepest downward slope on the eight triangular facets constructed in a 3 x 3 grid cell window using the center cell and each pair of adjacent neighboring grid cells in turn. The D-infinity model typically results in multiple flow paths between two points on the topography, so distances may be computed as the minimum, maximum or average over the individual flow paths. In addition, each of the HPMs is calculated vertically, horizontally, and along the land surface. Previously, these HPMs were calculated using recursive serial algorithms which suffered from stack overflow problems when used to process large datasets, limiting the size of DEMs that could be analyzed with that method to approximately 7000 x 7000 cells. To overcome this limitation, we developed a message passing interface (MPI) parallel approach for calculating these HPMs. The parallel algorithms spatially partition the input grid into stripes which are each assigned to separate processes for computation. Each of those processes then uses a

  17. Evaluation of Different Soil Carbon Determination Methods

    SciTech Connect

    Chatterjee, Dr Amitava; Lal, Dr R; Wielopolski, Dr L; Martin, Madhavi Z; Ebinger, Dr Michael H

    2009-01-01

    Determining soil carbon (C) with high precision is an essential requisite for the success of the terrestrial C sequestration program. The informed choice of management practices for different terrestrial ecosystems rests upon accurately measuring the potential for C sequestration. Numerous methods are available for assessing soil C. Chemical analysis of field-collected samples using a dry combustion method is regarded as the standard method. However, conventional sampling of soil and its subsequent chemical analysis is expensive and time consuming. Furthermore, these methods are not sufficiently sensitive to identify small changes over time in response to alterations in management practices or changes in land use. Presently, several different in situ analytic methods are being developed that purportedly offer increased accuracy, precision and cost-effectiveness over traditional ex situ methods. We consider that, at this stage, a comparative discussion of the different soil C determination methods will improve the understanding needed to develop a standard protocol.

  18. Aquifer water abundance evaluation using a fuzzy- comprehensive weighting method

    NASA Astrophysics Data System (ADS)

    Wei, Z.

    2016-08-01

    Aquifer water abundance evaluation is a highly relevant issue that has been researched for many years. Despite prior research, problems with the conventional evaluation method remain. This paper establishes an aquifer water abundance evaluation method that combines fuzzy evaluation with a comprehensive weighting method to overcome both the subjectivity and the lack of conformity involved in determining weights by pure data analysis alone. First, the paper introduces the principle of the fuzzy-comprehensive weighting method. Second, the example of well field no. 3 (of a coalfield) is used to illustrate the method's process. The evaluation results show that this method can more suitably meet the real requirements of aquifer water abundance assessment, leading to more precise and accurate evaluations. Ultimately, this paper provides a new method for aquifer water abundance evaluation.
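
    The core of a fuzzy comprehensive evaluation is the product of a weight vector with a membership matrix; a minimal sketch with hypothetical factors, weights, and memberships (not the values used for well field no. 3):

        import numpy as np

        # Hypothetical comprehensive weights for four water-abundance factors.
        weights = np.array([0.35, 0.25, 0.25, 0.15])

        # Membership matrix R: each row maps one factor onto three abundance grades
        # (weak, medium, strong), with memberships summing to 1.
        R = np.array([
            [0.1, 0.3, 0.6],
            [0.2, 0.5, 0.3],
            [0.4, 0.4, 0.2],
            [0.3, 0.3, 0.4],
        ])

        B = weights @ R                          # fuzzy comprehensive evaluation vector
        grades = ["weak", "medium", "strong"]
        print(dict(zip(grades, B.round(3))), "->", grades[int(B.argmax())])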

  19. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and the upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock Mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably close to those recorded during the event (i.e., ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for modelling the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  20. A Ranking Method for Evaluating Constructed Responses

    ERIC Educational Resources Information Center

    Attali, Yigal

    2014-01-01

    This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rate) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…

  1. Optical evaluation methods in particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Farrell, P. V.

    Application of particle image velocimetry (PIV) techniques for the measurement of fluid velocities typically requires two steps. The first of these is the photography step, in which two exposures of a particle field, displaced between the exposures, are taken. The second step is the evaluation of the double-exposure particle pattern and the production of the corresponding particle velocities. Each of these steps involves optimization, which is usually specific to the experiment being conducted, and there is significant interaction between photographic parameters and evaluation characteristics. This paper focuses on the latter step, the evaluation of the double-exposure photograph. In several parts of a PIV system, some performance advantage may be obtained by increasing the use of optical processing over conventional digital image processing. Among the processes for which a performance advantage may be obtained are parallel or multiplex image interrogation and the evaluation of the Young's fringe pattern obtained from the light scattered by the double-exposure photograph. This paper discusses parallel image interrogation and compares the performance of optical and numerical Fourier transform analysis of Young's fringes using speckle images.
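
    As an illustration of the numerical route mentioned above, the displacement between two interrogation windows can be estimated from the peak of an FFT-based cross-correlation; a minimal sketch with synthetic windows:

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Estimate the mean particle displacement between two interrogation windows
            from the peak of their FFT-based cross-correlation."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(corr.argmax(), corr.shape)
            # Wrap indices beyond N/2 back to negative shifts.
            return [p - n if p > n // 2 else p for p, n in zip(peak, corr.shape)]

        # Synthetic windows: the second is the first shifted by (3, 5) pixels.
        rng = np.random.default_rng(0)
        win_a = rng.random((32, 32))
        win_b = np.roll(win_a, shift=(3, 5), axis=(0, 1))
        print(piv_displacement(win_a, win_b))   # expected: [3, 5]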

  2. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain the technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  3. Evaluation of temperament scoring methods for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to evaluate methods of temperament scoring. Crossbred (n=228) calves were evaluated for temperament by an individual evaluator at weaning by two methods of scoring: 1) pen score (1 to 5 scale, with higher scores indicating increasing degree of nervousness, aggressiven...

  4. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  5. Evaluation Method of Power Received Contract

    NASA Astrophysics Data System (ADS)

    Tani, Shigeyuki; Nakagawa, Tadasuke; Akatsu, Masaharu; Komoda, Norihisa

    The liberalization of the electric power market will continue to expand, and customers need a method for choosing among the available services. The common approach is to compare unit energy prices based on average demand; however, customers face risks from uncertainty in their future energy demand. Supply contracts allow contract changes in response to demand changes, and through such changes customers can avoid these risks. In this paper, we regard this freedom of contract change as having value and propose a new method for evaluating the economics of that value.
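
    A minimal sketch of the idea (all tariffs, demand scenarios, and contract sizes below are invented for illustration): the value of being able to change the contract is the expected cost under a fixed contract minus the expected cost when the contract can track demand.

        # Hypothetical tariff: annual cost from contracted demand (kW) and energy use (kWh).
        def annual_cost(contract_kw, energy_kwh, rate_per_kw=1500.0, unit_price=0.12):
            return contract_kw * rate_per_kw + energy_kwh * unit_price

        # Demand scenarios for next year: probability, energy use (kWh), best-fit contract (kW).
        scenarios = {
            "low":  (0.3,   800_000, 300),
            "base": (0.5, 1_000_000, 400),
            "high": (0.2, 1_300_000, 500),
        }

        fixed_contract = 400   # kW, contract kept unchanged in every scenario
        cost_fixed = sum(p * annual_cost(fixed_contract, e) for p, e, _ in scenarios.values())
        cost_flex = sum(p * annual_cost(c, e) for p, e, c in scenarios.values())
        print(f"expected value of contract-change flexibility: {cost_fixed - cost_flex:.0f}")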

  6. Lava emplacements at Shiveluch volcano (Kamchatka) from June 2011 to September 2014 observed by TanDEM-X SAR-Interferometry

    NASA Astrophysics Data System (ADS)

    Heck, Alexandra; Kubanek, Julia; Westerhaus, Malte; Gottschämmer, Ellen; Heck, Bernhard; Wenzel, Friedemann

    2016-04-01

    As part of the Ring of Fire, Shiveluch volcano is one of the largest and most active volcanoes on the Kamchatka Peninsula. During the Holocene, only the southern part of the Shiveluch massif was active. Since the last Plinian eruption in 1964, the activity of Shiveluch has been characterized by periods of dome growth and explosive eruptions. The recent active phase began in 1999 and continues until today. Due to the special conditions at active volcanoes, such as smoke development, the danger of explosions or lava flows, as well as poor weather conditions and inaccessible terrain, it is difficult to observe the interaction between dome growth, dome destruction, and explosive eruptions at regular intervals. Consequently, a reconstruction of the eruption processes is hardly possible, though important for a better understanding of the eruption mechanism as well as for hazard forecasting and risk assessment. A new approach is provided by the bistatic radar data acquired by the TanDEM-X satellite mission. This mission is composed of two nearly identical satellites, TerraSAR-X and TanDEM-X, flying in a close helix formation. On the one hand, the radar signals, with an average wavelength of about 3.1 cm, penetrate clouds and partially penetrate vegetation and snow. On the other hand, in comparison with conventional InSAR methods, the bistatic radar mode has the advantage that there are no difficulties due to temporal decorrelation. By interferometric evaluation of the simultaneously recorded SAR images, it is possible to calculate high-resolution digital elevation models (DEMs) of Shiveluch volcano and its surroundings. Furthermore, the short recurrence interval of 11 days allows the generation of time series of DEMs, from which volumetric changes of the dome and of lava flows, as well as lava effusion rates, can be determined. Here, this method is applied to Shiveluch volcano based on data acquired between June 2011 and September 2014. Although Shiveluch has a fissured topography with steep slopes
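
    The volumetric quantities mentioned above follow from differencing two co-registered TanDEM-X DEMs; a minimal sketch with made-up numbers (the DEM posting and elevations are assumptions, not mission values):

        import numpy as np

        pixel_area = 12.0 * 12.0      # assumed DEM posting of 12 m
        days_between = 11             # nominal TanDEM-X repeat interval (days)

        # Hypothetical co-registered DEMs of the dome area from two acquisitions (m).
        rng = np.random.default_rng(0)
        dem_before = np.full((200, 200), 2500.0)
        dem_after = dem_before + rng.uniform(0.0, 3.0, dem_before.shape)

        dh = dem_after - dem_before
        volume_gain = dh[dh > 0.0].sum() * pixel_area            # m^3 of new material
        effusion_rate = volume_gain / (days_between * 86400.0)   # mean rate in m^3/s
        print(f"volume gain {volume_gain:.2e} m^3, mean effusion rate {effusion_rate:.2f} m^3/s")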

  7. A simplified DEM-CFD approach for pebble bed reactor simulations

    SciTech Connect

    Li, Y.; Ji, W.

    2012-07-01

    In pebble bed reactors (PBRs), the pebble flow and the coolant flow are coupled with each other through coolant-pebble interactions. Approaches with different fidelities have been proposed to simulate such phenomena. Coupled Discrete Element Method-Computational Fluid Dynamics (DEM-CFD) approaches are widely studied and applied to these problems due to their good balance between efficiency and accuracy. In this work, based on the symmetry of the PBR geometry, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without significant loss of accuracy. The pebble flow is simulated by a full 3-D DEM, while the coolant flow field is calculated with a 2-D CFD simulation by averaging variables along the annular direction in the cylindrical geometry. Results show that this simplification can greatly enhance the efficiency for a cylindrical core, which enables the further inclusion of other physics, such as thermal and neutronic effects, in multi-physics simulations for PBRs. (authors)
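
    The 2-D CFD field in this simplification is obtained by averaging a cylindrical-grid quantity along the annular direction; a minimal sketch with a hypothetical porosity field:

        import numpy as np

        def annular_average(field_rtz):
            """Average a field on an (r, theta, z) cylindrical grid along the annular
            (theta) direction, giving the 2-D (r, z) field used by the CFD step."""
            return field_rtz.mean(axis=1)

        # Hypothetical pebble-bed porosity on a 20 (radial) x 36 (annular) x 50 (axial) grid.
        rng = np.random.default_rng(0)
        porosity = rng.uniform(0.35, 0.45, (20, 36, 50))
        porosity_rz = annular_average(porosity)
        print(porosity_rz.shape)   # (20, 50)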

  8. Extracting DEM from airborne X-band data based on PolInSAR

    NASA Astrophysics Data System (ADS)

    Hou, X. X.; Huang, G. M.; Zhao, Z.

    2015-06-01

    Polarimetric Interferometric Synthetic Aperture Radar (PolInSAR) is a new trend in SAR remote sensing technology that combines polarimetric multichannel information with interferometric information. It is of great significance for extracting DEMs in regions where DEM precision is low, such as vegetated areas and densely built-up areas. In this paper we describe our experiments with high-resolution X-band fully polarimetric SAR data acquired by a dual-baseline interferometric airborne SAR system over an area of Danling in southern China. The Pauli algorithm is used to generate the dual-polarimetric interferometric data, while Singular Value Decomposition (SVD), Numerical Radius (NR) and Phase Diversity (PD) methods are used to generate the fully polarimetric interferometric data. The polarimetric interferometric information is then used to extract the DEM through pre-filtering, image registration, image resampling, coherence optimization, multilook processing, flat-earth removal, interferogram filtering, phase unwrapping, parameter calibration, height derivation and geo-coding. The processing system, named SARPlore, has been developed in VC++ under the lead of the Chinese Academy of Surveying and Mapping. Finally, comparing the optimization results with single-polarimetric interferometry, it has been observed that the optimization methods can reduce the interferometric noise and the phase-unwrapping residuals and improve the precision of the DEM. The result of fully polarimetric interferometry is better than that of dual-polarimetric interferometry. Meanwhile, the degree of improvement achieved by fully polarimetric interferometry varies with the terrain.
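
    The Pauli decomposition mentioned above builds a scattering vector from the complex scattering-matrix elements; a minimal sketch with a made-up single-pixel scattering matrix (reciprocity assumed, S_hv = S_vh):

        import numpy as np

        def pauli_vector(s_hh, s_hv, s_vv):
            """Pauli scattering vector: odd-bounce, even-bounce and volume components."""
            return np.array([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)

        # Hypothetical single-pixel scattering-matrix elements.
        k = pauli_vector(0.6 + 0.1j, 0.05 - 0.02j, 0.4 - 0.3j)
        print(np.abs(k))   # magnitudes of the three Pauli components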

  9. The Study on Educational Technology Abilities Evaluation Method

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Traditional evaluation methods rely on tests, yet tests do not really measure what we want to measure, so test results alone cannot serve as a basis for evaluation, and the worth of such evaluations is naturally questionable. The system described here makes full use of educational technology and is grounded in educational and psychological theory. Taking primary and secondary school teachers as the evaluation objects and their educational technology abilities as the goal, it applies a variety of evaluation methods and tools from various angles to establish an informal evaluation system.

  10. Aspects of dem Generation from Uas Imagery

    NASA Astrophysics Data System (ADS)

    Greiwe, A.; Gehrke, R.; Spreckels, V.; Schlienkamp, A.

    2013-08-01

    For a few years now, micro UAS (unmanned aerial systems) with vertical take-off and landing capabilities, such as quadro- or octocopters, have been used as sensor platforms for aerial photogrammetry. Because of the restricted payload of micro UAS with a total weight of up to 5 kg (payload only up to 1.5 kg), these systems are often equipped with small-format cameras. These cameras can be classified as amateur cameras, and it is often the case that they do not meet the requirements of a geometrically stable camera for photogrammetric measurement purposes. However, once equipped with a suitable camera system, a UAS is an interesting alternative to expensive manned flights for small areas. The operating flight height of the UAS described above is about 50 to 150 meters above ground level. On the one hand, this low flight height leads to a very high spatial resolution of the aerial imagery. Depending on the camera's focal length and the sensor's pixel size, the ground sampling distance (GSD) is usually about 1 to 5 cm. This high resolution is useful especially for the automatic generation of homologous tie-points, which are a precondition for image alignment (bundle block adjustment). On the other hand, the image scale depends on the object's height and the UAS operating height. Objects like mine heaps or construction sites show large variations in object height, so operating the UAS at a constant flying height leads to large variations in the image scale, which causes problems for some processing approaches, e.g. automatic tie-point generation in stereo image pairs. As a precondition for all DEM generation approaches, a geometrically stable camera and sharp images are essential. Well-known calibration parameters are necessary for the bundle adjustment to control the exterior orientations. It can be shown that a simultaneous on-site camera calibration may lead to misaligned aerial images. Also, the success rate of an automatic tie-point generation
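
    The ground sampling distance quoted above follows directly from the camera geometry; a minimal sketch with assumed camera parameters:

        def ground_sampling_distance(pixel_size_m, focal_length_m, flying_height_m):
            """GSD of a nadir image: pixel size scaled by the height-to-focal-length ratio."""
            return pixel_size_m * flying_height_m / focal_length_m

        # Assumed small-format camera: 4.8 micron pixels, 16 mm lens, 100 m above ground.
        gsd = ground_sampling_distance(4.8e-6, 16.0e-3, 100.0)
        print(f"GSD = {gsd * 100:.1f} cm")   # about 3 cm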

  11. Evaluation of Electrochemical Methods for Electrolyte Characterization

    NASA Technical Reports Server (NTRS)

    Heidersbach, Robert H.

    2001-01-01

    This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.

  12. Test methods for evaluating reformulated fuels

    SciTech Connect

    Croudace, M.C.

    1994-12-31

    The US Environmental Protection Agency (EPA) introduced regulations in the 1989 Clean Air Act Amendment governing the reformulation of gasoline and diesel fuels to improve air quality. These statutes drove the need for a fast and accurate method for analyzing product composition, especially aromatic and oxygenate content. The current method, gas chromatography, is slow, expensive, non-portable, and requires a trained chemist to perform the analysis. The new mid-infrared spectroscopic method uses light to identify and quantify the different components in fuels. Each individual fuel component absorbs a specific wavelength of light depending on the molecule's unique chemical structure. The quantity of light absorbed is proportional to the concentration of that fuel component in the mixture. The mid-infrared instrument has significant advantages; it is easy to use, rugged, portable, fully automated and cost-effective. It can be used to measure multiple oxygenate or aromatic components in unknown fuel mixtures. Regulatory agencies have begun using this method in field compliance testing; petroleum refiners and marketers use it to monitor compliance, product quality and blending accuracy.

  13. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  14. The Evaluation of Flammability Properties Regarding Testing Methods

    NASA Astrophysics Data System (ADS)

    Osvaldová, Linda Makovická; Gašpercová, Stanislava

    2015-12-01

    In this paper, we compare historical methods with current methods for the assessment of flammability characteristics of materials, especially wood, wood components and wooden buildings. Nowadays, harmonization of standards in the European Union is bringing each member country towards a single concept for evaluating flammability properties, i.e. one standard level to be used when evaluating materials with regard to flammability. In this article we focus mainly on improving the evaluation methods for the flammability characteristics of materials used in the building industry. We present examples of different assessment methods and their associated test methods from the perspective of fire prevention, based on a comparison of materials tested by the older STN, BS and DIN fire test methods with the new methods of evaluating flammability properties according to EU standards, before and after the onset of flash-over.

  15. Evaluations of Three Methods for Remote Training

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Chmielewski, C.; Pandya, A.; Adolf, J.; Whitmore, M.; Berman, A.; Maida, J.

    1999-01-01

    Long-duration space missions require a change in training methods and technologies. For Shuttle missions, crew members could train for all the planned procedures, and carry documentation of planned procedures for a variety of contingencies. As International Space Station (ISS) missions of three months or longer are carried out, many more tasks will need to be performed for which little or no training was received prior to launch. Eventually, exploration missions will last several years, and communications with Earth will have long time delays or be impossible at times. This series of three studies was performed to identify the advantages and disadvantages of three types of training for self-instruction: video-conferencing; multimedia; and virtual reality. These studies each compared two types of training methods on two different types of tasks. In two of the studies, the subjects were in an isolated, confined environment analogous to space flight; the third study was performed in a laboratory.

  16. Evaluation of toothbrush disinfection via different methods.

    PubMed

    Basman, Adil; Peker, Ilkay; Akca, Gulcin; Alkurt, Meryem Toraman; Sarikir, Cigdem; Celik, Irem

    2016-01-01

    The aim of this study was to compare the efficacy of using a dishwasher or different chemical agents, including 0.12% chlorhexidine gluconate, 2% sodium hypochlorite (NaOCl), a mouthrinse containing essential oils and alcohol, and 50% white vinegar, for toothbrush disinfection. Sixty volunteers were divided into five experimental groups and one control group (n = 10). Participants brushed their teeth using toothbrushes with standard bristles, and they disinfected the toothbrushes according to instructed methods. Bacterial contamination of the toothbrushes was compared between the experimental groups and the control group. Data were analyzed by Kruskal-Wallis and Duncan's multiple range tests, with 95% confidence intervals for multiple comparisons. Bacterial contamination of toothbrushes from individuals in the experimental groups differed from those in the control group (p < 0.05). The most effective method for elimination of all tested bacterial species was 50% white vinegar, followed in order by 2% NaOCl, mouthrinse containing essential oils and alcohol, 0.12% chlorhexidine gluconate, dishwasher use, and tap water (control). The results of this study show that the most effective method for disinfecting toothbrushes was submersion in 50% white vinegar, which is cost-effective, easy to access, and appropriate for household use.

  17. Reanalysis of the DEMS Nested Case-Control Study of Lung Cancer and Diesel Exhaust: Suitability for Quantitative Risk Assessment

    PubMed Central

    Crump, Kenny S; Van Landingham, Cynthia; Moolgavkar, Suresh H; McClellan, Roger

    2015-01-01

    The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to “carcinogenic to humans.” The Diesel Exhaust in Miners Study (DEMS) cohort and nested case-control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC’s determination. We conducted a reanalysis of the DEMS case-control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE. PMID:25857246

  18. Organic ion exchange resin separation methods evaluation

    SciTech Connect

    Witwer, K.S.

    1998-05-27

    This document describes testing to find effective methods to separate Organic Ion Exchange Resin (OIER) from a sludge simulant. This task supports a comprehensive strategy for treatment and processing of K-Basin sludge. The simulant to be used resembles sludge that has accumulated in the 105KE and 105KW Basins in the 100K Area of the Hanford Site. The sludge is an accumulation of fuel element corrosion products, organic and inorganic ion exchange materials, canister gasket materials, iron and aluminum corrosion products, sand, dirt, and other minor amounts of organic matter.

  19. Evaluation criteria and test methods for electrochromic windows

    SciTech Connect

    Czanderna, A.W. ); Lampert, C.M. )

    1990-07-01

    This report summarizes the test methods used for evaluating electrochromic (EC) windows, summarizes what is known about degradation of their performance, and recommends methods and procedures for advancing EC windows for building applications. 77 refs., 13 figs., 6 tabs.

  20. FIELD VALIDATION OF SEDIMENT TOXICITY IDENTIFICATION AND EVALUATION METHODS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  1. Testing DEM-based atmospheric corrections to SAR interferograms over the Los Angeles basin

    NASA Astrophysics Data System (ADS)

    Jin, L.; Funning, G. J.; Floyd, M. A.

    2009-12-01

    Atmospheric water vapor delay is the major source of noise in SAR interferograms and is considered a prime disadvantage of high-precision InSAR technology. Without the atmospheric delay being corrected, it is hard to see slow surface movements of the ground, e.g. fault creep, and it is also difficult to validate the Permanent Scatterers InSAR method, which assumes that water vapor can be estimated and removed by considering time series of interferograms. Once the water vapor delay is estimated or measured, not only can we address these two problems, but we can also reduce the errors in geodetic measurements and improve the accuracy of Digital Elevation Models (DEMs) generated with ERS-1/2 tandem data. Several possible approaches to reducing water vapor delay use different data sets, including GPS, MODIS, and MERIS. This project is a method based on DEMs. It aims to find the relationship between topography and atmospheric water vapor delay in SAR interferograms so that the water vapor signals can be reduced in the interferograms. It is assumed that the atmospheric water vapor delay is linearly related to the topography over a certain distance; for example, low phase delay appears where the elevation is high, or low elevation leads to high phase delay. We tested 17 interferograms over the LA basin -- 5 from the ERS-1/2 tandem mission between 1995 and 1996, and 12 from EnviSAT between 2005 and 2007 with time spans from 35 days to 8 months. The basic idea was to divide each interferogram and DEM into a series of small windows, find the coefficients of the relationship between the phase and the corresponding elevation in each window, interpolate these coefficients across the interferogram area, and obtain the water vapor correction by multiplying the coefficients by the elevations. In this project, we tested three interpolation methods -- linear, spline, and cubic, but we found that there was little
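
    A minimal numpy sketch of the window-based correction described above is given below: a linear phase-versus-elevation coefficient is estimated by least squares in each window, the coefficients are interpolated across the scene, and the predicted topography-correlated delay is subtracted. The window size, the bilinear interpolation of the coefficients and the function name are assumptions for illustration; this is not the project's actual code.

```python
import numpy as np
from scipy.ndimage import zoom

def dem_based_atmospheric_correction(phase, dem, win=64):
    """Estimate a per-window linear phase/elevation slope and remove the
    topography-correlated delay from an unwrapped interferogram."""
    rows, cols = phase.shape
    nr, nc = rows // win, cols // win
    slope = np.zeros((nr, nc))
    for i in range(nr):
        for j in range(nc):
            p = phase[i * win:(i + 1) * win, j * win:(j + 1) * win].ravel()
            h = dem[i * win:(i + 1) * win, j * win:(j + 1) * win].ravel()
            good = np.isfinite(p) & np.isfinite(h)
            if good.sum() < 2 or np.ptp(h[good]) == 0:
                continue                       # empty or flat window: keep slope 0
            a, _b = np.polyfit(h[good], p[good], 1)   # fit phase = a*h + b
            slope[i, j] = a
    # interpolate the per-window coefficients back to full resolution (bilinear)
    slope_full = zoom(slope, (rows / nr, cols / nc), order=1)
    correction = slope_full * dem
    return phase - correction, correction
```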

  2. Generic method for aero-optic evaluations.

    PubMed

    Frumker, Eugene; Pade, Offer

    2004-06-01

    The effect of aerodynamic flow on the performance of an airborne optical system is becoming a critical issue in the development of electro-optic systems. A novel technique for aero-optic calculations that is based on commercially available software is presented. The optically relevant data from the computational fluid dynamics results are transformed into an index-of-refraction field and introduced as an input to the optical code. The data do not necessarily have to be presented in analytical form; instead it is introduced in a most general form as a discrete set of values located at a nonuniform grid of points. The modified quadratic Shepard method has been adopted for data interpolation, enabling a simple interface with virtually any software output. Several numerical simulations that demonstrate the technique are presented. PMID:15181800

  3. Noise impact evaluation method for supermarket sites

    NASA Astrophysics Data System (ADS)

    Tocci, Gregory; Su, Rose Mary

    2001-05-01

    A large food supermarket chain is currently in a large store-building program involving several project managers who identify prospective sites, retain A/E design services, and obtain local permits for construction. There is a variety of environmental and planning issues that sometimes need special consideration; among these is environmental noise. Supermarket management has asked that consultants representing each discipline, and who work with project managers on specific projects as needed, issue guidelines as to when special consideration of their disciplines is required. This presentation describes a simple method developed for the supermarket owner that allows their project managers to judge whether there is a need for special consideration of environmental noise in store design by an acoustical consulting firm.

  4. Household batteries: Evaluation of collection methods

    SciTech Connect

    Seeberger, D.A.

    1992-12-31

    While it is difficult to prove that a specific material is causing contamination in a landfill, tests have been conducted at waste-to-energy facilities that indicate that household batteries contribute significant amounts of heavy metals to both air emissions and ash residue. Hennepin County, MN, used a dual approach for developing and implementing a special household battery collection. Alternative collection methods were examined; test collections were conducted. The second phase examined operating and disposal policy issues. This report describes the results of the grant project, moving from a broad examination of the construction and content of batteries, to a description of the pilot collection programs, and ending with a discussion of variables affecting the cost and operation of a comprehensive battery collection program. Three out-of-state companies (PA, NY) were found that accept spent batteries; difficulties in reclaiming household batteries are discussed.

  7. Explosive materials equivalency, test methods and evaluation

    NASA Technical Reports Server (NTRS)

    Koger, D. M.; Mcintyre, F. L.

    1980-01-01

    Attention is given to concepts of explosive equivalency of energetic materials based on specific airblast parameters. A description is provided of a wide bandwidth high accuracy instrumentation system which has been used extensively in obtaining pressure time profiles of energetic materials. The object of the considered test method is to determine the maximum output from the detonation of explosive materials in terms of airblast overpressure and positive impulse. The measured pressure and impulse values are compared with known characteristics of hemispherical TNT data to determine the equivalency of the test material in relation to TNT. An investigation shows that meaningful comparisons between various explosives and a standard reference material such as TNT should be based upon the same parameters. The tests should be conducted under the same conditions.

  8. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the quite unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete, mathematically grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659

  10. Systems and methods for circuit lifetime evaluation

    NASA Technical Reports Server (NTRS)

    Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)

    2013-01-01

    Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes using a computer to determine the lifetime at which the worst case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error given the environment experienced by the system during its lifetime. In addition, performing WCA on a system with respect to a specific system lifetime includes identifying subcircuits within the system, performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime, when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime, when a subcircuit fails EVA performing at least one additional WCA process that provides a tighter bound on the WCA than EVA to determine whether the subcircuit fails WCA for the specified system lifetime, determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA, and determining that the system fails WCA when at least one subcircuit fails WCA.
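
    The control flow described above can be paraphrased in a short Python sketch. The analysis routines (extreme value analysis and the tighter worst-case analyses) are stand-ins to be supplied by the caller; only the iteration logic follows the description, and all names are hypothetical.

```python
def subcircuit_passes_wca(subcircuit, lifetime, eva, tighter_analyses):
    """EVA first; only if EVA fails, run progressively tighter WCA methods."""
    if eva(subcircuit, lifetime):
        return True                      # passing EVA is sufficient
    return any(analysis(subcircuit, lifetime) for analysis in tighter_analyses)

def system_passes_wca(subcircuits, lifetime, eva, tighter_analyses):
    """The system passes WCA for a lifetime only if every subcircuit passes."""
    return all(subcircuit_passes_wca(s, lifetime, eva, tighter_analyses)
               for s in subcircuits)

def estimate_lifetime(subcircuits, candidate_lifetimes, eva, tighter_analyses):
    """Return the longest candidate lifetime for which the system still passes."""
    passing = [t for t in sorted(candidate_lifetimes)
               if system_passes_wca(subcircuits, t, eva, tighter_analyses)]
    return passing[-1] if passing else None
```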

  11. High-resolution DEM Effects on Geophysical Flow Models

    NASA Astrophysics Data System (ADS)

    Williams, M. R.; Bursik, M. I.; Stefanescu, R. E. R.; Patra, A. K.

    2014-12-01

    Geophysical mass flow models are numerical models that approximate pyroclastic flow events and can be used to assess the volcanic hazards certain areas may face. One such model, TITAN2D, approximates granular-flow physics based on a depth-averaged analytical model using inputs of basal and internal friction, material volume at a coordinate point, and a GIS in the form of a digital elevation model (DEM). The volume of modeled material propagates over the DEM in a way that is governed by the slope and curvature of the DEM surface and the basal and internal friction angles. Results from TITAN2D are highly dependent upon the inputs to the model. Here we focus on a single input: the DEM, which can vary in resolution. High resolution DEMs are advantageous in that they contain more surface details than lower-resolution models, presumably allowing modeled flows to propagate in a way more true to the real surface. However, very high resolution DEMs can create undesirable artifacts in the slope and curvature that corrupt flow calculations. With high-resolution DEMs becoming more widely available and preferable for use, determining the point at which high resolution data is less advantageous compared to lower resolution data becomes important. We find that in cases of high resolution, integer-valued DEMs, very high-resolution is detrimental to good model outputs when moderate-to-low (<10-15°) slope angles are involved. At these slope angles, multiple adjacent DEM cell elevation values are equal due to the need for the DEM to approximate the low slope with a limited set of integer values for elevation. The first derivative of the elevation surface thus becomes zero. In these cases, flow propagation is inhibited by these spurious zero-slope conditions. Here we present evidence for this "terracing effect" from 1) a mathematically defined simulated elevation model, to demonstrate the terracing effects of integer valued data, and 2) a real-world DEM where terracing must be
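
    The terracing effect of integer-valued elevations on a gentle slope is easy to reproduce. The sketch below, an assumption-laden illustration rather than anything from TITAN2D, quantizes a smooth 5-degree plane to whole metres on a 1 m grid and counts the cells whose first derivative becomes exactly zero.

```python
import numpy as np

cell = 1.0                                       # assumed high-resolution cell size (m)
x, y = np.meshgrid(np.arange(0, 500, cell), np.arange(0, 500, cell))
slope_deg = 5.0                                  # gentle slope, below the 10-15 degree range
z_true = np.tan(np.radians(slope_deg)) * x       # smooth planar surface
z_int = np.round(z_true)                         # integer-valued DEM elevations

# first derivative (slope) in the x direction for both surfaces
dz_true = np.gradient(z_true, cell, axis=1)
dz_int = np.gradient(z_int, cell, axis=1)

zero_fraction = np.mean(np.isclose(dz_int, 0.0))
print(f"true slope everywhere: {np.degrees(np.arctan(dz_true)).mean():.2f} deg")
print(f"cells with spurious zero slope in the integer DEM: {zero_fraction:.0%}")
```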

  12. Using Developmental Evaluation Methods with Communities of Practice

    ERIC Educational Resources Information Center

    van Winkelen, Christine

    2016-01-01

    Purpose: This paper aims to explore the use of developmental evaluation methods with community of practice programmes experiencing change or transition to better understand how to target support resources. Design/methodology/approach: The practical use of a number of developmental evaluation methods was explored in three organizations over a…

  13. EPA METHODS FOR EVALUATING WETLAND CONDITION, WETLANDS CLASSIFICATION

    EPA Science Inventory

    In 1999, the U.S. Environmental Protection Agency (EPA) began work on this series of reports entitled Methods for Evaluating Wetland Condition. The purpose of these reports is to help States and Tribes develop methods to evaluate 1) the overall ecological condition of wetlands us...

  14. Prevalence of Evaluation Method Courses in Education Leader Doctoral Preparation

    ERIC Educational Resources Information Center

    Shepperson, Tara L.

    2013-01-01

    This exploratory study investigated the prevalence of single evaluation methods courses in doctoral education leadership programs. Analysis of websites of 132 leading U.S. university programs found 62 evaluation methods courses in 54 programs. Content analysis of 49 course catalog descriptions resulted in five categories: survey, planning and…

  15. Direct toxicity assessment - Methods, evaluation, interpretation.

    PubMed

    Gruiz, Katalin; Fekete-Kertész, Ildikó; Kunglné-Nagy, Zsuzsanna; Hajdu, Csilla; Feigl, Viktória; Vaszita, Emese; Molnár, Mónika

    2016-09-01

    Direct toxicity assessment (DTA) results provide the scale of the actual adverse effect of contaminated environmental samples. DTA results are used in environmental risk management of contaminated water, soil and waste, without explicitly translating the results into chemical concentration. The end points are the same as in environmental toxicology in general, i.e. inhibition rate, decrease in the growth rate or in yield, and the 'no effect' or the 'lowest effect' measurement points of the sample dilution-response curve. The measurement unit cannot be a concentration, since the contaminants and their content in the sample are unknown. Thus toxicity is expressed as the sample proportion causing a certain scale of inhibition or no inhibition. Another option for characterizing the scale of toxicity of an environmental sample is equivalencing. Toxicity equivalencing represents an interpretation tool which enables the toxicity of unknown mixtures of chemicals to be converted into the concentration of an equivalently toxic reference substance. Toxicity equivalencing (i.e. expressing the toxicity of unknown contaminants as the concentration of the reference) makes DTA results better understandable for non-ecotoxicologists and other professionals educated and thinking based on the chemical model. This paper describes and discusses the role, the principles, the methodology and the interpretation of direct toxicity assessment (DTA) with the aim to contribute to the understanding of the necessity to integrate DTA results into environmental management of contaminated soil and water. The paper also introduces the benefits of the toxicity equivalency method. The use of DTA is illustrated through two case studies. The first case study focuses on DTA of treated wastewater with the aim to characterize the treatment efficacy of a biological wastewater treatment plant by frequent bioassaying. The second case study applied DTA to investigate the cover layers of two bauxite residue (red mud

  17. Turbulence mitigation methods and their evaluation

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; Dijk, Judith; Schutte, Klamer

    2014-10-01

    In general, long range detection, recognition and identification in visual and infrared imagery are hampered by turbulence caused by atmospheric conditions. The amount of turbulence is often indicated by the refractive-index structure parameter Cn2. The value of this parameter and its variation is determined by the turbulence effects over the optical path. Especially along horizontal optical paths near the surface (land-to-land scenario) large values and fluctuations of Cn2 occur, resulting in an extremely blurred and shaky image sequence. Another important parameter is the isoplanatic angle, θ0, which is the angle where the turbulence is approximately constant. Over long horizontal paths the values of θ0 are typically very small; much smaller than the field-of-view of the camera. Typical image artefacts that are caused by turbulence are blur, tilt and scintillation. These artefacts occur often locally in an image. Therefore turbulence corrections are required in each image patch of the size of the isoplanatic angle. Much research has been devoted to the field of turbulence mitigation. One of the main advantages of turbulence mitigation is that it enables visual recognition over larger distances by reducing the blur and motion in imagery. In many (military) scenarios this is of crucial importance. In this paper we give a brief overview of two software approaches to mitigate the visual artifacts caused by turbulence. These approaches are very diverse in complexity. It is shown that a more complex turbulence mitigation approach is needed to improve the imagery containing medium turbulence. The basic turbulence mitigation method is only capable of mitigating low turbulence.
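
    Because the isoplanatic angle is much smaller than the camera field of view, any correction has to be applied patch by patch. The sketch below estimates the patch size in pixels from an assumed isoplanatic angle and an assumed per-pixel instantaneous field of view; both numbers are illustrative and not taken from the paper.

```python
# Illustrative numbers only: a narrow-field camera on a horizontal path.
focal_length = 1.0        # m (assumed long-range optic)
pixel_pitch = 5e-6        # m (assumed detector pixel size)
theta0 = 100e-6           # rad, assumed isoplanatic angle

ifov = pixel_pitch / focal_length        # instantaneous field of view of one pixel [rad]
patch = max(1, round(theta0 / ifov))     # patch side length in pixels
print(f"IFOV per pixel: {ifov * 1e6:.1f} urad")
print(f"apply turbulence correction in patches of about {patch} x {patch} pixels")
```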

  18. Conceptual evaluation of population health surveillance programs: method and example.

    PubMed

    El Allaki, Farouk; Bigras-Poulin, Michel; Ravel, André

    2013-03-01

    Veterinary and public health surveillance programs can be evaluated to assess and improve the planning, implementation and effectiveness of these programs. Guidelines, protocols and methods have been developed for such evaluation. In general, they focus on a limited set of attributes (e.g., sensitivity and simplicity), that are assessed quantitatively whenever possible, otherwise qualitatively. Despite efforts at standardization, replication by different evaluators is difficult, making evaluation outcomes open to interpretation. This ultimately limits the usefulness of surveillance evaluations. At the same time, the growing demand to prove freedom from disease or pathogen, and the Sanitary and Phytosanitary Agreement and the International Health Regulations require stronger surveillance programs. We developed a method for evaluating veterinary and public health surveillance programs that is detailed, structured, transparent and based on surveillance concepts that are part of all types of surveillance programs. The proposed conceptual evaluation method comprises four steps: (1) text analysis, (2) extraction of the surveillance conceptual model, (3) comparison of the extracted surveillance conceptual model to a theoretical standard, and (4) validation interview with a surveillance program designer. This conceptual evaluation method was applied in 2005 to C-EnterNet, a new Canadian zoonotic disease surveillance program that encompasses laboratory based surveillance of enteric diseases in humans and active surveillance of the pathogens in food, water, and livestock. The theoretical standard used for evaluating C-EnterNet was a relevant existing structure called the "Population Health Surveillance Theory". Five out of 152 surveillance concepts were absent in the design of C-EnterNet. However, all of the surveillance concept relationships found in C-EnterNet were valid. The proposed method can be used to improve the design and documentation of surveillance programs. It

  19. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted content analysis regarding evaluation methods of qualitative healthcare research. We extracted descriptions on four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to consider evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  20. Development of high-resolution coastal DEMs: Seamlessly integrating bathymetric and topographic data to support coastal inundation modeling

    NASA Astrophysics Data System (ADS)

    Eakins, B. W.; Taylor, L. A.; Warnken, R. R.; Carignan, K. S.; Sharman, G. F.

    2006-12-01

    The National Geophysical Data Center (NGDC), an office of the National Oceanic and Atmospheric Administration (NOAA), is cooperating with the NOAA Pacific Marine Environmental Laboratory (PMEL), Center for Tsunami Research to develop high-resolution digital elevation models (DEMs) of combined bathymetry and topography. The coastal DEMs will be used as input for the Method of Splitting Tsunami (MOST) model developed by PMEL to simulate tsunami generation, propagation and inundation. The DEMs will also be useful in studies of coastal inundation caused by hurricane storm surge and rainfall flooding, resulting in valuable information for local planners involved in disaster preparedness. We present our methodology for creating the high-resolution coastal DEMs, typically at 1/3 arc-second (10 meters) cell size, from diverse digital datasets collected by numerous methods, in different terrestrial environments, and at various scales and resolutions; one important step is establishing the relationships between various tidal and geodetic vertical datums, which may vary over a gridding region. We also discuss problems encountered and lessons learned, using the Myrtle Beach, South Carolina DEM as an example.
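
    A heavily reduced sketch of the datum and merging step is shown below: bathymetric soundings referenced to mean lower low water (MLLW) are shifted onto the same orthometric datum as the topography before the combined point set is gridded at 1/3 arc-second. The single constant datum offset and the simple linear gridding are placeholder assumptions; in practice the transformation varies spatially over the gridding region and more careful interpolation is used.

```python
import numpy as np
from scipy.interpolate import griddata

def merge_bathy_topo(bathy_xyz, topo_xyz, mllw_to_navd88_m, cell_deg=1 / (3 * 3600)):
    """Shift bathymetry from MLLW to the topographic datum (here a single
    assumed offset) and grid the combined (lon, lat, z) points."""
    bathy = np.array(bathy_xyz, dtype=float)
    bathy[:, 2] += mllw_to_navd88_m                 # vertical datum shift
    pts = np.vstack([bathy, np.asarray(topo_xyz, dtype=float)])

    lon = np.arange(pts[:, 0].min(), pts[:, 0].max(), cell_deg)
    lat = np.arange(pts[:, 1].min(), pts[:, 1].max(), cell_deg)
    grid_lon, grid_lat = np.meshgrid(lon, lat)
    dem = griddata(pts[:, :2], pts[:, 2], (grid_lon, grid_lat), method="linear")
    return grid_lon, grid_lat, dem
```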

  1. GEOEYE-1 Satellite Stereo-Pair DEM Extraction Using Scale-Invariant Feature Transform on a Parallel Processing Platform

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis; Tsanis, Ioannis

    2013-04-01

    A module for Digital Elevation Model (DEM) extraction from Very High Resolution (VHR) satellite stereo-pair imagery was developed. A procedure for parallel processing of cascading image tiles is used for handling the large dataset requirements of VHR satellite imagery. The Scale-Invariant Feature Transform (SIFT) algorithm is used to detect potentially homologous features in the members of the stereo-pair. The resulting feature pairs are filtered using the RANdom SAmple Consensus (RANSAC) algorithm with a variable distance threshold. Finally, homologous pairs are converted to point cloud ground coordinates for DEM generation. The module is tested with a 0.5 m x 0.5 m GeoEye-1 stereo-pair acquired over an area of 25 sq km on the island of Crete, Greece. A sensitivity analysis is performed to determine the optimum module parameterization. The criterion of average point spacing irregularity is introduced to evaluate the quality and assess the effective resolution of the produced DEMs. The resulting 1.5 m x 1.5 m DEM has superior detail over the 2 m and 5 m DEMs used as reference and yields a Root Mean Square Error (RMSE) of about 1 m compared to ground truth measurements.
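
    A condensed sketch of the feature-matching stage (SIFT detection, ratio-test matching and RANSAC filtering) using OpenCV is shown below. The tiling scheme, the variable distance threshold and the conversion of matched pairs to ground coordinates used by the module are omitted, and the parameter values are illustrative.

```python
import cv2
import numpy as np

def match_stereo_pair(img_left, img_right, ratio=0.75, ransac_thresh=1.0):
    """SIFT keypoints + ratio-test matching, filtered with RANSAC through the
    epipolar (fundamental-matrix) constraint."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    _F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                         ransac_thresh, 0.99)
    mask = inliers.ravel().astype(bool)
    return pts1[mask], pts2[mask]
```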

  2. Evaluating Methods for Evaluating Instruction: The Case of Higher Education. NBER Working Paper No. 12844

    ERIC Educational Resources Information Center

    Weinberg, Bruce A.; Fleisher, Belton M.; Hashimoto, Masanori

    2007-01-01

    This paper studies methods for evaluating instruction in higher education. We explore student evaluations of instruction and a variety of alternatives. We develop a simple model to illustrate the biases inherent in student evaluations. Measuring learning using grades in future courses, we show that student evaluations are positively related to…

  3. A hybrid FEM-DEM approach to the simulation of fluid flow laden with many particles

    NASA Astrophysics Data System (ADS)

    Casagrande, Marcus V. S.; Alves, José L. D.; Silva, Carlos E.; Alves, Fábio T.; Elias, Renato N.; Coutinho, Alvaro L. G. A.

    2016-01-01

    In this work we address a contribution to the study of particle laden fluid flows in scales smaller than TFM (two-fluid models). The hybrid model is based on a Lagrangian-Eulerian approach. A Lagrangian description is used for the particle system employing the discrete element method (DEM), while a fixed Eulerian mesh is used for the fluid phase modeled by the finite element method (FEM). The resulting coupled DEM-FEM model is integrated in time with a subcycling scheme. The aforementioned scheme is applied in the simulation of a seabed current to analyze which mechanisms lead to the emergence of bedload transport and sediment suspension, and also quantify the effective viscosity of the seabed in comparison with the ideal no-slip wall condition. A simulation of a salt plume falling in a fluid column is performed, comparing the main characteristics of the system with an experiment.
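
    The subcycling time integration mentioned above can be summarised in a few lines: one large fluid (FEM) step is followed by several smaller DEM substeps, because the stable DEM time step dictated by particle stiffness is usually much smaller than the fluid step. The solver objects and method names below are placeholders for illustration, not the authors' interfaces.

```python
def run_coupled_fem_dem(fluid, particles, t_end, dt_fluid, n_sub):
    """Advance a coupled FEM fluid / DEM particle system with subcycling:
    one fluid step of dt_fluid, then n_sub DEM substeps of dt_fluid / n_sub."""
    dt_dem = dt_fluid / n_sub
    t = 0.0
    while t < t_end:
        fluid.step(dt_fluid, particles)            # fluid sees current particle state
        for _ in range(n_sub):
            forces = fluid.forces_on(particles)    # hydrodynamic forces, frozen fluid field
            particles.step(dt_dem, forces)         # contacts + Newton's equations of motion
        t += dt_fluid
    return fluid, particles
```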

  4. The TanDEM-X Digital Elevation Model and the Terrestrial Impact Crater Record

    NASA Astrophysics Data System (ADS)

    Gottwald, M.; Fritz, T.; Breit, H.; Schättler, B.; Harris, A.

    2016-08-01

    The German TanDEM-X mission provides a new high-quality DEM of the Earth's surface. We present the current status of this DEM, show how it compares with existing DEMs from spaceborne sensors and illustrate results of our global mapping project.

  5. An interdisciplinary heuristic evaluation method for universal building design.

    PubMed

    Afacan, Yasemin; Erbug, Cigdem

    2009-07-01

    This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.

  6. A new objective method of evaluating image sharpness

    NASA Astrophysics Data System (ADS)

    Isono, H.

    1984-02-01

    Our daily lives are filled with a variety of images, including computer tomography and ultrasound images as well as photographs and TV pictures, and the image processing technology of computers has progressed greatly. Although the final estimation of image quality must be made as a subjective evaluation by a human observer, the objective evaluation of image quality is extremely important to image technology, so that image transmission systems can be designed rationally and image quality improved. Image sharpness is particularly important as a psycho-physical factor affecting the image quality of photographs and TV pictures. Many attempts have been made to represent image sharpness using the physical parameters of image transmission systems, and a variety of evaluation methods have been proposed for image sharpness. However, conventional sharpness evaluation methods may not fully apply to the evaluation of image sharpness for TV displays. A new evaluation method incorporating improvements to the calculation of TV image sharpness is proposed.

  7. SPH-DEM simulations of grain dispersion by liquid injection

    NASA Astrophysics Data System (ADS)

    Robinson, Martin; Luding, Stefan; Ramaioli, Marco

    2013-06-01

    We study the dispersion of an initially packed, static granular bed by the injection of a liquid jet. This is a relevant system for many industrial applications, including paint dispersion or food powder dissolution. Both decompaction and dispersion of the powder are not fully understood, leading to inefficiencies in these processes. Here we consider a model problem where the liquid jet is injected below a granular bed contained in a cylindrical cell. Two different initial conditions are considered: a two-phase case where the bed is initially fully immersed in the liquid and a three-phase case where the bed and cell are completely dry preceding the injection of the liquid. The focus of this contribution is the simulation of these model problems using a two-way coupled SPH-DEM granular-liquid method [M. Robinson, M. Ramaioli, and S. Luding, submitted (2013) and http://arxiv.org/abs/1301.0752 (2013)]. This is a purely particle-based method without any prescribed mesh, well suited for this and other problems involving a free (liquid-gas) surface and a partly immersed particle phase. Our simulations show the effect of process parameters such as injection flow rate and injection diameter on the dispersion pattern, namely whether the granular bed is impregnated bottom-up or a jet is formed, and compare well with experiments.

  8. Disentangling conformational states of macromolecules in 3D-EM through likelihood optimization.

    PubMed

    Scheres, Sjors H W; Gao, Haixiao; Valle, Mikel; Herman, Gabor T; Eggermont, Paul P B; Frank, Joachim; Carazo, Jose-Maria

    2007-01-01

    Although three-dimensional electron microscopy (3D-EM) permits structural characterization of macromolecular assemblies in distinct functional states, the inability to classify projections from structurally heterogeneous samples has severely limited its application. We present a maximum likelihood-based classification method that does not depend on prior knowledge about the structural variability, and demonstrate its effectiveness for two macromolecular assemblies with different types of conformational variability: the Escherichia coli ribosome and Simian virus 40 (SV40) large T-antigen.

  9. Development of an unresolved CFD-DEM model for the flow of viscous suspensions and its application to solid-liquid mixing

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Lassaigne, Manon; Goniva, Christoph; Fradette, Louis; Bertrand, François

    2016-08-01

    Although viscous solid-liquid mixing plays a key role in the industry, the vast majority of the literature on the mixing of suspensions is centered around the turbulent regime of operation. However, the laminar and transitional regimes face considerable challenges. In particular, it is important to know the minimum impeller speed (Njs) that guarantees the suspension of all particles. In addition, local information on the flow patterns is necessary to evaluate the quality of mixing and identify the presence of dead zones. Multiphase computational fluid dynamics (CFD) is a powerful tool that can be used to gain insight into local and macroscopic properties of mixing processes. Among the variety of numerical models available in the literature, which are reviewed in this work, unresolved CFD-DEM, which combines CFD for the fluid phase with the discrete element method (DEM) for the solid particles, is an interesting approach due to its accurate prediction of the granular dynamics and its capability to simulate large amounts of particles. In this work, the unresolved CFD-DEM method is extended to viscous solid-liquid flows. Different solid-liquid momentum coupling strategies, along with their stability criteria, are investigated and their accuracies are compared. Furthermore, it is shown that an additional sub-grid viscosity model is necessary to ensure the correct rheology of the suspensions. The proposed model is used to study solid-liquid mixing in a stirred tank equipped with a pitched blade turbine. It is validated qualitatively by comparing the particle distribution against experimental observations, and quantitatively by comparing the fraction of suspended solids with results obtained via the pressure gauge technique.
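
    The abstract states that a sub-grid viscosity model tied to the local solid fraction is needed to recover the correct suspension rheology, without naming the closure. As an illustration only, the sketch below uses a Krieger-Dougherty-type correction, a common choice for this purpose; the maximum packing fraction and intrinsic viscosity are assumed values and not necessarily those of the cited work.

```python
def suspension_viscosity(mu_fluid, solid_fraction, phi_max=0.63, intrinsic_visc=2.5):
    """Krieger-Dougherty-type effective viscosity as a function of the local
    solid volume fraction (an illustrative sub-grid closure)."""
    phi = min(solid_fraction, 0.999 * phi_max)     # keep the ratio below 1
    return mu_fluid * (1.0 - phi / phi_max) ** (-intrinsic_visc * phi_max)

# Example: a 1 Pa.s Newtonian liquid loaded with 30 % solids by volume
print(f"effective viscosity: {suspension_viscosity(1.0, 0.30):.2f} Pa.s")
```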

  10. A method for evaluating areas for national park status

    NASA Astrophysics Data System (ADS)

    Gülez, Sümer

    1992-11-01

    A procedure for evaluating different areas as national parks based on a scoring system is proposed. A National Park Evaluation Form (NPEF) evaluating natural, cultural, and recreational resources in accordance with international criteria for national parks is presented. The evaluation points given to an area indicate the possibility of the area becoming a national park. In this method, subjectivity and bias have been minimized by a special application of the Delphi technique. The method outlined here could help in the efforts of selecting and establishing national parks in many countries.

  11. Handbook of test methods for evaluating chemical deicers

    SciTech Connect

    Chappelow, C.C.; McElroy, A.D.; Blackburn, R.R.; Darwin, D.; de Noyelles, F.G.

    1992-11-01

    The handbook contains a structured selection of specific test methods for complete characterization of deicing chemicals. Sixty-two specific test methods are defined for the evaluation of chemical deicers in eight principal property performance areas: (1) physicochemical characteristics; (2) deicing performance; (3) compatibility with bare and coated metals; (4) compatibility with metals in concrete; (5) compatibility with concrete and nonmetals; (6) engineering parameters; (7) ecological effects; and (8) health and safety aspects. The 62 specific chemical deicer test methods are composed of 12 primary and 50 supplementary test methods. The primary test methods, which were developed for conducting the more important evaluations, are identified in the report.

  12. Finding the service you need: human centered design of a Digital Interactive Social Chart in DEMentia care (DEM-DISC).

    PubMed

    van der Roest, H G; Meiland, F J M; Haaker, T; Reitsma, E; Wils, H; Jonker, C; Dröes, R M

    2008-01-01

    Community dwelling people with dementia and their informal carers experience a lot of problems. In the course of the disease process people with dementia become more dependent on others and professional help is often necessary. Many informal carers and people with dementia experience unmet needs with regard to information on the disease and on the available care and welfare offer, therefore they tend not to utilize the broad spectrum of available care and welfare services. This can have very negative consequences like unsafe situations, social isolation of the person with dementia and overburden of informal carers with consequent increased risk of illness for them. The development of a DEMentia specific Digital Interactive Social Chart (DEM-DISC) may counteract these problems. DEM-DISC is a demand oriented website for people with dementia and their carers, which is easy, accessible and provides users with customized information on healthcare and welfare services. DEM-DISC is developed according to the human centered design principles, this means that people with dementia, informal carers and healthcare professionals were involved throughout the development process. This paper describes the development of DEM-DISC from four perspectives, a domain specific content perspective, an ICT perspective, a user perspective and an organizational perspective. The aims and most important results from each perspective will be discussed. It is concluded that the human centered design was a valuable method for the development of the DEM-DISC.

  13. A Preliminary Rubric Design to Evaluate Mixed Methods Research

    ERIC Educational Resources Information Center

    Burrows, Timothy J.

    2013-01-01

    With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…

  14. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and the set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
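
    The AHP part of such a scheme reduces to deriving indicator weights from a pairwise comparison matrix. A minimal numpy sketch is shown below; the comparison values are invented for illustration and the SPA connection-degree step is not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal eigenvector), plus the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)    # Saaty random index (tabulated)
    return w, ci / ri

# Illustrative comparison of three credit indicators (values assumed)
matrix = [[1,     3,     5],
          [1 / 3, 1,     2],
          [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(matrix)
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```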

  15. Entrepreneur environment management behavior evaluation method derived from environmental economy.

    PubMed

    Zhang, Lili; Hou, Xilin; Xi, Fengru

    2013-12-01

    An evaluation system can encourage and guide entrepreneurs and impel them to perform well in environmental management. An evaluation method based on advantage structure is established and used to analyze entrepreneur environmental management behavior in China. An entrepreneur environmental management behavior evaluation index system is constructed based on empirical research. An evaluation method for entrepreneurs is put forward from the viewpoint of objective programming theory, to alert the entrepreneurs concerned to pay attention to it; it takes the minimized objective function as the comprehensive evaluation result and identifies the disadvantage structure pattern. Application research shows that the overall environmental management behavior of Chinese entrepreneurs is good; specifically, environmental strategic behavior ranks best, environmental management behavior second, and cultural behavior last. The application results show the efficiency and feasibility of this method. PMID:25078816

  17. Phase4: automatic evaluation of database search methods.

    PubMed

    Rehmsmeier, Marc

    2002-12-01

    It has become standard to evaluate newly devised database search methods in terms of sensitivity and selectivity and to compare them with existing methods. This involves the construction of a suitable evaluation scenario, the execution of the methods, the assessment of their performances, and the presentation of the results. Each of these four phases and their smooth connection usually imposes formidable work. To relieve the evaluator of this burden, a system has been designed with which evaluations can be effected rapidly. It is implemented in the programming language Python whose object-oriented features are used to offer a great flexibility in changing the evaluation design. A graphical user interface is provided which offers the usual amenities such as radio- and checkbuttons or file browsing facilities.
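
    Assessing a database search method in terms of sensitivity and selectivity usually amounts to scanning a ranked hit list against known true and false relationships. The sketch below (in Python, like Phase4 itself) builds a coverage-versus-error curve from such a list; it is a generic illustration, not Phase4 code, and the example numbers are invented.

```python
def coverage_vs_errors(hits, n_true_pairs, n_queries):
    """hits: iterable of (score, is_true_positive), e.g. from a benchmark run.
    Returns (errors_per_query, sensitivity) pairs as the score threshold is
    lowered through the ranked hit list."""
    tp = fp = 0
    curve = []
    for _score, is_tp in sorted(hits, key=lambda h: h[0], reverse=True):
        if is_tp:
            tp += 1
        else:
            fp += 1
        curve.append((fp / n_queries, tp / n_true_pairs))
    return curve

# Tiny invented example: six scored hits, four true relationships, three queries
example = [(9.1, True), (8.0, True), (5.5, False), (4.2, True), (3.3, False), (1.0, True)]
for epq, sens in coverage_vs_errors(example, n_true_pairs=4, n_queries=3):
    print(f"errors/query {epq:.2f}  sensitivity {sens:.2f}")
```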

  18. A global vegetation corrected SRTM DEM for use in hazard modelling

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; O'Loughlin, F.; Neal, J. C.; Durand, M. T.; Alsdorf, D. E.; Paiva, R. C. D.

    2015-12-01

    We present the methodology and results from the development of a near-global 'bare-earth' Digital Elevation Model (DEM) derived from the Shuttle Radar Topography Mission (SRTM) data. Digital Elevation Models are the most important input for hazard modelling, as the DEM quality governs the accuracy of the model outputs. While SRTM is currently the best near-globally [60N to 60S] available DEM, it requires adjustments to reduce the vegetation contamination and make it useful for hazard modelling over heavily vegetated areas (e.g. tropical wetlands). Unlike previous methods of accounting for vegetation contamination, which concentrated on correcting relatively small areas and usually applied a static adjustment, we account for vegetation contamination globally and apply a spatially varying correction, based on information about canopy height and density. Our new 'Bare-Earth' SRTM DEM combines multiple remote sensing datasets, including ICESat GLA14 ground elevations, the vegetation continuous field dataset as a proxy for penetration depth of SRTM and a global vegetation height map, to remove the vegetation artefacts present in the original SRTM DEM. In creating the final 'bare-earth' SRTM DEM dataset, we produced three different 'bare-earth' SRTM products. The first applies global parameters, while the second and third products apply parameters that are regionalised based on either climatic zones or vegetation types, respectively. We also tested two different canopy density proxies of different spatial resolution. Using ground elevations obtained from the ICESat GLA14 satellite altimeter, we calculate the residual errors for the raw SRTM and the three 'bare-earth' SRTM products and compare performances. The three 'bare-earth' products all show large improvements over the raw SRTM in vegetated areas with the overall mean bias reduced by between 75 and 92% from 4.94 m to 0.40 m. The overall standard deviation is reduced by between 29 and 33 % from 7.12 m to 4.80 m. As
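
    The correction itself amounts to subtracting, at every cell, the part of the canopy that the radar did not penetrate. The sketch below assumes a simple multiplicative relationship between canopy height, canopy cover and the SRTM penetration depth; the regional parameterisation and the ICESat-based calibration used in the study are not reproduced, and the penetration factor is an invented example value.

```python
import numpy as np

def bare_earth_srtm(srtm, canopy_height, canopy_cover, k=0.6):
    """Remove vegetation bias from SRTM elevations.

    srtm          : SRTM elevations (m), vegetation-contaminated
    canopy_height : vegetation height map (m)
    canopy_cover  : vegetation continuous field, fraction 0-1
    k             : assumed fraction of the canopy that the C-band phase
                    centre sits above (calibrated against ICESat ground
                    elevations in the actual workflow)
    """
    bias = k * np.asarray(canopy_cover) * np.asarray(canopy_height)
    return np.asarray(srtm) - bias

# Illustrative single cell: 25 m tall forest with 80 % canopy cover
print(bare_earth_srtm(srtm=120.0, canopy_height=25.0, canopy_cover=0.8))   # ~108 m
```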

  19. Coupled DEM-CFD Investigation of Granular Transport in a Fluid Channel

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Dai, F.; Xu, N. W.

    2015-09-01

    This paper presents three-dimensional numerical investigations of granular transport in fluids, analysed by the Discrete Element Method (DEM) coupled with Computational Fluid Dynamics (CFD). By employing this model, the relationship between flow velocity and granular depositional morphology has been clarified: the larger the flow velocity, the further the grains are transported. In this process, the segregation of solid grains has been clearly identified. This research reveals that coarse grains normally accumulate near the grain source region, while fine grains can be transported to the flow front. Regardless of the different flow velocities used in these simulations, the intensity of grain segregation remains almost unchanged. The results obtained from the coupled DEM-CFD simulations can reasonably explain grain transport processes occurring in natural environments, such as river scouring, the evolution of river/ocean floors, deserts and submarine landslides.

  20. DEM Simulation of Particle Clogging in Fiber Filtration

    NASA Astrophysics Data System (ADS)

    Tao, Ran; Yang, Mengmeng; Li, Shuiqing

    2015-11-01

    The formation of porous particle deposits plays a crucial role in determining the efficiency of filtration process. In this work, an adhesive discrete element method (DEM), in combination with CFD, is developed to dynamically describe these porous deposit structures and the changed flow field between two parallel fibers under the periodic boundary conditions. For the first time, it is clarified that the structures of clogged particles are dependent on both the adhesion parameter (defined as the ratio of interparticle adhesion to particle inertia) and the Stokes number (as an index of impaction efficiency). The relationship between the pressure-drop gradient and the coordination number along the filtration time is explored, which can be used to quantitatively classify the different filtration regimes, i.e., clean filter stage, clogging stage and cake filtration stage. Finally, we investigate the influence of the fiber separation distance on the particle clogging behavior, which affects the collecting efficiency of the fibers significantly. The results suggest that changing the arrangement of fibers can improve the filter performance. This work has been funded by the National Key Basic Research and Development Program (2013CB228506).
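
    The two dimensionless groups named above can be illustrated with a short sketch. The exact definitions used in the study may differ; the forms below are common choices for fiber filtration (a Stokes number built from the particle relaxation time and fiber diameter, and an adhesion parameter built from a surface-energy-to-kinetic-energy ratio), and all symbols are assumptions.

```python
def stokes_number(rho_p, d_p, u0, mu, d_f):
    """Stokes number for a particle approaching a fiber: particle inertia
    relative to the fluid's ability to deflect it around the fiber.

    rho_p : particle density (kg/m^3), d_p : particle diameter (m),
    u0    : face velocity (m/s), mu : gas viscosity (Pa s),
    d_f   : fiber diameter (m).
    """
    return rho_p * d_p**2 * u0 / (18.0 * mu * d_f)

def adhesion_parameter(gamma, rho_p, d_p, u0):
    """Ratio of inter-particle adhesion (surface energy gamma, J/m^2) to a
    particle kinetic-energy scale; large values favour sticking and the growth
    of porous dendritic deposits."""
    return gamma / (rho_p * u0**2 * d_p)
```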

  1. ASTM test methods for composite characterization and evaluation

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1994-01-01

    A discussion of the American Society for Testing and Materials is given. Under the topic of composite materials characterization and evaluation, general industry practice and test methods for textile composites are presented.

  2. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD

    EPA Science Inventory

    Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...

  3. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).

  4. Laboratory milling method for whole grain soft wheat flour evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Whole grain wheat products are a growing portion of the food market in North America, yet few standard methods exist to evaluate whole grain wheat flour. This study evaluated two flour milling systems to produce whole grain soft wheat flour for a standard soft wheat product, a wire-cut cookie. A...

  5. Evaluation of methods of temperament scoring for beef cattle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Temperament can negatively affect various production traits, including live weight, ADG, DMI, conception rates and carcass weight. The objective of this research study was to evaluate temperament scoring methods in beef cattle. Crossbred (n = 228) calves were evaluated for temperament at weaning by ...

  6. Evaluating Multiple Prevention Programs: Methods, Results, and Lessons Learned

    ERIC Educational Resources Information Center

    Adler-Baeder, Francesca; Kerpelman, Jennifer; Griffin, Melody M.; Schramm, David G.

    2010-01-01

    Extension faculty and agents/educators are increasingly collaborating with local and state agencies to provide and evaluate multiple, distinct programs, yet there is limited information about measuring outcomes and combining results across similar program types. This article explicates the methods and outcomes of a state-level evaluation of…

  7. The Use of DEM to Capture the Dynamics of the Flow of Solid Pellets in a Single Screw Extruder

    NASA Astrophysics Data System (ADS)

    Hong, He; Covas, J. A.; Gaspar-Cunha, A.

    2007-05-01

    Despite the developments in numerical modeling of polymer plasticating single-screw extrusion, the initial stages of solids conveying are still treated unsatisfactorily, with a simple plug-flow condition being assumed. It is well known that this produces poor predictions of relevant process parameters, e.g., output. This work reports on an attempt to model the process using the Discrete Element Method (DEM) with the aim of unveiling the dynamics of the process. Using DEM, each pellet is treated as a separate unit, so predictions of flow patterns, velocity fields and degree of filling are possible. We present the algorithm and a few preliminary results.

  8. A formative multi-method approach to evaluating training.

    PubMed

    Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham

    2016-10-01

    This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct, evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-prong framework (inquiry, observation, and reflection). These methods included: comprehensive survey, rapid feedback form, learning wall, observational form, team debrief, social network analysis and critical moments reflection. The seven distinct methods allowed for both real time quality improvement during the CHILA and long term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training. PMID:27454882

  9. Development of an automatic evaluation method for patient positioning error.

    PubMed

    Kubota, Yoshiki; Tashiro, Mutsumi; Shinohara, Ayaka; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2015-07-08

    Highly accurate radiotherapy needs highly accurate patient positioning. At our facility, patient positioning is manually performed by radiology technicians. After the positioning, positioning error is measured by manually comparing some positions on a digital radiography image (DR) to the corresponding positions on a digitally reconstructed radiography image (DRR). This method is prone to error and can be time-consuming because of its manual nature. Therefore, we propose an automated measuring method for positioning error to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best matched position using the block-matching method. The zero-mean normalized cross correlation was used as our evaluation function, and the Gaussian weight function was used to increase importance as the pixel position approached the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and the method's effectiveness was evaluated on images of prostate cancer patients before the positioning, comparing them with the results of radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23 ± 0.05 mm. The coefficients between the calculation method and the measurement results of the technicians were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total evaluation results of positioning for prostate cancer patients using the calculation method was 0.32 ± 0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method was evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using the calculation method. Consequently, we expect an improvement in treatment throughput for these other sites.
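
    A minimal sketch of the matching step described above: a zero-mean normalized cross-correlation weighted by a Gaussian centred on the patch (standing in for the isocenter), evaluated over an exhaustive set of integer shifts. Window sizes, the search range, the wrap-around handling via `np.roll`, and the sigma of the weight are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def weighted_zncc(dr_patch, drr_patch, weight):
    """Zero-mean normalized cross-correlation between a DR patch and a DRR
    patch, with a per-pixel weight (e.g. a Gaussian emphasising the isocenter)."""
    a = dr_patch - np.average(dr_patch, weights=weight)
    b = drr_patch - np.average(drr_patch, weights=weight)
    num = np.sum(weight * a * b)
    den = np.sqrt(np.sum(weight * a**2) * np.sum(weight * b**2))
    return num / den

def best_shift(dr, drr, search=10):
    """Exhaustive block matching: slide the DRR within +/- `search` pixels and
    keep the shift that maximizes the weighted ZNCC (wrap-around edges are
    ignored for simplicity)."""
    h, w = dr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = min(h, w) / 4.0                      # assumed width of the weight
    weight = np.exp(-((yy - h / 2)**2 + (xx - w / 2)**2) / (2 * sigma**2))
    best_score, best_offset = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)
            score = weighted_zncc(dr, shifted, weight)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset
```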

  10. Aster Global dem Version 3, and New Aster Water Body Dataset

    NASA Astrophysics Data System (ADS)

    Abrams, M.

    2016-06-01

    In 2016, the US/Japan ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) project released Version 3 of the Global DEM (GDEM). This 30 m DEM covers the earth's surface from 82N to 82S, and improves on two earlier versions by correcting some artefacts and filling in areas of missing DEMs through the acquisition of additional data. The GDEM was produced by stereocorrelation of 2 million ASTER scenes, with processing on a pixel-by-pixel basis: cloud screening; stacking data from overlapping scenes; removing outlier values; and averaging elevation values. As previously, the GDEM is packaged in ~ 23,000 1 x 1 degree tiles. Each tile has a DEM file, and a NUM file reporting the number of scenes used for each pixel and identifying the source of fill-in data (where persistent clouds prevented computation of an elevation value). An additional data set was concurrently produced and released: the ASTER Water Body Dataset (AWBD). This is a 30 m raster product which encodes every pixel as either lake, river, or ocean, thus providing a global inland and shore-line water body mask. Water was identified through spectral analysis algorithms and manual editing. This product was evaluated against the Shuttle Water Body Dataset (SWBD) and the Landsat-based Global Inland Water (GIW) product. The SWBD only covers the earth between about 60 degrees north and south, so it is not a global product. The GIW only delineates inland water bodies and does not deal with ocean coastlines. All products are at 30 m postings.
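
    The per-pixel compositing steps listed above (stacking overlapping scenes, rejecting outliers, averaging, and counting contributing scenes for the NUM layer) can be sketched roughly as follows; the z-score outlier rule and its threshold are assumptions, not the production GDEM algorithm.

```python
import numpy as np

def stack_elevations(scene_stack, z_thresh=2.0):
    """GDEM-style per-pixel compositing for a stack of co-registered scene DEMs.

    scene_stack : 3-D array (n_scenes, rows, cols) with np.nan where a scene is
                  cloud-screened or has no data.
    Returns the averaged DEM and a NUM-like count of scenes used per pixel
    (pixels with no valid scene come out as NaN / 0).
    """
    mean = np.nanmean(scene_stack, axis=0)
    std = np.nanstd(scene_stack, axis=0)
    # Flag values far from the per-pixel mean as outliers and drop them.
    outliers = np.abs(scene_stack - mean) > z_thresh * std
    cleaned = np.where(outliers, np.nan, scene_stack)
    dem = np.nanmean(cleaned, axis=0)
    num = np.sum(~np.isnan(cleaned), axis=0)
    return dem, num
```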

  11. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  12. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young. Many KARAOKE machines have a singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper, a KARAOKE scoring method strongly correlated with human evaluation is proposed. This paper proposes a way to evaluate songs based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76∼-0.89.
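
    A toy version of the pitch-to-scale distance idea: convert a voiced pitch track to continuous MIDI note numbers, measure the deviation from the nearest equal-tempered semitone in cents, and map the mean deviation to a score. The real method also matches against the song's own scale and extracts vibrato by spectral template matching; everything below, including the score mapping, is an illustrative assumption.

```python
import numpy as np

def pitch_score(f0_hz, a4=440.0):
    """Score a sung phrase from the distance between its pitch track and the
    nearest equal-tempered scale tone; smaller distance -> higher score."""
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                                   # keep voiced frames only
    midi = 69 + 12 * np.log2(f0 / a4)                 # continuous MIDI note numbers
    cents_off = 100 * np.abs(midi - np.round(midi))   # distance to nearest semitone
    # Map a mean deviation of 0-50 cents onto a 100-0 score (assumed mapping).
    return 100.0 - np.clip(np.mean(cents_off), 0, 50) * 2
```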

  13. Investigation into methods of nondestructive evaluation of masonry structures

    NASA Astrophysics Data System (ADS)

    Noland, J. L.; Atkinson, R. H.; Baur, J. C.

    1982-02-01

    Six nondestructive evaluation (NDE) test methods were investigated to assess their potential for strength and condition evaluation of masonry using unmodified, commercially available equipment. The methods were: vibration, rebound hammer, penetration, ultrasonic pulse velocity, mechanical pulse velocity, and acoustic-mechanical pulse. These methods were applied to two-wythe cantilever wall specimens. Companion small-scale specimens, specimens removed from the walls after the NDE tests, and in-wall specimens were tested to destruction to provide compression, shear, and flexural strength data for correlation studies. Results indicated that strength properties of the masonry tested could generally be estimated by some of the NDE methods considered. Investigation of the acoustic-mechanical pulse method indicated that consistent measurements could be obtained and that flaws could be detected. Nondestructive methods offer a means of relative quality assessment and flaw detection, and some modifications to the equipment would enhance the efficacy of the methods.

  14. Revisiting heuristic evaluation methods to improve the reliability of findings.

    PubMed

    Georgsson, Mattias; Weir, Charlene R; Staggers, Nancy

    2014-01-01

    The heuristic evaluation (HE) method is one of the most common in the suite of tools for usability evaluations because it is a fast, inexpensive and resource-efficient process in relation to the many usability issues it generates. The method emphasizes completely independent initial expert evaluations. Inter-rater reliability and agreement coefficients are not calculated. The variability across evaluators, even dual domain experts, can be significant as is seen in the case study here. The implications of this wide variability mean that results are unique to each HE, results are not readily reproducible and HE research on usability is not yet creating a uniform body of knowledge. We offer recommendations to improve the science by incorporating selected techniques from qualitative research: calculating inter-rater reliability and agreement scores, creating a codebook to define concepts/categories and offering crucial information about raters' backgrounds, agreement techniques and the evaluation setting.
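
    One of the recommendations above is to report inter-rater agreement; a small sketch of Cohen's kappa for two evaluators assigning severity categories to the same set of usability findings is shown below (the example ratings are invented).

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving categorical judgments on the same
    items (e.g. severity categories assigned to usability findings)."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical example: two dual-domain experts rating ten findings on a 0-4 scale.
print(cohens_kappa([0, 1, 3, 4, 2, 2, 0, 1, 3, 4],
                   [0, 1, 2, 4, 2, 3, 0, 1, 3, 4]))
```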

  15. Best Estimate Method vs Evaluation Method: a comparison of two techniques in evaluating seismic analysis and design

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-05-01

    The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  16. Comparison of different GCM evaluation methods for NYC watersheds

    NASA Astrophysics Data System (ADS)

    Anandhi, A.; Frei, A.; Pierson, D. C.; Markensten, H.

    2009-12-01

    To study the potential impacts of climate change on the quantity and quality of water in the New York City (NYC) water supply, a suite of watershed and reservoir models are required. The evaluation of Global circulation models (GCMs) to provide input data for this suite of models becomes important and valuable, as the number of watershed and reservoir model runs for impact studies increases exponentially with every GCM selected. Our objective is to investigate different methods of GCM evaluation. In this study GCM performance refers to how well a GCM simulates the observed climate record. Using a variety of evaluation methods, we compare observed meteorological variables which are required as input to our models with contemporaneous values of variables from GCM simulations. We then investigate different criteria for choosing appropriate evaluation methods.
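
    As a sketch of what "how well a GCM simulates the observed climate record" can mean in practice, the function below computes a few common comparison measures for one observed and one simulated series of equal length; the specific metrics and the crude quantile comparison are assumptions, not the evaluation criteria used in the study.

```python
import numpy as np

def gcm_skill(observed, simulated):
    """Compare a GCM-simulated series with the contemporaneous observed record
    for one meteorological variable (series must have equal length)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    corr = np.corrcoef(obs, sim)[0, 1]
    # Distribution-oriented check: largest gap between matched sorted values,
    # since GCM output is not expected to track observations time step by step.
    max_quantile_gap = np.max(np.abs(np.sort(obs) - np.sort(sim)))
    return {"bias": bias, "rmse": rmse, "corr": corr,
            "max_quantile_gap": max_quantile_gap}
```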

  17. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method which can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  18. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  19. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  20. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  1. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  2. 10 CFR 963.13 - Preclosure suitability evaluation method.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Preclosure suitability evaluation method. 963.13 Section 963.13 Energy DEPARTMENT OF ENERGY YUCCA MOUNTAIN SITE SUITABILITY GUIDELINES Site Suitability... geologic repository at the Yucca Mountain site using the method described in paragraph (b) of this...

  3. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  4. Methods for the evaluation of alternative disaster warning systems

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.

    1977-01-01

    For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.

  5. What Can Mixed Methods Designs Offer Professional Development Program Evaluators?

    ERIC Educational Resources Information Center

    Giordano, Victoria; Nevin, Ann

    2007-01-01

    In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…

  6. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
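
    A rough reconstruction of the sparsification step described above, using PyWavelets with the 6-tap Daubechies filter ('db3', assumed here to correspond to DAUB6) and zeroing the smallest 95 percent of coefficients. The decomposition level and quantile threshold are illustrative choices, not the paper's exact settings.

```python
import numpy as np
import pywt

def sparsify_dem(dem, wavelet="db3", keep=0.05, level=4):
    """Wavelet-domain sparsification of a DEM: transform with Daubechies
    filters, keep only the largest `keep` fraction of coefficients, invert."""
    coeffs = pywt.wavedec2(dem, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)
    coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
    rec = pywt.waverec2(coeffs_sparse, wavelet)
    # Reconstruction can be one sample larger along odd-sized axes; crop back.
    return rec[:dem.shape[0], :dem.shape[1]]
```

    Derivative quality can then be checked by comparing, for example, np.gradient of the original and reconstructed rasters.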

  7. Objective evaluation method of steering comfort based on movement quality evaluation of driver steering maneuver

    NASA Astrophysics Data System (ADS)

    Yang, Yiyong; Liu, Yahui; Wang, Man; Ji, Run; Ji, Xuewu

    2014-09-01

    Existing research on steering comfort mainly focuses on subjective evaluation, aiming at designing and optimizing the steering system. In the development of steering systems, and especially in the evaluation of steering comfort, objective evaluation methods that consider the kinematic characteristics of the driver's steering maneuver have not been proposed, which means that the objective evaluation of steering cannot be conducted together with an evaluation of the driver's kinematics during the steering maneuver. In order to propose an objective evaluation method of steering comfort, an evaluation of the driver's steering movement quality is developed on the basis of a study of the kinematic characteristics of the steering maneuver. First, the steering motion trajectories of the driver in both comfortable and certain extreme uncomfortable operating conditions are recorded using a Vicon motion capture system. The operating conditions are defined by restrictions on the vertical height and horizontal distance between the steering wheel center and the driver's H-point, and on the steering resisting torque. Next, the movement quality of the driver's steering maneuver is assessed using twelve evaluation indices based on kinematic analyses of the steering motion trajectories, in order to propose an objective evaluation method. Finally, an integrated discomfort index of the steering maneuver is proposed on the basis of a regression analysis of the subjective evaluation ratings and the movement quality evaluation indices, including the Jerk, Discomfort and Joint Torque indices. The test results show that the proposed integrated discomfort index fits the subjective evaluation of discomfort well, which means it can be used to evaluate or predict the discomfort level of a steering maneuver. This paper proposes an objective evaluation method of steering comfort based on the movement quality evaluation of the driver's steering maneuver.

  8. 2D DEM model of sand transport with wind interaction

    NASA Astrophysics Data System (ADS)

    Oger, L.; Valance, A.

    2013-06-01

    The advance of dunes in the desert is a threat to the life of local people. The dunes invade houses and agricultural land and perturb circulation on the roads. It is therefore very important to understand the mechanism of sand transport in order to fight against desertification. Saltation, in which sand grains are propelled by the wind along the surface in short hops, is the primary mode of blown sand movement [1]. The saltating grains are very energetic and, when they impact a sand surface, they rebound and consequently eject other particles from the sand bed. The ejected grains, called reptating grains, contribute to the augmentation of the sand flux, and some of them can be promoted to saltation motion. We use a mechanical model based on the Discrete Element Method to study successive collisions of incident energetic beads with a granular packing in the context of Aeolian saltation transport. We investigate the collision process for the case where the incident bead and those from the packing have identical mechanical properties. We analyze the features of the consecutive collision processes produced by the transport of the saltating disks by a wind whose profile is obtained from the counter-interaction between the air flow and the grain flows. We used a molecular dynamics method known as DEM (soft Discrete Element Method) with an initial static packing of 20000 2D particles. The dilation of the upper surface due to the consecutive collisions is responsible for maintaining the flow at a given energy input from the wind.

  9. Study on evaluation methods for Rayleigh wave dispersion characteristic

    USGS Publications Warehouse

    Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.

    2005-01-01

    The evaluation of Rayleigh wave dispersion characteristics is the key step for detecting S-wave velocity structure. By comparing the dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing the S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for testing Rayleigh wave dispersion characteristics and has great application potential for detecting site S-wave velocity structure.

  10. Contemporary ice-elevation changes on central Chilean glaciers using SRTM1 and high-resolution DEMs

    NASA Astrophysics Data System (ADS)

    Vivero, Sebastian; MacDonell, Shelley

    2016-04-01

    Glaciers located in central Chile have undergone significant retreat in recent decades. Whilst studies have evaluated the area loss of several glaciers, there are no detailed studies of volume losses. This lack of information not only restricts estimations of current and future contributions to sea level rise, but has also limited the evaluation of freshwater resource availability in the region. Recently, the Chilean Water Directorate has supported the collection of field and remotely sensed data in the region, which has enabled glacier changes to be evaluated in greater detail. This study aims to compare high-resolution laser scanning DEMs acquired by the Chilean Water Directorate in April 2015 with the recently released SRTM 1 arc-second DEM (˜30 m) acquired in February 2000 to calculate geodetic mass balance changes for three glaciers in a catchment in central Chile over a 15-year period. Detailed analysis of the SRTM and laser scanning DEMs, together with the glacier outlines, enables the quantification of elevation and volume changes. Glacier outlines from February 2000 were obtained using multispectral analysis of a Landsat TM image, whereas outlines from April 2015 were digitised from high-resolution glacier orthophotomosaics. Additionally, we accounted for radar penetration into snow and/or ice by evaluating elevation differences between the SRTM C- and X-bands, as well as mis-registration between the SRTM DEM and the high-resolution DEMs. Over the period all glaciers show similar ice wastage, in the order of 0.03 km3, for the debris-covered and non-covered glaciers. However, whilst on the non-covered glaciers mass loss is largely related to elevation and the addition of surface sediment, on the debris-covered glacier losses are related to the development of thermokarst features. By analysing the DEM in conjunction with Landsat images, we have detected changes in the sediment cover of the non-covered glaciers, which is likely to change the behaviour of the surface mass
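
    Once two DEMs are co-registered and a glacier outline is available, the geodetic volume change reduces to a masked DEM difference; a minimal sketch is given below. The radar-penetration correction and the volume-to-mass density are labelled assumptions and are not values reported in the abstract above.

```python
import numpy as np

def geodetic_mass_balance(dem_2000, dem_2015, glacier_mask, cell_area,
                          radar_penetration=0.0, density=850.0):
    """Volume and mass change from two co-registered DEMs over a glacier mask.

    radar_penetration : assumed bias (m) added to the SRTM surface to account
                        for C-band penetration into snow/ice.
    density           : assumed volume-to-mass conversion (kg m-3); 850 is a
                        commonly used value, not taken from the study above.
    cell_area         : grid cell area (m^2).
    """
    dh = (dem_2015 - (dem_2000 - radar_penetration))[glacier_mask]
    dh = dh[~np.isnan(dh)]
    volume_change = np.sum(dh) * cell_area          # m^3
    mass_change_gt = volume_change * density / 1e12  # Gt
    return volume_change, mass_change_gt
```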

  11. DOE methods for evaluating environmental and waste management samples.

    SciTech Connect

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  12. System and method for evaluating a wire conductor

    DOEpatents

    Panozzo, Edward; Parish, Harold

    2013-10-22

    A method of evaluating an electrically conductive wire segment having an insulated intermediate portion and non-insulated ends includes passing the insulated portion of the wire segment through an electrically conductive brush. According to the method, an electrical potential is established on the brush by a power source. The method also includes determining a value of electrical current that is conducted through the wire segment by the brush when the potential is established on the brush. The method additionally includes comparing the value of electrical current conducted through the wire segment with a predetermined current value to thereby evaluate the wire segment. A system for evaluating an electrically conductive wire segment is also disclosed.

  13. A quantitative method for evaluating alternatives. [aid to decision making

    NASA Technical Reports Server (NTRS)

    Forthofer, M. J.

    1981-01-01

    When faced with choosing between alternatives, people tend to use a number of criteria (often subjective, rather than objective) to decide which is the best alternative for them given their unique situation. The subjectivity inherent in the decision-making process can be reduced by the definition and use of a quantitative method for evaluating alternatives. This type of method can help decision makers achieve a degree of uniformity and completeness in the evaluation process, as well as an increased sensitivity to the factors involved. Additional side effects are better documentation and visibility of the rationale behind the resulting decisions. General guidelines for defining a quantitative method are presented, and a particular method (called 'hierarchical weighted average') is defined and applied to the evaluation of design alternatives for a hypothetical computer system capability.
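
    A minimal sketch of a hierarchical weighted average: each criterion node carries a weight and either a raw score (leaf) or child criteria, and the score of an alternative is the weight-propagated sum. The criteria names, weights and scores in the example are hypothetical, not taken from the report.

```python
def weighted_score(node):
    """Recursively evaluate a criteria tree where every node carries a weight
    and either a raw score (leaf) or a list of child criteria whose weights
    sum to 1 at each level."""
    if "score" in node:
        return node["score"]
    return sum(child["weight"] * weighted_score(child) for child in node["children"])

# Hypothetical evaluation of one design alternative against two top-level criteria.
alternative = {
    "children": [
        {"weight": 0.6, "children": [
            {"weight": 0.5, "score": 8.0},   # throughput
            {"weight": 0.5, "score": 6.0},   # reliability
        ]},
        {"weight": 0.4, "score": 7.0},       # cost
    ]
}
print(weighted_score(alternative))           # 0.6*7.0 + 0.4*7.0 = 7.0
```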

  14. Dissipation consistent fabric tensor definition from DEM to continuum for granular media

    NASA Astrophysics Data System (ADS)

    Li, X. S.; Dafalias, Y. F.

    2015-05-01

    In elastoplastic soil models aimed at capturing the impact of fabric anisotropy, a necessary ingredient is a measure of anisotropic fabric in the form of an evolving tensor. While it is possible to formulate such a fabric tensor based on indirect phenomenological observations at the continuum level, it is more effective and insightful to have the tensor defined first based on direct particle level microstructural observations and subsequently deduce a corresponding continuum definition. A practical means able to provide such observations, at least in the context of fabric evolution mechanisms, is the discrete element method (DEM). Some DEM defined fabric tensors such as the one based on the statistics of interparticle contact normals have already gained widespread acceptance as a quantitative measure of fabric anisotropy among researchers of granular material behavior. On the other hand, a fabric tensor in continuum elastoplastic modeling has been treated as a tensor-valued internal variable whose evolution must be properly linked to physical dissipation. Accordingly, the adaptation of a DEM fabric tensor definition to a continuum constitutive modeling theory must be thermodynamically consistent in regards to dissipation mechanisms. The present paper addresses this issue in detail, brings up possible pitfalls if such consistency is violated and proposes remedies and guidelines for such adaptation within a recently developed Anisotropic Critical State Theory (ACST) for granular materials.
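
    The DEM fabric tensor mentioned above, based on the statistics of interparticle contact normals, is commonly written as F_ij = (1/N) Σ n_i n_j over all contacts; a short sketch of that computation (and of its deviatoric part, which carries the anisotropy) follows. This is the standard contact-normal definition, not necessarily the exact normalization used in the paper.

```python
import numpy as np

def fabric_tensor(contact_normals):
    """Contact-normal fabric tensor F_ij = (1/N) * sum_c n_i n_j.

    contact_normals : (N, 3) array of contact normal vectors, one per
                      inter-particle contact, extracted from a DEM snapshot.
    """
    n = np.array(contact_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # ensure unit vectors
    F = n.T @ n / len(n)
    # The deviatoric part (trace removed) quantifies the anisotropy that a
    # continuum fabric tensor would need to track.
    deviatoric = F - np.trace(F) / 3.0 * np.eye(3)
    return F, deviatoric
```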

  15. Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments

    NASA Technical Reports Server (NTRS)

    Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.

    2008-01-01

    In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we will discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also, planned work to include dielectrophoretic, Van der Waals forces, and advanced mechanical forces into the software will be discussed.

  16. A new method for evaluating wax inhibitors and drag reducers

    SciTech Connect

    Hsu, J.J.C.; Brubaker, J.P.

    1995-12-01

    Conventional wax inhibitor evaluation methods such as the cold finger and laminar flow loop are not adequate or accurate for evaluating wax inhibitors to be used over a wide range of operating temperatures and flow regimes, such as in North Sea subsea transport pipelines. A new method has been developed to simultaneously measure fluid rheology change and wax inhibition and to evaluate wax inhibitors or drag reducers at field operating conditions. Selection criteria have been defined to search for an effective wax inhibitor. The criteria ensure the chemical selected is the most effective one for the specific oil and flow conditions. The operating cost savings from this accurate method are significant. Nine chemical companies joined the project of finding a wax inhibitor for a North Sea prospect. More than twenty wax inhibitors have been tested and evaluated with this new method for several waxy oil fields. The new method provides data on fluid rheology, wax deposition rates and wax inhibition over the operating temperature range, overall average wax inhibition, and the degree of fluid flow improvement. These data are important for evaluating a wax inhibitor or drag reducer. Most of the wax inhibitors tested showed good wax inhibition at high temperatures, but not many chemicals work well at low temperatures. A chemical tested may improve fluid flow behavior at low temperature but not wax deposition. The drag reducers tested did not work well at North Sea seabed temperature.

  17. Application of Bistatic TanDEM-X Interferometry to Measure Lava Flow Volume and Lava Extrusion Rates During the 2012-13 Tolbachik, Kamchatka Fissure Eruption

    NASA Astrophysics Data System (ADS)

    Kubanek, J.; Westerhaus, M.; Heck, B.

    2015-12-01

    Aerial imaging methods are a well-established source for mapping lava flows during eruptions and can serve as a basis for assessing the eruption dynamics and determining the affected area. However, clouds and smoke often prevent optical systems like the Earth Observation Advanced Land Imager (EO-1-ALI, operated by NASA) from mapping lava flows properly, which affects their reliability. Furthermore, the amount of lava extruded during an eruption cannot be determined from optical images, yet it can contribute significantly to assessing the accompanying hazard and risk. One way to monitor active lava flows is to quantify the topographic changes over time using up-to-date high-resolution digital elevation models (DEMs). Whereas photogrammetric methods still fail when clouds and fume obstruct the sight, innovative radar satellite missions have the potential to generate high-resolution DEMs at any time. The bistatic TanDEM-X (TerraSAR-X Add-on for Digital Elevation Measurements) satellite mission enables, for the first time, the repeated generation of high-resolution DEMs from synthetic aperture radar satellite data with reasonable costs and high resolution. The satellite mission consists of the two nearly identical satellites TerraSAR-X and TanDEM-X, which form a large synthetic aperture radar interferometer with adaptable across- and along-track baselines, aiming to generate topographic information globally. In the present study, we apply the TanDEM-X data to study the lava flows that were emplaced during the 2012-13 Tolbachik, Kamchatka fissure eruption. The eruption was composed of very fluid lava flows that effused along a northeast-southwest trending fissure. We used about fifteen bistatic data pairs to generate DEMs prior to, during, and after the eruption. Differencing the DEMs enables mapping the lava flow field at different times, which allows measuring the extruded volume and deriving the changes in lava extrusion over time.

  18. User Experience Evaluation Methods in Product Development (UXEM'09)

    NASA Astrophysics Data System (ADS)

    Roto, Virpi; Väänänen-Vainio-Mattila, Kaisa; Law, Effie; Vermeeren, Arnold

    High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets [1]. Although the term UX originated from industry and is a widely used term also in academia, the tools for managing UX in product development are still inadequate. A prerequisite for designing delightful UX in an industrial setting is to understand both the requirements tied to the pragmatic level of functionality and interaction and the requirements pertaining to the hedonic level of personal human needs, which motivate product use [2]. Understanding these requirements helps managers set UX targets for product development. The next phase in a good user-centered design process is to iteratively design and evaluate prototypes [3]. Evaluation is critical for systematically improving UX. In many approaches to UX, evaluation basically needs to be postponed until the product is fully or at least almost fully functional. However, in an industrial setting, it is very expensive to find the UX failures only at this phase of product development. Thus, product development managers and developers have a strong need to conduct UX evaluation as early as possible, well before all the parts affecting the holistic experience are available. Different types of products require evaluation on different granularity and maturity levels of a prototype. For example, due to its multi-user characteristic, a community service or an enterprise resource planning system requires a broader scope of UX evaluation than a microwave oven or a word processor that is meant for a single user at a time. Before systematic UX evaluation can be taken into practice, practical, lightweight UX evaluation methods suitable for different types of products and different phases of product readiness are needed. A considerable amount of UX research is still about the conceptual frameworks and models for user experience [4]. Besides, applying existing usability evaluation methods (UEMs) without

  19. Precise baseline determination for the TanDEM-X mission

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Moon, Yongjin; Neumayer, Hans; Wermuth, Martin; Montenbruck, Oliver; Jäggi, Adrian

    The TanDEM-X mission will strive to generate a global precise Digital Elevation Model (DEM) by way of bi-static SAR in a close formation of the TerraSAR-X satellite, already launched on June 15, 2007, and the TanDEM-X satellite to be launched in May 2010. Both satellites carry the Tracking, Occultation and Ranging (TOR) payload supplied by the GFZ German Research Centre for Geosciences. The TOR consists of a high-precision dual-frequency GPS receiver, called Integrated GPS Occultation Receiver (IGOR), and a Laser retro-reflector (LRR) for precise orbit determination (POD) and atmospheric sounding. The IGOR is of vital importance for the TanDEM-X mission objectives, as millimeter-level determination of the baseline, or distance between the two spacecraft, is needed to derive meter-level accurate DEMs. Within the TanDEM-X ground segment, GFZ is responsible for the operational provision of precise baselines. For this GFZ uses two software chains: first its Earth Parameter and Orbit System (EPOS) software and second the BERNESE software, for backup purposes and quality control. In a concerted effort, the German Aerospace Center (DLR) also generates precise baselines independently with a dedicated Kalman filter approach realized in its FRNS software. By the example of GRACE, the generation of baselines with millimeter accuracy from on-board GPS data can be validated directly by comparing them to the intersatellite K-band range measurements. The K-band ranges are accurate down to the micrometer level and therefore may be considered as truth. Both TanDEM-X baseline providers are able to generate GRACE baselines with sub-millimeter accuracy. By merging the independent baselines from GFZ and DLR, the accuracy can be increased further. The K-band validation, however, covers solely the along-track component, as the K-band data measure just the distance between the two GRACE satellites. In addition they exhibit an unknown bias which must be modelled in the comparison, so the

  20. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGESBeta

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
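
    A toy sketch of thermodynamic integration for a one-dimensional Gaussian-mean problem: sample power posteriors (likelihood raised to a power beta times the prior) along a ladder of beta values with a simple Metropolis sampler, average the log-likelihood at each beta, and integrate over beta with the trapezoid rule. The model, priors, ladder spacing and sampler settings are all illustrative assumptions, unrelated to the groundwater application in the study.

```python
import numpy as np

def log_like(theta, data, sigma=1.0):
    return (-0.5 * np.sum((data - theta) ** 2) / sigma**2
            - 0.5 * len(data) * np.log(2 * np.pi * sigma**2))

def log_prior(theta, mu0=0.0, tau=10.0):
    return -0.5 * (theta - mu0) ** 2 / tau**2 - 0.5 * np.log(2 * np.pi * tau**2)

def power_posterior_samples(beta, data, n_iter=5000, step=0.5, seed=0):
    """Metropolis sampler for the tempered target prior(theta) * like(theta)^beta."""
    rng = np.random.default_rng(seed)
    theta, trace = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        log_a = (log_prior(prop) + beta * log_like(prop, data)) - \
                (log_prior(theta) + beta * log_like(theta, data))
        if np.log(rng.random()) < log_a:
            theta = prop
        trace.append(theta)
    return np.array(trace[n_iter // 2:])       # discard burn-in

def thermodynamic_log_ml(data, betas=np.linspace(0, 1, 11) ** 3):
    """log marginal likelihood = integral over beta of E_beta[log likelihood]."""
    means = [np.mean([log_like(t, data) for t in power_posterior_samples(b, data)])
             for b in betas]
    return np.trapz(means, betas)

# Example usage with synthetic data drawn from a unit-variance Gaussian.
data = np.random.default_rng(1).normal(1.0, 1.0, size=20)
print(thermodynamic_log_ml(data))
```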

  1. Evaluation of PCR-based beef sexing methods.

    PubMed

    Zeleny, Reinhard; Bernreuther, Alexander; Schimmel, Heinz; Pauwels, Jean

    2002-07-17

    Analysis of the sex of beef meat by fast and reliable molecular methods is an important measure to ensure correct allocation of export refunds, which are considerably higher for male beef meat. Two PCR-based beef sexing methods have been optimized and evaluated. The amelogenin-type method revealed excellent accuracy and robustness, whereas the bovine satellite/Y-chromosome duplex PCR procedure showed more ambiguous results. In addition, an interlaboratory comparison was organized to evaluate currently applied PCR-based sexing methods in European customs laboratories. From a total of 375 samples sent out, only 1 false result was reported (female identified as male). However, differences in the performances of the applied methods became apparent. The collected data contribute to specify technical requirements for a common European beef sexing methodology based on PCR. PMID:12105941

  2. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods in a touch keyboard smartphone using analytic network process (ANP). Evaluation of text entry methods in literature mainly considers speed and accuracy. This study presents an alternative means for selecting text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model of five text entry methods. The decision problem is flexible enough to reflect interdependencies of decision elements that are necessary in describing real-life conditions. Results showed that QWERTY method is more preferred than other text entry methods while arrangement of keys is the most preferred criterion in characterizing a sound method. Sensitivity analysis using simulation of normally distributed random numbers under fairly large perturbation reported the foregoing results reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach in the preference evaluation of text entry methods.

  3. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free surface vortices is very important to establish an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate the occurrence of the GE phenomena accurately. In other words, the onset condition of the GE phenomena in the JSFR is not easily predicted based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD and then GE-related parameters, e.g. gas core (GC) length, are calculated by using the Burgers vortex model. This procedure is efficient for evaluating the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to the lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. Then, the improved GE evaluation method with the turbulent viscosity model is validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths analyzed by the improved method are shorter in comparison to the original method, and give better agreement with the experimental data.

  4. Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment

    NASA Astrophysics Data System (ADS)

    Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc

    2013-04-01

    fracture persistency in order to enhance the possible contribution of rock bridges on the failure surface development. It is believed that the proposed methodology can bring valuable complementary information for rock slope stability analysis in presence of complex fractured system for which classical "Factor of Safety" is difficult to express. References • Harthong B., Scholtès L. & F.V. Donzé, Strength characterization of rock masses, using a coupled DEM-DFN model, Geophysical Journal International, doi: 10.1111/j.1365-246X.2012.05642.x, 2012. • Kozicki J & Donzé FV. YADE-OPEN DEM: an open--source software using a discrete element method to simulate granular material, Engineering Computations, 26(7):786-805, 2009 • Kozicki J, Donzé FV. A new open-source software developed for numerical simulations using discrete modeling methods, Comp. Meth. In Appl. Mech. And Eng. 197:4429-4443, 2008. • Poropat, G.V., New methods for mapping the structure of rock masses. In Proceedings, Explo 2001, Hunter Valley, New South Wales, 28-31 October 2001, pp. 253-260, 2001. • Scholtès, L. & Donzé FV. Modelling progressive failure in fractured rock masses using a 3D discrete element method, International Journal of Rock Mechanics and Mining Sciences, 52:18-30, 2012a. • Scholtès, L. & Donzé, F.-V., DEM model for soft and hard rocks: role of grain interlocking on strength, J. Mech. Phys. Solids, doi: 10.1016/j.jmps.2012.10.005, 2012b. • Sirovision, Commonwealth Scientific and Industrial Research Organisation CSIRO, Siro3D Sirovision 3D Imaging Mapping System Manual Version 4.1, 2010

  5. DEM modelling, vegetation characterization and mapping of aspen parkland rangeland using LIDAR data

    NASA Astrophysics Data System (ADS)

    Su, Guangquan

    Detailed geographic information system (GIS) studies on plant ecology, animal behavior and soil hydrologic characteristics across spatially complex landscapes require an accurate digital elevation model (DEM). Following interpolation of last-return LIDAR data and creation of a LIDAR-derived DEM, a series of 260 points, stratified by vegetation type, slope gradient and off-nadir distance, were ground-truthed using a total laser station, GPS, and 27 interconnected benchmarks. Despite an overall mean accuracy of +2 cm across 8 vegetation types, the DEM had an RMSE (root mean square error) of 1.21 m. DEM elevations were over-estimated within forested areas by an average of 20 cm with an RMSE of 1.05 m, and under-estimated (-12 cm, RMSE = 1.36 m) within grasslands. Vegetation type had the greatest influence on DEM accuracy, while off-nadir distance (P = 0.48) and slope gradient (P = 0.49) did not influence DEM accuracy individually; however, the latter factors did interact (P < 0.10) to affect accuracy. Vegetation spatial structure (i.e., physiognomy), including plant height, cover, and vertical or horizontal heterogeneity, comprises important factors influencing biodiversity. Vegetation overstory and understory were sampled for height, canopy cover, and tree or shrub density within 120 field plots, evenly stratified by vegetation formation (grassland, shrubland, and aspen forest). Results indicated that LIDAR data could be used for estimating the maximum height, cover, and density of both closed and semi-open stands of aspen (P < 0.001). However, LIDAR data could not be used to assess understory (<1.5 m) height within aspen stands, nor grass height and cover. Recognition and mapping of vegetation types are important for rangelands as they provide a basis for the development and evaluation of management policies and actions. In this study, LIDAR data were found to be superior to digital classification schedules in mapping accuracy for aspen forest and grassland, but not shrubland

  6. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classification. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method that is based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  7. An evaluation of methods of analysis for alkylamino-oxomethanesulphates.

    PubMed

    Reddie, R N; Peters, D E

    1979-05-01

    Three possible ways of determining univalent cation salts of alkylamino-oxomethanesulphonic acids, R-NHCOSO3(-)M(+), were examined. Of these methods (gravimetric determination as the urea, as barium sulphate, or by an iodometric method), the iodometric method of estimating the bisulphite liberated from alkylamino-oxomethanesulphonates by decomposition with sodium hydroxide was finally selected and evaluated. Results obtained are in good agreement with theory for R = butyl and R = -(CH2)6-. The iodometric method was equally applicable to polyurethane precursors. Free or excess bisulphite (and accordingly total bisulphite) was determined successfully in the case of the polymeric adduct.

  8. Method of Best Representation for Averages in Data Evaluation

    SciTech Connect

    Birch, M.; Singh, B.

    2014-06-15

    A new method for averaging data for which incomplete information is available is presented. For example, this method would be applicable during data evaluation where only the final outcomes of the experiments and the associated uncertainties are known. This method is based on using the measurements to construct a mean probability density for the data set. This “expected value method” (EVM) is designed to treat asymmetric uncertainties and has distinct advantages over other methods of averaging, including giving a more realistic uncertainty, being robust to outliers and consistent under various representations of the same quantity.
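
    The abstract does not give the construction in detail; the following hedged sketch shows one plausible reading, in which each measurement with asymmetric uncertainties is represented by a split-normal density, the densities are averaged into a mean probability density, and its expectation and spread are reported. The split-normal choice and all names are assumptions, not the published EVM:

```python
# Hedged sketch of an "expected value" style average for measurements with
# asymmetric uncertainties (value, sigma_minus, sigma_plus).
import numpy as np

def split_normal_pdf(t, x, sig_lo, sig_hi):
    # Normalised split-normal density centred at x with left/right scales.
    norm = np.sqrt(2.0 / np.pi) / (sig_lo + sig_hi)
    sig = np.where(t < x, sig_lo, sig_hi)
    return norm * np.exp(-0.5 * ((t - x) / sig) ** 2)

def expected_value_average(measurements):
    """measurements: list of (value, sigma_minus, sigma_plus)."""
    lo = min(x - 6 * s for x, s, _ in measurements)
    hi = max(x + 6 * s for x, _, s in measurements)
    t = np.linspace(lo, hi, 20001)
    dt = t[1] - t[0]
    mean_pdf = np.mean([split_normal_pdf(t, *m) for m in measurements], axis=0)
    mean_pdf /= mean_pdf.sum() * dt              # renormalise numerically
    mu = np.sum(t * mean_pdf) * dt               # expected value of the mean density
    var = np.sum((t - mu) ** 2 * mean_pdf) * dt  # spread of the mean density
    return mu, np.sqrt(var)

# expected_value_average([(10.2, 0.3, 0.8), (9.7, 0.4, 0.4)])
```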

  9. A Method for Evaluating Dynamical Friction in Linear Ball Bearings

    PubMed Central

    Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P.; Mitatha, Somsak

    2010-01-01

    A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer. PMID:22163457
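
    A minimal sketch of the inertial-force idea, assuming (the abstract does not give the geometry) that the two interferometer measurement points lie at known signed positions along the moving part, so that rigid-body kinematics recovers the centre-of-gravity acceleration by linear interpolation:

```python
# Sketch of the force evaluation F = M * a_cg described above.  Positions s1, s2
# of the two measured points relative to the CG are assumed known (not stated
# in the abstract); a rigid moving part is assumed.
def cg_acceleration(a1, a2, s1, s2):
    """a1, a2: accelerations measured at points located at s1, s2 from the CG (m)."""
    return (s2 * a1 - s1 * a2) / (s2 - s1)

def inertial_force(mass, a1, a2, s1, s2):
    return mass * cg_acceleration(a1, a2, s1, s2)

# Example: points 0.05 m either side of the CG
# F = inertial_force(mass=0.80, a1=0.012, a2=0.018, s1=-0.05, s2=+0.05)
```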

  10. The topographic grain concept in DEM-based geomorphometric mapping

    NASA Astrophysics Data System (ADS)

    Józsa, Edina

    2016-04-01

    A common drawback of geomorphological analyses based on digital elevation datasets is the definition of the search window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. The methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search window size is still a limiting factor. A possible way to determine a suitable value for the parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing. By calculating relative relief values with nested neighbourhood matrices it is possible to define a break-point at which the rate of increase of the local relief encountered by the sample drops significantly. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements, so the threshold needs to be set to a relatively large value; on the other hand, the computational requirements and the size of the study sites determine the upper limit for the value. The aim was therefore to create a tool that helps determine the optimal parameter for the r.geomorphon tool. As a result it would be possible to produce more objective and consistent maps while achieving the full efficiency of this mapping technique. For the thorough analysis on the
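
    A hedged sketch of the topographic-grain computation described above: mean relative relief is evaluated for nested square windows of increasing size, and the grain is taken where the growth rate of relief drops markedly. Window sizes, the drop threshold and all names are illustrative assumptions, not the author's GRASS script:

```python
# Minimal topographic-grain sketch: relative relief (max - min elevation) for
# nested windows, with the grain picked where the relief growth rate falls below
# a fraction of its initial rate.  Thresholds are illustrative.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def relief_curve(dem, sizes):
    """Mean relative relief for each square window size (in cells)."""
    return np.array([np.nanmean(maximum_filter(dem, s) - minimum_filter(dem, s))
                     for s in sizes])

def topographic_grain(dem, max_size=99, drop_ratio=0.1):
    sizes = np.arange(3, max_size + 1, 2)        # odd window widths, in cells
    relief = relief_curve(dem, sizes)
    rate = np.gradient(relief, sizes)            # growth of relief with window size
    below = np.where(rate < drop_ratio * rate[0])[0]
    return sizes[below[0]] if below.size else sizes[-1]

# grain = topographic_grain(dem_array)   # candidate search radius for r.geomorphon
```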

  11. Multi-criteria evaluation methods in the production scheduling

    NASA Astrophysics Data System (ADS)

    Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.

    2016-08-01

    The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Among the methods two main groups are specified: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described. The overall procedure of the evaluation process in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The specified HDM decisions are related to creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the searching process, using informal criteria and making final changes in the schedule before implementation. Depending on need, the scheduling process may be completely or partially automated: full automation is possible in the case of a metacriterion-based objective function, whereas if a Pareto set is created the final decision has to be made by the HDM.
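
    A small illustrative sketch of the two evaluation routes discussed above: a distance-based metacriterion (fully automatable) versus extraction of a Pareto set for the HDM. The weighted Euclidean distance-to-ideal and the assumption that all criteria are minimized (e.g. makespan, tardiness) are illustrative choices, not the paper's formulation:

```python
# Illustrative comparison of a metacriterion ranking and a Pareto-set filter
# for schedule evaluation.  All criteria are assumed "lower is better".
import numpy as np

def metacriterion_rank(scores, weights):
    """scores: (n_schedules, n_criteria); returns index of the best schedule."""
    ideal = scores.min(axis=0)
    distance = np.sqrt(((weights * (scores - ideal)) ** 2).sum(axis=1))
    return int(np.argmin(distance))

def pareto_set(scores):
    """Indices of non-dominated schedules, to be presented to the HDM."""
    n = scores.shape[0]
    dominated = [any(np.all(scores[j] <= scores[i]) and np.any(scores[j] < scores[i])
                     for j in range(n) if j != i) for i in range(n)]
    return [i for i in range(n) if not dominated[i]]
```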

  12. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    PubMed Central

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  13. Entwicklungsperspektiven von Social Software und dem Web 2.0

    NASA Astrophysics Data System (ADS)

    Raabe, Alexander

    The article first deals with the current and future use of social software in companies. Following the great success of social software on the web, many companies are beginning to develop their own social software initiatives. The article outlines the currently perceived possible uses of social software in companies, discusses success factors for its introduction, and presents possible paths for the future. After discussing the special case of social software in companies, the global trends and future prospects of Web 2.0 are then presented in their technical, economic and social dimensions. As the main trends discussed make clear, the mass of information digitally available on the web will continue to grow steadily. This raises the question of how it will be possible in the future to improve the quality of information search and knowledge generation. The use of semantic technologies on the web offers a revolutionary way to filter information and to design intelligent, in a sense "understanding" applications. On the way to an intelligent web, the Semantic Web and social software will converge: applications such as semantic wikis, semantic weblogs, lightweight Semantic Web languages such as Microformats, and commercial offerings such as Freebase from Metaweb will be the first signs of a third generation of the web.

  14. Spatial Characterization of Landscapes through Multifractal Analysis of DEM

    PubMed Central

    Aguado, P. L.; Del Monte, J. P.; Moratiel, R.; Tarquis, A. M.

    2014-01-01

    Landscape evolution is driven by abiotic, biotic, and anthropic factors. The interactions among these factors and their influence at different scales create a complex dynamic. Landscapes have been shown to exhibit numerous scaling laws, from Horton's laws to more sophisticated scaling of heights in topography and river network topology. This scaling and multiscaling analysis has the potential to characterise the landscape in terms of the statistical signature of the selected measure. The study zone is a matrix obtained from a digital elevation model (DEM) (10 m × 10 m grid, 1 m height resolution) that corresponds to a region homogeneous with respect to soil characteristics and climatology, known as “Monte El Pardo”, although the water level of a reservoir and the topography play a main role in its organization and evolution. We have investigated whether the multifractal analysis of a DEM shows common features that can be used to reveal the underlying patterns and information associated with the landscape of the DEM mapping, and studied the influence of the water level of the reservoir on the applied analysis. The results show that the use of the multifractal approach with mean absolute gradient data is a useful tool for analysing the topography represented by the DEM. PMID:25177728
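
    A hedged sketch of a box-counting multifractal analysis applied to the absolute-gradient measure of a DEM, in the spirit of the approach above; box sizes, the use of mass exponents tau(q) and all names are illustrative assumptions rather than the authors' exact procedure:

```python
# Box-counting multifractal sketch on a DEM gradient measure (illustrative only).
import numpy as np

def gradient_measure(dem):
    gy, gx = np.gradient(dem.astype(float))
    mu = np.abs(gx) + np.abs(gy)
    return mu / mu.sum()                       # normalised measure over the grid

def mass_exponents(mu, qs, box_sizes):
    """tau(q) from slopes of log(sum_box mu_box^q) vs log(box size)."""
    logs = []
    for s in box_sizes:
        n = min(mu.shape) // s
        boxes = mu[:n*s, :n*s].reshape(n, s, n, s).sum(axis=(1, 3))
        logs.append([np.log(np.sum(boxes[boxes > 0] ** q)) for q in qs])
    logs = np.array(logs)                       # shape (n_sizes, n_q)
    log_eps = np.log(np.array(box_sizes, dtype=float))
    return np.array([np.polyfit(log_eps, logs[:, i], 1)[0] for i in range(len(qs))])

# qs = np.linspace(-5, 5, 21)
# taus = mass_exponents(gradient_measure(dem), qs, [2, 4, 8, 16, 32])
# Generalised dimensions: D_q = tau(q) / (q - 1) for q != 1.
```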

  15. SAR interferometry for DEM generation: wide-area error assessment

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniel; Broquetas, Antoni; Pena, Ramon; Arbiol, Roman; Castillo, Manuel; Pala, Vincenc

    1998-11-01

    The present work consists of the generation of a DEM using ERS satellite interferometric data over a wide area (50 × 50 km), with an error study using a high-accuracy reference DEM and a focus on atmosphere-induced errors. The area is heterogeneous, with flat and rough topography ranging from sea level up to 1200 m in the inland ranges. The ERS image covers a 100 × 100 km² area and has been divided into four quarters to ease the processing. The phase unwrapping algorithm, which is a combination of region-growing and least-squares techniques, successfully handled the rough-topography areas. One quarter of the full scene was geocoded over a local datum ellipsoid to a UTM grid. The resulting DEM was compared to a reference one provided by the Institut Cartografic de Catalunya. Two types of atmospheric error or artifacts were found: a set of very localized spots, up to one phase cycle, which generated ghost hills up to 100 m, and a slow trend effect which added up to 50 m to some areas in the image. Besides the atmospheric errors, the quality of the DEM was assessed. The quantitative error study was carried out locally at several areas with different topography.
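
    For orientation, the link between the two error figures quoted above can be written with the standard repeat-pass InSAR height sensitivity (symbols below are the usual wavelength, slant range, incidence angle and perpendicular baseline, not values taken from the paper):

```latex
\[
  \Delta h \;=\; \frac{\lambda \, R \sin\theta}{4\pi B_\perp}\,\Delta\phi ,
  \qquad
  h_{2\pi} \;=\; \frac{\lambda \, R \sin\theta}{2 B_\perp},
\]
% so a one-cycle atmospheric artifact produces a ghost hill of one altitude of
% ambiguity, roughly 100 m for ERS geometry with a perpendicular baseline near 100 m.
```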

  16. DEMS - a second generation diabetes electronic management system.

    PubMed

    Gorman, C A; Zimmerman, B R; Smith, S A; Dinneen, S F; Knudsen, J B; Holm, D; Jorgensen, B; Bjornsen, S; Planet, K; Hanson, P; Rizza, R A

    2000-06-01

    Diabetes electronic management system (DEMS) is a component-based client/server application, written in Visual C++ and Visual Basic, with the database server running Sybase System 11. DEMS is built entirely with a combination of dynamic link libraries (DLLs) and ActiveX components - the only exception is the DEMS.exe. DEMS is a chronic disease management system for patients with diabetes. It is used at the point of care by all members of the diabetes team including physicians, nurses, dieticians, clinical assistants and educators. The system is designed for maximum clinical efficiency and facilitates appropriately supervised delegation of care. Dispersed clinical sites may be supervised from a central location. The system is designed for ease of navigation; immediate provision of many types of automatically generated reports; quality audits; aids to compliance with good care guidelines; and alerts, advisories, prompts, and warnings that guide the care provider. The system now contains data on over 34000 patients and is in daily use at multiple sites.

  17. Thallium lung-to-heart quantification: three methods of evaluation

    SciTech Connect

    Harler, M.B.; Mahoney, M.; Bartlett, B.; Patel, K.; Turbiner, E.

    1986-12-01

    Lung-to-heart quantification, when used in conjunction with visual assessment of ²⁰¹Tl stress test images, has been found useful in diagnosing cardiac dysfunction. The authors evaluated three methods of quantification in terms of inter- and intraobserver variability and reproducibility. Fifty anterior ²⁰¹Tl stress images were quantified by each of the following methods: Method A (sum region), which involved one region of interest (ROI) in the measurement of pulmonary activity relative to that of the myocardium; Method B (count density), which required two ROIs, the lung-to-heart ratio being dependent on count density; and Method C (maximum pixel), which used the gray scale of the computer to determine the most intense pixels in the lung field and myocardium. Statistical evaluation has shown that the three methods assess clinical data equally well. Method C was found to be most reproducible in terms of inter- and intraobserver variability followed by Methods A and B. Although nearly equivalent in terms of statistics, the three methods possess inherent differences and therefore should not be used interchangeably without conversion factors.

  18. Using Mixed Methods in Health Information Technology Evaluation.

    PubMed

    Sockolow, Paulina; Dowding, Dawn; Randell, Rebecca; Favela, Jesus

    2016-01-01

    With the increasing adoption of interactive systems in healthcare, there is a need to ensure that the benefits of such systems are formally evaluated. Traditionally quantitative research approaches have been used to gather evidence on measurable outcomes of health technology. Qualitative approaches have also been used to analyze how or why particular interventions did or did not work in specific healthcare contexts. Mixed methods research provides a framework for carrying out both quantitative and qualitative approaches within a single research study. In this paper an international group of four informatics scholars illustrate some of the benefits and challenges of using mixed methods in evaluation. The diversity of the research experience provides a broad overview of approaches in combining robust analysis of outcome data with qualitative methods that provide an understanding of the processes through which, and the contexts in which, those outcomes are achieved. This paper discussed the benefits that mixed methods brought to each study. PMID:27332167

  19. A 3D DEM-LBM approach for the assessment of the quick condition for sands

    NASA Astrophysics Data System (ADS)

    Mansouri, M.; Delenne, J.-Y.; El Youssoufi, M. S.; Seridi, A.

    2009-09-01

    We present a 3D numerical model to assess the quick condition (the onset of the boiling phenomenon) in a saturated polydisperse granular material. We use the Discrete Element Method (DEM) to study the evolution of the vertical intergranular stress in a granular sample subjected to an increasing hydraulic gradient. The hydrodynamic forces on the grains of the sample are computed using the Lattice Boltzmann Method (LBM). The principal assumption used is that grains remain at rest until the boiling onset. We show that the obtained critical hydraulic gradient is close to that defined in classical soil mechanics. To cite this article: M. Mansouri et al., C. R. Mecanique 337 (2009).
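
    The "classical soil mechanics" benchmark mentioned above is commonly the critical (boiling) hydraulic gradient, stated here for reference with the usual symbols (not values from the paper):

```latex
\[
  i_c \;=\; \frac{\gamma_{\mathrm{sat}} - \gamma_w}{\gamma_w}
      \;=\; \frac{G_s - 1}{1 + e},
\]
% where \gamma_{sat} is the saturated unit weight of the soil, \gamma_w the unit
% weight of water, G_s the specific gravity of the grains and e the void ratio.
```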

  20. Participatory Training Evaluation Method (PATEM) as a Collaborative Evaluation Capacity Building Strategy

    ERIC Educational Resources Information Center

    Kuzmin, Alexey

    2012-01-01

    This article describes the Participatory Training Evaluation Method (PATEM) for measuring participants' reaction to training. PATEM provides rich information; allows evaluation findings to be documented; becomes an organic part of the training, helping participants process their experience individually and as a group; makes sense to participants; is an…

  1. Cost/benefit analysis method for evaluating coal cleaning alternatives

    SciTech Connect

    Terry, R.

    1981-01-01

    A general analytical method that can be used to evaluate multiple coal cleaning alternatives by comparing their costs and benefits is described. The benefits and costs of coal cleaning are briefly reviewed. To illustrate how the general method may be applied to a given set of conditions, a specific case is presented. Certain non-quantifiable factors that should be considered during the evaluation process are identified.

  2. A quantitative method for visual phantom image quality evaluation

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.; Liu, Xiong; O'Shea, Michael; Toto, Lawrence C.

    2000-04-01

    This work presents an image quality evaluation technique for uniform-background target-object phantom images. The Degradation-Comparison-Threshold (DCT) method involves degrading the image quality of a target-containing region with a blocking process and comparing the resulting image to a similarly degraded target-free region. The threshold degradation needed for 92% correct detection of the target region is the image quality measure of the target. Images of the American College of Radiology (ACR) mammography accreditation program phantom were acquired under varying x-ray conditions on a digital mammography machine. Five observers performed ACR and DCT evaluations of the images. A figure of merit (FOM) of an evaluation method was defined that takes into account measurement noise and the change of the measure as a function of x-ray exposure to the phantom. The FOM of the DCT method was 4.1 times that of the ACR method for the specks, 2.7 times better for the fibers and 1.4 times better for the masses. For the specks, inter-reader correlations on the same image set increased significantly from 87% for the ACR method to 97% for the DCT method. The viewing time per target for the DCT method was 3-5 minutes. The observed greater sensitivity of the DCT method could lead to more precise Quality Control (QC) testing of digital images, which should improve the sensitivity of the QC process to genuine image quality variations. Another benefit of the method is that it can measure the image quality of high-detectability target objects, which is impractical with existing methods.

  3. DOE methods for evaluating environmental and waste management samples

    SciTech Connect

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, “Draft” or “Verified”. “Draft” methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. “Verified” methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  4. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
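
    As a toy illustration of the double-observer idea, using the independent-observer (capture-recapture style) form of the estimator; the paper's exact DO formulation may differ, and all parameter values are invented:

```python
# Toy double-observer simulation: two observers with independent detection
# probabilities, population estimate N_hat = n1 * n2 / m (Lincoln-Petersen form).
import numpy as np

rng = np.random.default_rng(0)

def simulate_do_estimate(N=500, p1=0.4, p2=0.4):
    seen1 = rng.random(N) < p1          # birds detected by observer 1
    seen2 = rng.random(N) < p2          # birds detected by observer 2
    n1, n2 = seen1.sum(), seen2.sum()
    m = np.logical_and(seen1, seen2).sum()
    return np.nan if m == 0 else n1 * n2 / m

# estimates = [simulate_do_estimate() for _ in range(1000)]
# np.nanmean(estimates)  # close to N for moderate detection probabilities
```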

  5. Force Evaluation in the Lattice Boltzmann Method Involving Curved Geometry

    NASA Technical Reports Server (NTRS)

    Mei, Renwei; Yu, Dazhi; Shyy, Wei; Luo, Li-Shi; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The present work investigates two approaches for force evaluation in the lattice Boltzmann equation: the momentum- exchange method and the stress-integration method on the surface of a body. The boundary condition for the particle distribution functions on curved geometries is handled with second order accuracy based on our recent works. The stress-integration method is computationally laborious for two-dimensional flows and in general difficult to implement for three-dimensional flows, while the momentum-exchange method is reliable, accurate, and easy to implement for both two-dimensional and three-dimensional flows. Several test cases are selected to evaluate the present methods, including: (i) two-dimensional pressure-driven channel flow; (ii) two-dimensional uniform flow past a column of cylinders; (iii) two-dimensional flow past a cylinder asymmetrically placed in a channel (with vortex shedding); (iv) three-dimensional pressure-driven flow in a circular pipe; and (v) three-dimensional flow past a sphere. The drag evaluated by using the momentum-exchange method agrees well with the exact or other published results.

  6. Force evaluation in the lattice Boltzmann method involving curved geometry

    NASA Astrophysics Data System (ADS)

    Mei, Renwei; Yu, Dazhi; Shyy, Wei; Luo, Li-Shi

    2002-04-01

    The present work investigates two approaches for force evaluation in the lattice Boltzmann equation: the momentum-exchange method and the stress-integration method on the surface of a body. The boundary condition for the particle distribution functions on curved geometries is handled with second-order accuracy based on our recent works [Mei et al., J. Comput. Phys. 155, 307 (1999); ibid. 161, 680 (2000)]. The stress-integration method is computationally laborious for two-dimensional flows and in general difficult to implement for three-dimensional flows, while the momentum-exchange method is reliable, accurate, and easy to implement for both two-dimensional and three-dimensional flows. Several test cases are selected to evaluate the present methods, including: (i) two-dimensional pressure-driven channel flow; (ii) two-dimensional uniform flow past a column of cylinders; (iii) two-dimensional flow past a cylinder asymmetrically placed in a channel (with vortex shedding); (iv) three-dimensional pressure-driven flow in a circular pipe; and (v) three-dimensional flow past a sphere. The drag evaluated by using the momentum-exchange method agrees well with the exact or other published results.
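
    For reference, the momentum-exchange force evaluation discussed in both records can be summarized, in its simple halfway bounce-back form for a stationary wall (the papers extend this to second-order curved boundaries), as:

```latex
\[
  \mathbf{F} \;=\; \sum_{\text{boundary links}}
    \mathbf{e}_i \left[\, f_i(\mathbf{x}_f, t) + f_{\bar{\imath}}(\mathbf{x}_f, t+\delta t) \,\right],
\]
% where x_f is the fluid node adjacent to the wall, e_i the lattice velocity
% pointing into the wall, \bar{\imath} the opposite direction, and f_i the
% particle distribution functions; the sum gives the momentum transferred to the
% body per time step.
```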

  7. Nondestructive methods for quality evaluation of livestock products.

    PubMed

    Narsaiah, K; Jha, Shyam N

    2012-06-01

    The muscles derived from livestock are highly perishable. Rapid and nondestructive methods are essential for quality assurance of such products. Potential nondestructive methods, which can supplement or replace many of the traditional time-consuming destructive methods, include colour and computer image analysis, NIR spectroscopy, NMRI, electronic nose, ultrasound, X-ray imaging and biosensors. These methods are briefly described and the research work involving them for products derived from livestock is reviewed. These methods will be helpful in rapid screening of large numbers of samples, monitoring distribution networks, quick product recall and enhanced traceability in the value chain of livestock products. With new developments in the areas of basic science related to these methods, colour, image processing, NIR spectroscopy, biosensors and ultrasonic analysis are expected to become widespread and cost effective for large-scale meat quality evaluation in the near future.

  8. [Methods of dosimetry in evaluation of electromagnetic fields' biological action].

    PubMed

    Rubtsova, N B; Perov, S Iu

    2012-01-01

    Theoretical and experimental dosimetry can be used for adequate evaluation of the effects of radiofrequency electromagnetic fields. In view of the tough electromagnetic environment in aircraft, pilots' safety is of particular topicality. The dosimetric evaluation is made from the quantitative characteristics of the EMF interaction with bio-objects, depending on the EM energy absorbed per unit of tissue volume or mass, calculated as the specific absorption rate (SAR) and measured in W/kg. Theoretical dosimetry employs a number of computational methods to determine the EM energy, including the augmented method of boundary conditions, the iterative augmented method of boundary conditions, the method of moments, the generalized multipole method, the finite-element method, the finite-difference time-domain method, and hybrid methods combining several solution schemes for modeling navigation, radiolocation and human systems. Because of the difficulties with experimental SAR estimation, theoretical dosimetry is regarded as the first step in the analysis of in-aircraft exposure conditions and possible bio-effects.
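
    For reference, the standard definition of the specific absorption rate underlying the dosimetric quantities above (general textbook form, not an expression or values taken from the paper):

```latex
\[
  \mathrm{SAR} \;=\; \frac{\sigma \,|\mathbf{E}|^{2}}{\rho}
            \;=\; c\,\left.\frac{\mathrm{d}T}{\mathrm{d}t}\right|_{t=0}
  \quad [\mathrm{W/kg}],
\]
% with \sigma the tissue conductivity (S/m), E the internal electric field (V/m),
% \rho the tissue mass density (kg/m^3) and c the specific heat capacity.
```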

  9. A Safety Index and Method for Flightdeck Evaluation

    NASA Technical Reports Server (NTRS)

    Latorella, Kara A.

    2000-01-01

    If our goal is to improve safety through machine, interface, and training design, then we must define a metric of flightdeck safety that is usable in the design process. Current measures associated with our notions of "good" pilot performance and ultimate safety of flightdeck performance fail to provide an adequate index of safe flightdeck performance for design evaluation purposes. The goal of this research effort is to devise a safety index and method that allows us to evaluate flightdeck performance holistically and in a naturalistic experiment. This paper uses Reason's model of accident causation (1990) as a basis for measuring safety, and proposes a relational database system and method for 1) defining a safety index of flightdeck performance, and 2) evaluating the "safety" afforded by flightdeck performance for the purpose of design iteration. Methodological considerations, limitations, and benefits are discussed as well as extensions to this work.

  10. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
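
    For completeness, the HORRAT ratio referred to above is conventionally computed against the Horwitz-predicted reproducibility RSD; a minimal sketch with the usual constants (standard AOAC usage, not text recovered from the truncated abstract):

```python
# HORRAT = (found among-laboratory RSD) / (Horwitz-predicted RSD), with the
# Horwitz prediction PRSD_R(%) = 2 * C**(-0.1505), C being the analyte
# concentration expressed as a mass fraction.
def horrat(found_rsd_percent, concentration_mass_fraction):
    predicted = 2.0 * concentration_mass_fraction ** (-0.1505)
    return found_rsd_percent / predicted

# horrat(found_rsd_percent=11.0, concentration_mass_fraction=1e-6)  # ~ppm level
```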

  11. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.

  12. METHODS FOR EVALUATING THE SUSTAINABILITY OF GREEN PROCESSES

    EPA Science Inventory

    Methods for Evaluating the Sustainability of Green Processes

    By Raymond L. Smith and Michael A. Gonzalez
    U.S. Environmental Protection Agency
    Office of Research and Development
    26 W. Martin Luther King Dr.
    Cincinnati, OH 45268 USA

    Theme: New Challenges...

  13. Evaluation of Alternative Difference-in-Differences Methods

    ERIC Educational Resources Information Center

    Yu, Bing

    2013-01-01

    Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
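
    A minimal sketch of the standard two-group, two-period DID estimator that the alternative methods in the paper build on (variable names are illustrative; the paper's alternative estimators are not reproduced here):

```python
# Standard DID: effect = (treated_post - treated_pre) - (control_post - control_pre)
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Each argument is an array of outcomes for one group in one period."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

# Equivalent regression form: y = b0 + b1*treated + b2*post + b3*treated*post,
# where b3 is the DID effect.  The estimate is unbiased only if the
# parallel-trends assumption (the confound the abstract warns about) holds.
```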

  14. Methods of Evaluating Child Welfare in Indian Country: An Illustration

    ERIC Educational Resources Information Center

    Fox, Kathleen; Cross, Terry L.; John, Laura; Carter, Patricia; Pavkov, Thomas; Wang, Ching-Tung; Diaz, Javier

    2011-01-01

    The poor quality and quantity of data collected in tribal communities today reflects a lack of true community participation and commitment. This is especially problematic for evaluation studies, in which the needs and desires of the community should be the central focus. This challenge can be met by emphasizing indigenous methods and voice. The…

  15. EVALUATION OF TWO METHODS FOR PREDICTION OF BIOACCUMULATION FACTORS

    EPA Science Inventory

    Two methods for deriving bioaccumulation factors (BAFs) used by the U.S. Environmental Protection Agency (EPA) in development of water quality criteria were evaluated using polychlorinated biphenyls (PCB) data from the Hudson River and Green Bay ecosystems. Greater than 90% of th...

  16. Program Evaluation of the Sustainability of Teaching Methods

    ERIC Educational Resources Information Center

    Bray, Cathy

    2008-01-01

    This paper suggests a particular question that higher education researchers might ask: "Do educational programs use teaching methods that are environmentally, socially and economically sustainable?" It further proposes that program evaluation research (PER) can be used to answer the question. Consideration is given to: a) program evaluation…

  17. Holistic Evaluation of Lightweight Operating Systems using the PERCU Method

    SciTech Connect

    Kramer, William T.C.; He, Yun; Carter, Jonathan; Glenski, Joseph; Rippe, Lynn; Cardo, Nicholas

    2008-05-01

    The scale of Leadership Class Systems presents unique challenges to the features and performance of operating system services. This paper reports results of comprehensive evaluations of two Light Weight Operating Systems (LWOS), Cray's Catamount Virtual Node (CVN) and Linux Environment (CLE) operating systems, on the exact same large-scale hardware. The evaluation was carried out over a 5-month period on NERSC's 19,480 core Cray XT-4, Franklin, using a comprehensive evaluation method that spans Performance, Effectiveness, Reliability, Consistency and Usability criteria for all major subsystems and features. The paper presents the results of the comparison between CVN and CLE, evaluates their relative strengths, and reports observations regarding the world's largest Cray XT-4 as well.

  18. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study. PMID:27119393

  19. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study.

  20. Robust flicker evaluation method for low power adaptive dimming LCDs

    NASA Astrophysics Data System (ADS)

    Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik

    2015-05-01

    This paper describes a robust dimming flicker evaluation method for adaptive dimming algorithms in low power liquid crystal displays (LCDs). Whereas previous methods use sum-of-squared-difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values capture only the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image quality and power consumption.
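
    A hedged sketch of one plausible reading of the mSSD idea: frame-to-frame differential images remove static scene content, so the SSD between the original and dimmed differentials mainly reflects dimming-induced temporal change. The exact definition in the paper may differ, and all names are assumptions:

```python
# Flicker-oriented SSD on differential images (one possible reading, not the
# paper's exact definition).
import numpy as np

def mssd(original_frames, dimmed_frames):
    """Both inputs: arrays of shape (n_frames, height, width) for the same video."""
    d_orig = np.diff(original_frames.astype(float), axis=0)   # scene motion only
    d_dim = np.diff(dimmed_frames.astype(float), axis=0)      # scene motion + flicker
    return float(np.mean((d_dim - d_orig) ** 2))
```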

  1. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled using a normal sliding-friction law. For this reason several studies and theories, derived from field observations, physical reasoning and laboratory experiments, exist that try to explain their high mobility. In order to investigate in more depth some of the processes invoked by these theories, simulations have been run with a new numerical tool called Y-GUI, based on the combined Finite Element-Discrete Element Method (FEM/DEM). The FEM/DEM method is a numerical technique developed by Munjiza et al. (1995) where Discrete Element Method (DEM) algorithms are used to model the interaction between different solids, while Finite Element Method (FEM) principles are used to analyze their deformability, being also able to explicitly simulate sudden loss of cohesion in the material (i.e. brittle failure). In particular, numerical tests have been run inspired by the small-scale experiments of Manzella and Labiouse (2013). They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements that are allowed to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into an ordered geometric structure before failure or chaotically arranged as a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied and highlighted.

  2. A Rapid Usability Evaluation (RUE) Method for Health Information Technology.

    PubMed

    Russ, Alissa L; Baker, Darrell A; Fahner, W Jeffrey; Milligan, Bryce S; Cox, Leeann; Hagg, Heather K; Saleem, Jason J

    2010-11-13

    Usability testing can help generate design ideas to enhance the quality and safety of health information technology. Despite these potential benefits, few healthcare organizations conduct systematic usability testing prior to software implementation. We used a Rapid Usability Evaluation (RUE) method to apply usability testing to software development at a major VA Medical Center. We describe the development of the RUE method, provide two examples of how it was successfully applied, and discuss key insights gained from this work. Clinical informaticists with limited usability training were able to apply RUE to improve software evaluation and elected to continue to use this technique. RUE methods are relatively simple, do not require advanced training or usability software, and should be easy to adopt. Other healthcare organizations may be able to implement RUE to improve software effectiveness, efficiency, and safety.

  3. Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts

    NASA Astrophysics Data System (ADS)

    Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan

    2015-04-01

    bathymetric and topographic data, …) were gathered. Consequently, datasets were first assessed internally for both quality and accuracy, and then externally against the other datasets to ensure consistency and gradual topographic/bathymetric transitioning along the limits of the datasets. The heterogeneous ages of the input data also stress the importance of taking into account the temporal variability of bathymetric features, especially in the active areas (sandbanks, estuaries, channels). Locally, gaps between marine (hydrographic surveys) and terrestrial (topographic LIDAR) data have required the introduction of new methods and tools to handle interpolation. Through these activities the goal is to improve the production line and to enhance the tools and procedures used for the processing, validation and qualification of bathymetric data, for data collection work, for the automation of processing and integration, and for the conception of improved merged bathymetric and topographic DEMs from the collected data. This work is supported by a French ANR program in the frame of "Investissements d'Avenir", under the grant ANR-11-RSNR-00023-01.

  4. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  5. Development of evaluation method for software hazard identification techniques

    SciTech Connect

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-07-01

    This research evaluated currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis; it then determined evaluation indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. By this proposed method, analysts can evaluate various combinations of software hazard identification techniques for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for understanding the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio; their disadvantages are completeness, complexity, and implementation cost. This evaluation method can be a platform for reaching common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change; however, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)

  6. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase the company's productivity and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods are used to evaluate the operators' performance and to rank the operators: Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that both AHP and FAHP identified "outcome" as the most important criterion. The results of the operator performance evaluation showed that the same operator is ranked first by all six methods.
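
    As an illustration of one of the six methods listed above (TOPSIS), a compact sketch with assumed data shapes; the weights, the vector normalisation and the benefit-type treatment of all criteria are illustrative choices, not the study's settings:

```python
# Compact TOPSIS sketch: rank operators by relative closeness to the ideal solution.
import numpy as np

def topsis(scores, weights):
    """scores: (n_operators, n_criteria), all benefit-type; weights sum to 1."""
    norm = scores / np.linalg.norm(scores, axis=0)        # vector normalisation
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # higher closeness = better rank

# closeness = topsis(score_matrix, np.array([0.25, 0.15, 0.15, 0.10, 0.15, 0.20]))
```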

  7. A new method to evaluate human-robot system performance.

    PubMed

    Rodriguez, G; Weisbin, C R

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  8. A novel objective evaluation method for trunk function

    PubMed Central

    Kinoshita, Kazuaki; Hashimoto, Masashi; Ishida, Kazunari; Yoneda, Yuki; Naka, Yuta; Kitanishi, Hideyuki; Oyagi, Hirotaka; Hoshino, Yuichi; Shibanuma, Nao

    2015-01-01

    [Purpose] To investigate whether an objective evaluation method for trunk function, namely the “trunk righting test”, is reproducible and reliable by testing on different observers (from experienced to beginners) and by confirming the test-retest reliability. [Subjects] Five healthy subjects were evaluated in this correlation study. [Methods] A handheld dynamometer was used in the assessments. The motor task was a trunk righting motion by moving the part with the sensor pad 10 cm outward from the original position. During measurement, the posture was held at maximum effort for 5 s. Measurement was repeated three times. Interexaminer reproducibility was examined in two physical therapists with 1 year experience and one physical therapist with 7 years of experience. The measured values were evaluated for reliability by using intraclass correlation coefficients (ICC 1.1) and interclass correlation coefficients (ICC 2.1). [Results] The test-retest reliability ICC 1.1 and ICC 2.1 were all high. The ICC 1.1 was >0.90. The ICC 2.1 was 0.93. [Conclusion] We developed the trunk righting test as a novel objective evaluation method for trunk function. As the study included inexperienced therapists, the results suggest that the trunk righting test could be used in the clinic, independent of the experience of the therapists. PMID:26157279

  9. An IMU Evaluation Method Using a Signal Grafting Scheme.

    PubMed

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-01-01

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%-26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method. PMID:27294932

  10. [Methods of evaluating labor progress in contemporary obstetrics].

    PubMed

    Głuszak, Michał; Fracki, Stanisław; Wielgoś, Mirosław; Wegrzyn, Piotr

    2013-08-01

    Assessment of progress in labor is one of the foremost problems in obstetrics. Obstructed labor increases danger to maternal and fetal life and health, and may be caused by birth canal pathologies, as well as inefficient uterine contractions or failure of cervical dilation. Such obstructions require the use of vacuum extraction, forceps, or a Caesarean section. Operative delivery should be performed only when specifically indicated. Conversely, postponing an operative delivery when the procedure is necessary is detrimental to the neonatal outcome. Therefore, it is advisable to make the decision on the basis of objective, measurable parameters. Methods of evaluating the risk of labor disorders have evolved over the years. Currently ultrasonography is used for fetal biometric measurements and weight estimation. It helps to evaluate the risk of labor disorders. This method, however, is limited by a relatively large measurement error. At present, vaginal examination is still the primary method of evaluating labor progress, although the technique is known to be operator-dependent and poorly reproducible. Recent publications suggest that intrapartum translabial ultrasonography is more accurate and allows for an objective assessment of labor progress. Recent studies have evaluated fetal head engagement based on the following parameters: the angle between the pubic symphysis and fetal head, the distance between the presenting point and the interspinous line, and the fetal head direction in the birth canal. Each of the described parameters allowed for an objective assessment of head engagement, but no advantage of any particular parameter has been revealed so far. PMID:24191505

  11. New feedback detection method for performance evaluation of hearing aids

    NASA Astrophysics Data System (ADS)

    Shin, Mincheol; Wang, Semyung; Bentler, Ruth A.; He, Shuman

    2007-04-01

    A new objective and accurate feedback detection method, the transfer function variation criterion (TVC), has been developed for evaluating the performance of feedback cancellation techniques. The proposed method is able to classify stable, unstable, and sub-oscillatory stages of feedback in hearing aids. The sub-oscillatory stage is defined as a state in which the hearing aid user may perceive distortion of sound quality without the occurrence of oscillation. This detection algorithm focuses on the transfer function variation of hearing aids and the relationship between system stability and feedback oscillation. The transfer functions are obtained off-line using an FIR Wiener filtering algorithm. An anechoic test box is used for exact and reliable evaluation of different hearing aids. The results are listed and compared with those of the conventional power concentration ratio (PCR), which has generally been adopted as a feedback detection measure for the performance evaluation of hearing aids. The possibility of real-time implementation is discussed in terms of a more convenient and exact performance evaluation of feedback cancellation techniques.

  12. An IMU Evaluation Method Using a Signal Grafting Scheme

    PubMed Central

    Niu, Xiaoji; Wang, Qiang; Li, You; Zhang, Quan; Jiang, Peng

    2016-01-01

    As various inertial measurement units (IMUs) from different manufacturers appear every year, it is not affordable to evaluate every IMU through tests. Therefore, this paper presents an IMU evaluation method by grafting data from the tested IMU to the reference data from a higher-grade IMU. The signal grafting (SG) method has several benefits: (a) only one set of field tests with a higher-grade IMU is needed, and can be used to evaluate numerous IMUs. Thus, SG is effective and economic because all data from the tested IMU is collected in the lab; (b) it is a general approach to compare navigation performances of various IMUs by using the same reference data; and, finally, (c) through SG, one can first evaluate an IMU in the lab, and then decide whether to further test it. Moreover, this paper verified the validity of SG to both medium- and low-grade IMUs, and presents and compared two SG strategies, i.e., the basic-error strategy and the full-error strategy. SG provided results similar to field tests, with a difference of under 5% and 19.4%–26.7% for tested tactical-grade and MEMS IMUs. Meanwhile, it was found that dynamic IMU errors were essential to guarantee the effect of the SG method. PMID:27294932
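
    A hedged sketch of a "basic-error" grafting step as described above: error terms characterised for the tested IMU in the lab are added onto field data from the higher-grade reference IMU before both are run through the same navigation processing. The simple bias-plus-white-noise model and all names are assumptions; the paper's SG error models are richer:

```python
# Illustrative "basic-error" signal grafting: reference IMU field data plus
# lab-characterised errors of the tested IMU.
import numpy as np

rng = np.random.default_rng(1)

def graft_basic_errors(ref_gyro, ref_accel, gyro_bias, accel_bias,
                       gyro_noise_std, accel_noise_std):
    """ref_gyro, ref_accel: (n_samples, 3) reference IMU data; biases/stds from lab tests."""
    g = ref_gyro + gyro_bias + rng.normal(0.0, gyro_noise_std, ref_gyro.shape)
    a = ref_accel + accel_bias + rng.normal(0.0, accel_noise_std, ref_accel.shape)
    return g, a   # feed to the same INS/GNSS processing as the reference data
```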

  13. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    PubMed

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, in this study we proposed a new evaluation method based on chaos nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction was applied under the subjects' feet, supplied by a motion platform. In addition, three acceleration sensors, fixed to the shoulder, hip and knee of each subject, were used to capture the dynamic data of balance adjustment. By reconstructing the system phase space, we calculated the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then used the sum of the squares of the differences between the LLEs (SSDLLE) as the balance capability evaluation index. Finally, 20 subjects' indexes were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment, and that it could measure the body's balance ability to some extent. Moreover, the results also illustrated that balance level is determined by the coordination ability of the various joints, and that there might be more than one balance control strategy in the process of maintaining balance. PMID:27079089
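
    One reading of the index defined above, written out for the three measured segments; the pairing convention is an assumption, since the abstract does not spell it out:

```latex
\[
  \mathrm{SSDLLE} \;=\; (\lambda_{\mathrm{shoulder}}-\lambda_{\mathrm{hip}})^{2}
    + (\lambda_{\mathrm{hip}}-\lambda_{\mathrm{knee}})^{2}
    + (\lambda_{\mathrm{shoulder}}-\lambda_{\mathrm{knee}})^{2},
\]
% where each \lambda is the largest Lyapunov exponent estimated from that
% segment's acceleration data; smaller values suggest better inter-segment
% coordination during balance adjustment.
```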

  14. Comparative evaluation of patellar height methods in the Brazilian population

    PubMed Central

    Behrendt, Christian; Zaluski, Alexandre; e Albuquerque, Rodrigo Pires; de Sousa, Eduardo Branco; Cavanellas, Naasson

    2015-01-01

    Objective The methods most used for patellar height measurement were compared with the plateau–patella angle method. Methods A cross-sectional study was conducted, in which lateral-view radiographs of the knee were evaluated using the three methods already established in the literature: Insall–Salvati (IS), Blackburne–Peel (BP) and Caton–Deschamps (CD). These were compared with the plateau–patella angle method. One hundred and ninety-six randomly selected patients were included in the sample. Results The data were initially evaluated using the chi-square test. This analysis was deemed to be positive with p < 0.0001. We compared the traditional methods with the plateau–patella angle measurement, using Fisher's exact test. In comparing the IS index with the plateau–patella angle, we did not find any statistically significant differences in relation to the proportion of altered cases between the two groups. The traditional methods were compared with the plateau–patella angle with regard to the proportions of cases of high and low patella, by means of Fisher's exact test. This analysis showed that the plateau–patella angle identified fewer cases of high patella than did the IS, BP and CD methods, but more cases of low patella. In comparing pairs, we found that the IS and CD indices were capable of identifying more cases of high patella than was the plateau–patella angle. In relation to the cases of low patella, the plateau–patella angle was capable of identifying more cases than were the other three methods. Conclusions The plateau–patella angle found more patients with low patella than did the classical methods and showed results that diverged from those of the other indices studied. PMID:26962492

  15. A novel method to evaluate spin diffusion length of Pt

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-qing; Sun, Niu-yi; Che, Wen-ru; Shan, Rong; Zhu, Zhen-gang

    2016-05-01

    The spin diffusion length of Pt is evaluated via the proximity effect of spin-orbit coupling (SOC) and the anomalous Hall effect (AHE) in Pt/Co2FeAl bilayers. By varying the thicknesses of the Pt and Co2FeAl layers, the thickness dependences of the AHE parameters can be obtained; these are theoretically predicted to be proportional to the square of the SOC strength. According to the physical picture of the SOC proximity effect, the spin diffusion length of Pt can easily be identified from these thickness dependences. This work provides a novel method to evaluate the spin diffusion length in a material in which it is small.

  16. Retractions of the gingival margins evaluated by holographic methods

    NASA Astrophysics Data System (ADS)

    Sinescu, Cosmin; Negrutiu, Meda Lavinia; Manole, Marius; de Sabata, Aldo; Rusu, Laura-Cristina; Stratul, Stefan; Dudea, Diana; Dughir, Ciprian; Duma, Virgil-Florin

    2015-05-01

    Periodontal disease is one of the most common pathological conditions of the teeth and gum system. The issue is that its evaluation is subjective, i.e. it is based on the skills of the dental practitioner. As for any clinical condition, a quantitative evaluation and monitoring in time of the retraction of the gingival margins is desired. This phenomenon was evaluated in this study with a holographic method using a He-Ne laser with a power of 13 mW. The holographic system we have utilized - adapted for dentistry applications - is described. Several patients were considered in a comparative study of their state of health regarding the oral cavity. The impressions of the maxillary dental arch were taken from a patient during his/her first visit and after a period of six months. The hologram of the first model was superposed on the model cast after the second visit. The retractions of the gingival margins could thus be evaluated three-dimensionally at every point of interest. An evaluation of the retraction has been made in this way, and conclusions can be drawn for the clinical evaluation of the health of the teeth and gum system of each patient.

  17. The Hunt for the O'Connell Effect

    NASA Astrophysics Data System (ADS)

    Reichmann, Norbert

    2013-03-01

    In the present paper, I focus on the O'Connell effect of the W UMa variable V502 Cyg, with the main aim of showing it in the light curve. 166 observations were collected in the V and B bands (100 and 66 measurements, respectively) from my private observatory in Kästenberg, Austria (Ossiacher Tauern), at an elevation of 890 m. All data were acquired with an Apo 130/1200 and an Apogee Alta U16M CCD camera. Photometric colour-band and narrowband data were collected simultaneously and evaluated. I have termed the combination of photometric data with deep-sky imaging data "pretty-picture photometry". This combination of photometric measurements with colour and narrowband data is presented here for V502 Cyg in its surrounding deep-sky field. Norbert Reichmann is a member of the BAV.

  18. Initial Results of an MDO Method Evaluation Study

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the iSIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date. In particular, although the results are not conclusive because of the small initial test set, they highlight the distinction between MDO formulations and optimization algorithms: a formulation can be analyzed in terms of its equivalence to other formulations, its optimality conditions, and the sensitivity of its solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  19. Evaluation of three 3D US calibration methods

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Kaar, Marcus; Hoffmann, Rainer; Bhatia, Amon; Birkfellner, Wolfgang; Figl, Michael

    2013-03-01

    With the introduction of 3D US imaging devices, the demand for accurate and fast 3D calibration methods arose. We implemented three different calibration methods and compared the calibration results in terms of fiducial registration error (FRE) and target registration error (TRE). The three calibration methods comprised a multi-point phantom (MP), a feature-based model (FM) and a membrane model (MM). For the multi-point (sphere) phantom, a simple point-to-point registration was applied. For the feature-based model we employed a phantom consisting of spheres, pyramids and cones. These objects were imaged from different angles and a 3D-3D registration was applied to all possible image combinations. The last method was accomplished by imaging a simple membrane, which allows calculation of the calibration matrix. For a first evaluation we computed the FRE for each method. To assess the calibration success on real patient data we used ten 3D-3D registrations between images of the prostate. The FRE amounted to 1.40 mm for the multi-point phantom, 1.05 mm for the feature-based model and 1.12 mm for the membrane method. The deviations arising from the ten 3D-3D patient registrations were 3.44 mm (MP), 2.93 mm (FM) and 2.84 mm (MM). The MM proved to be the most accurate of the evaluated procedures, while the MP showed significantly higher errors. The results from the FM were close to those from the MM and also significantly better than those from the MP. No significant difference was detected between the FM and the MM.
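
    A textbook sketch, not the authors' implementation, of the point-to-point step used for the multi-point phantom: a rigid (Kabsch/SVD) registration between phantom and image fiducials, followed by the fiducial registration error.

    ```python
    # Rigid point-to-point registration and FRE for corresponding 3D fiducials.
    import numpy as np

    def rigid_register(src, dst):
        """Find R, t minimizing ||R @ src_i + t - dst_i||; src, dst: (N, 3) arrays."""
        src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    def fre(src, dst, R, t):
        """Root-mean-square distance between registered fiducial pairs."""
        residuals = (R @ src.T).T + t - dst
        return np.sqrt((residuals ** 2).sum(axis=1).mean())
    ```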

  20. Comparison between evaluation methods from sun protection factors.

    PubMed

    Martini, M C

    1986-10-01

    Objective methods for evaluation of Sun Protection Factors (SPF) are numerous. Only the most widely used methods, both in vitro and in vivo, are described here. The results obtained with different types of spectrophotometric methods (solution, thin layer over quartz slides, or measurement of transmittance and diffusion after coating the stratum corneum with emulsions) show that only the last method, which involves an integrating sphere, is able to give data in good correlation with in vivo sun protection factors. Among in vivo methods, the animal of choice is the albino guinea pig, because of its sensitivity and erythematous reactions similar to those of human skin. Nevertheless, this method is only reliable for product screening, and true SPF values must be determined on humans. Two official methods, the American (FDA) and the German (DIN 67501), are described with their advantages and disadvantages. Finally, a new method which is a combination of these two methods is proposed. Twenty people are irradiated by a xenon lamp which emits about 0.60 mW/cm² of UVB and 3.5 mW/cm² of UVA and IR, sufficient to maintain a skin surface temperature of 35 °C. The product is applied on the back of the volunteers in a quantity of 1 mg/cm². Test zones have a surface of 2.25 cm². Irradiation begins 10 min after application of the product and the exposure times are increased from zone to zone following a geometric progression with a ratio of 1.25. Two standard preparations are used, one with SPF = 4, the other with SPF = 9-10. Erythema is evaluated visually 16 to 24 h after irradiation. Each SPF is determined using the classical ratio of MED with sunscreen to MED without sunscreen, and the geometric mean is calculated to obtain the definitive SPF value. PMID:19457219
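
    A small sketch of the arithmetic described above: exposure times increasing geometrically with ratio 1.25, a per-subject SPF as the ratio of MEDs, and the geometric mean over the panel. All numbers in the example are hypothetical.

    ```python
    # SPF computation sketch: geometric exposure progression and geometric mean.
    import numpy as np

    def exposure_times(t0, n_zones, ratio=1.25):
        """Exposure time for each test zone, starting from t0 (e.g. minutes)."""
        return [t0 * ratio ** k for k in range(n_zones)]

    def panel_spf(med_with, med_without):
        """Geometric mean of per-subject SPF = MED(with) / MED(without)."""
        spf = np.asarray(med_with, float) / np.asarray(med_without, float)
        return np.exp(np.mean(np.log(spf)))

    print(exposure_times(10.0, 6))                 # illustrative zone times
    print(panel_spf([80, 95, 70], [10, 11, 9]))    # hypothetical MED values
    ```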

  1. [New methods for the ambulatory evaluation of female infertility].

    PubMed

    Török, Péter; Major, Tamás

    2013-08-18

    The incidence of infertility has increased in recent years, and it affects 15% of couples. Female and male factors are responsible in 40% and 40% of cases, respectively, while factors present in both partners are found in 20% of cases. Female factors can be further divided into organic and functional ones. The function of the female organs can be evaluated in an outpatient setting by well-developed laboratory techniques, but evaluation of the uterine cavity and inspection of tubal patency have traditionally been carried out in one-day surgery. However, the latter can be performed in an ambulatory setting with the use of office hysteroscopy, so that the use of an operating theatre and the associated staff costs can be saved. Using selective pertubation to evaluate tubal patency via office hysteroscopy can reduce costs further. The new methods in the infertility workup that can be performed in an ambulatory setting have several advantages for the patients.

  2. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the complexity of the dependences among aspects and criteria and the linguistic vagueness of some qualitative information mixed with quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the interdependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the interdependent aspects and criteria into an intelligible structural model used by IPA. For the empirical case study, four interdependent aspects and 34 green performance criteria of PCB firms in Taiwan were evaluated. The managerial implications are discussed. PMID:20571885

  3. Evaluation of therapeutic pulmonary surfactants by thin liquid film methods.

    PubMed

    Todorov, Roumen; Exerowa, Dotchi; Platikanov, Dimo; Bianco, Federico; Razzetti, Roberta

    2015-08-01

    An example of the application of the Black Foam Film (BFF) Method and the Wetting Film Method, using the Microinterferometric and Pressure Balance Techniques, for characterizing the interfacial properties of animal-derived therapeutic pulmonary surfactant preparations (TSP) is presented. BFF thickness, the probability of black film formation, and the disjoining pressure of foam films from TSP aqueous solutions are measured, and the wetting properties of TSP solutions on solid surfaces of different hydrophobicity are studied. Interfacial characteristics such as the minimal surfactant concentration needed to obtain a black film (critical concentration) and the concentration at which a black film is obtained in 100% of cases (threshold concentration) are determined. An evaluation of four widely used TSPs – Curosurf, Infasurf, Survanta, and Alveofact – by these methods has been carried out. Thus the methods of thin liquid films are useful tools for studying the interfacial properties of TSP solutions, as well as for their improvement.

  4. Laboratory-scale evaluations of alternative plutonium precipitation methods

    SciTech Connect

    Martella, L.L.; Saba, M.T.; Campbell, G.K.

    1984-02-08

    Plutonium(III), (IV), and (VI) carbonate; plutonium(III) fluoride; plutonium(III) and (IV) oxalate; and plutonium(IV) and (VI) hydroxide precipitation methods were evaluated for conversion of plutonium nitrate anion-exchange eluate to a solid, and compared with the current plutonium peroxide precipitation method used at Rocky Flats. Plutonium(III) and (IV) oxalate, plutonium(III) fluoride, and plutonium(IV) hydroxide precipitations were the most effective of the alternative conversion methods tested because of the larger particle-size formation, faster filtration rates, and the low plutonium loss to the filtrate. These were found to be as efficient as, and in some cases more efficient than, the peroxide method. 18 references, 14 figures, 3 tables.

  5. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  6. A Method for Evaluating Volt-VAR Optimization Field Demonstrations

    SciTech Connect

    Schneider, Kevin P.; Weaver, T. F.

    2014-08-31

    In a regulated business environment, a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. For traditional technologies there are well-established procedures for determining what benefits will be derived from a deployment. But for many emerging technologies, procedures for determining benefits are less clear and in some cases completely absent. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about potential benefits and how they are achieved. This paper presents a method for the evaluation and quantification of benefits for field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.

  7. Evaluation of preferred lightness rescaling methods for colour reproduction

    NASA Astrophysics Data System (ADS)

    Chang, Yerin

    2012-01-01

    In cross-media colour reproduction, a common goal is achieving media-relative reproduction. Following the ICC specification, this is often accomplished by linearly scaling XYZ data so that the media white of the source data matches that of the destination data. However, in this approach the media black points are not explicitly aligned. To compensate for this problem, it is common to apply a black point compensation (BPC) procedure to improve the mapping of the black points. First, three lightness rescaling methods were chosen: linear, sigmoidal, and spline. CIECAM02 was also implemented as a lightness rescaling approach; the lightness values produced by CIECAM02 were simply treated as the reproduced lightness values of the output image. The five methods above were applied to a chosen image set. A paired-comparison psychophysical experiment was performed to evaluate the performance of the lightness rescaling methods. For most images, the Adobe BPC, linear, and spline lightness rescaling methods were preferred over the CIECAM02 and sigmoidal methods. The confidence interval for a single image set is +/-0.36; with this confidence interval it is difficult to conclude that the Adobe BPC method works significantly better. However, for the overall results, as every single observation is independent of the others, the result can be presented with a confidence interval of +/-0.0763. Based on the overall result, the Adobe BPC method performs best.
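
    A minimal sketch of a linear lightness rescaling in the spirit of black point compensation: source lightness is mapped so that the source black and white points land on the destination black and white points. This is a generic illustration of the linear method, not the ICC/Adobe BPC algorithm itself (which operates on XYZ).

    ```python
    # Linear lightness (L*) rescaling between source and destination media.
    import numpy as np

    def linear_lightness_rescale(L, src_black, src_white, dst_black, dst_white):
        """L: CIELAB L* values of the source image."""
        L = np.asarray(L, dtype=float)
        scale = (dst_white - dst_black) / (src_white - src_black)
        return dst_black + (L - src_black) * scale

    # Example: source medium spans L* 2..97, destination print spans L* 8..93.
    print(linear_lightness_rescale([2, 50, 97], 2, 97, 8, 93))
    ```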

  8. Roles and methods of performance evaluation of hospital academic leadership.

    PubMed

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, together with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership from a strategic management perspective, including the various tools, methods and mechanisms used in the theory and practice of performance evaluation and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference. PMID:27061556

  9. Semi-automatic method for routine evaluation of fibrinolytic components.

    PubMed

    Collen, D; Tytgat, G; Verstraete, M

    1968-11-01

    A semi-automatic method for the routine evaluation of fibrinolytic activity is described. The principle is based upon graphic recording, by a multichannel voltmeter, of voltage drops across a potentiometer, caused by variations in the influence of light upon a light-dependent resistance, resulting from modifications in the composition of the fibrin fibres by lysis. The method is applied to the assessment of certain fibrinolytic factors with widespread fibrinolytic endpoints, and the results are compared with simultaneously obtained visual data on the plasmin assay, the plasminogen assay, and the euglobulin clot lysis time.

  10. Surface oximetry. A new method to evaluate intestinal perfusion.

    PubMed

    Ferrara, J J; Dyess, D L; Lasecki, M; Kinsey, S; Donnell, C; Jurkovich, G J

    1988-01-01

    Accepted methods to evaluate intestinal vascularity intraoperatively include standard clinical criteria (SCC), doppler ultrasound (DUS), and intravenous fluorescein (FLF). A combination of methods is often used to overcome disadvantages of individual techniques. Assessment of intestinal vascularity by FLF was compared to SCC, DUS, and pulse oximetry (POX) in segments of intestine demonstrating arterial, venous and arteriovenous occlusion, to determine if POX might supplement the assessment of intestinal vascularity. POX uses a commercially available instrument to assess tissue oxygenation and arterial flow, and is rapid, reproducible, and noninvasive. POX appears to be a superior technique when compared to SCC and DUS.

  11. Roles and methods of performance evaluation of hospital academic leadership.

    PubMed

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, together with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership from a strategic management perspective, including the various tools, methods and mechanisms used in the theory and practice of performance evaluation and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference.

  12. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20°C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  13. Comparing three methods for evaluating impact wrench vibration emissions.

    PubMed

    McDowell, Thomas W; Marcotte, Pierre; Warren, Cristopher; Welcome, Daniel E; Dong, Ren G

    2009-08-01

    To provide a means for comparing impact wrenches and similar tools, the international standard ISO 8662-7 prescribes a method for measuring the vibrations at the handles of tools during their operations against a cotton-phenolic braking device. To improve the standard, alternative loading mechanisms have been proposed; one device comprises aluminum blocks with friction brake linings, while another features plate-mounted bolts to provide the tool load. The objective of this study was to evaluate these three loading methods so that tool evaluators can select appropriate loading devices in order to obtain results that can be applied to their specific workplace operations. Six experienced tool operators used five tool models to evaluate the loading mechanisms. The results of this study indicate that different loads can yield different tool comparison results. However, any of the three devices appears to be adequate for initial tool screenings. On the other hand, vibration emissions measured in the laboratory are unlikely to be fully representative of those in the workplace. Therefore, for final tool selections and for reliably assessing workplace vibration exposures, vibration measurements should be collected under actual working conditions. Evaluators need to use appropriate numbers of tools and tool operators in their assessments; recommendations are provided.

  14. A method of thymic perfusion and its evaluation

    PubMed Central

    Ekwueme, O.

    1973-01-01

    The development and evaluation of a method of isolated ex vivo perfusion of the rabbit thymus using diluted autologous blood is described. The data indicate that the viability of the preparation is maintained at a satisfactory level during the period of perfusion. These results suggest that the isolated perfused thymus would be a useful new approach to studies of thymus function. PMID:4747584

  15. SBS vs Inhouse Recycling Methods-An Invitro Evaluation

    PubMed Central

    Verma, Jaya Krishanan; Arun; Sundari, Shanta; Chandrasekhar, Shyamala; Kumar, Aravind

    2015-01-01

    Introduction In today's world of economic crisis it is not feasible for an orthodontist to replace each and every debonded bracket with a new one, which drives the quest for an alternative. The concept of recycling brackets for reuse has evolved over time. Orthodontists can send brackets to various commercial recycling companies, but this is impractical, as these are complex procedures that require time, and using a new bracket would often seem more feasible. Thereby, in-house methods have been developed. The aim of the study was to determine the shear bond strength (SBS) and to compare and evaluate the efficiency of in-house recycling methods against the SBS of new brackets. Materials and Methods Five in-house recycling procedures - the adhesive grinding method, sandblasting method, thermal flaming method, Buchman method and acid bath method - were used in the present study. The initial part of the study used a UV/Vis spectrophotometer, in which the absorbance of the base of a new stainless steel bracket was compared with that of the base of a recycled bracket. The difference in UV absorbance can be attributed to the presence of adhesive remnant. For each recycling procedure the difference in UV absorption was calculated. New stainless steel brackets and recycled brackets were then tested for shear bond strength with an Instron testing machine. Comparisons were made between the shear bond strength of new brackets and that of recycled brackets. The last part of the study involved correlating the findings of the UV/Vis spectrophotometer with the shear bond strength for each recycling procedure. Results Among the recycled brackets, the sandblasting technique showed the highest shear bond strength (19.789 MPa) and the adhesive grinding method the lowest (13.809 MPa). Conclusion The study concludes that sandblasting can be an effective choice among the five in-house recycling methods. PMID:26501002

  16. Best estimate method versus evaluation method: a comparison of two techniques in evaluating seismic analysis and design. Technical report

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-07-01

    The concept of how two techniques, the Best Estimate Method and the Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on the part of the seismic analysis and design that is familiar to the engineering community. An example applies the effects of three-dimensional excitations to a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method versus the Evaluation Method is also demonstrated.

  17. BOREAS Regional DEM in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Knapp, David; Verdin, Kristine; Hall, Forrest G. (Editor)

    2000-01-01

    This data set is based on the GTOPO30 Digital Elevation Model (DEM) produced by the United States Geological Survey EROS Data Center (USGS EDC). The BOReal Ecosystem-Atmosphere Study (BOREAS) region (1,000 km x 1,000 km) was extracted from the GTOPO30 data and reprojected by BOREAS staff into the Albers Equal-Area Conic (AEAC) projection. The pixel size of these data is 1 km. The data are stored in binary, image format files.
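
    A sketch of how such a reprojection to an Albers Equal-Area Conic grid with 1-km pixels could be reproduced today with rasterio. The file names and projection parameters (standard parallels, central meridian, datum) are placeholder assumptions, not the actual BOREAS AEAC definition.

    ```python
    # Reproject a GTOPO30-style DEM subset to an assumed AEAC grid at 1-km pixels.
    import rasterio
    from rasterio.warp import calculate_default_transform, reproject, Resampling

    dst_crs = "+proj=aea +lat_1=50 +lat_2=58 +lat_0=0 +lon_0=-100 +datum=WGS84"  # assumed parameters

    with rasterio.open("gtopo30_subset.tif") as src:
        transform, width, height = calculate_default_transform(
            src.crs, dst_crs, src.width, src.height, *src.bounds, resolution=1000.0)
        profile = src.profile.copy()
        profile.update(crs=dst_crs, transform=transform, width=width, height=height)
        with rasterio.open("boreas_dem_aeac.tif", "w", **profile) as dst:
            reproject(source=rasterio.band(src, 1), destination=rasterio.band(dst, 1),
                      src_transform=src.transform, src_crs=src.crs,
                      dst_transform=transform, dst_crs=dst_crs,
                      resampling=Resampling.bilinear)
    ```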

  18. Proper evaluation of alignment-free network comparison methods

    PubMed Central

    Milenković, Tijana; Pržulj, Nataša

    2015-01-01

    Motivation: Network comparison is a computationally intractable problem with important applications in systems biology and other domains. A key challenge is to properly quantify similarity between the wiring patterns of two networks in an alignment-free fashion. (Alignment-based methods, which aim to identify an actual node mapping between networks, serve a different purpose.) Various alignment-free methods that use different global network properties (e.g. degree distribution) have been proposed. Methods based on small local subgraphs called graphlets perform best in the alignment-free network comparison task, due to the high level of topological detail that graphlets can capture. Among different graphlet-based methods, Graphlet Correlation Distance (GCD) was shown to be the most accurate for comparing networks. Recently, a new graphlet-based method called NetDis was proposed and claimed to be superior. We argue against this, as the performance of NetDis was not properly evaluated to position it correctly among the other alignment-free methods. Results: We evaluate the performance of available alignment-free network comparison methods, including GCD and NetDis. We do this by measuring the accuracy of each method (in a systematic precision-recall framework) in terms of how well the method can group (cluster) topologically similar networks. By testing this on both synthetic and real-world networks from different domains, we show that GCD remains the most accurate, noise-tolerant and computationally efficient alignment-free method. That is, we show that NetDis does not outperform the other methods, as originally claimed, while it is also computationally more expensive. Furthermore, since NetDis is dependent on the choice of a network null model (unlike the other graphlet-based methods), we show that its performance is highly sensitive to the choice of this parameter. Finally, we find that its performance is not independent of network size and

  19. SDO-AIA DEM: Initial Results

    NASA Astrophysics Data System (ADS)

    Schmelz, Joan T.

    2011-01-01

    The Atmospheric Imaging Assembly (AIA) aboard the Solar Dynamics Observatory has state-of-the-art spatial resolution and shows the most detailed images of coronal loops ever observed. The series of coronal filters peak at different temperatures, which span the temperature range of active regions. These features represent a significant improvement over earlier coronal imagers and make AIA ideal for multi-thermal analysis. Here we targeted a 171-Å coronal loop in AR 11092 observed by AIA on 2010 August 3. Isothermal analysis using the 171-to-193 ratio gave a temperature of Log T = 6.1, similar to the results of EIT and TRACE. Differential emission measure analysis, however, showed that the plasma was multithermal, not isothermal, with a distribution that peaked between Log T = 6.3 and 6.4. The result from the isothermal analysis, which is the average of the true plasma distribution weighted by the instrument response functions, appears to be deceptively low. These results have potentially serious implications: EIT and TRACE results, which use the same isothermal method, show substantially smaller temperature gradients than predicted by standard models for loops in hydrodynamic equilibrium, and they have been used as strong evidence in support of footpoint heating models. These implications may have to be re-examined in the wake of new results from AIA.

  20. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the root mean square error to standard deviation ratio (RSR) were applied. The error evaluation results show that methods E1, E2, E3, and E4 were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L, with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate, with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration. PMID:25399119
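
    A short sketch of the three error statistics named above, applied to measured versus predicted acidity values; these are standard definitions, not code from the study.

    ```python
    # NSE, R^2 and RSR between measured (obs) and predicted (pred) acidity.
    import numpy as np

    def nse(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def r_squared(obs, pred):
        r = np.corrcoef(np.asarray(obs, float), np.asarray(pred, float))[0, 1]
        return r ** 2

    def rsr(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        rmse = np.sqrt(np.mean((obs - pred) ** 2))
        return rmse / obs.std(ddof=1)
    ```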

  1. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the root mean square error to standard deviation ratio (RSR) were applied. The error evaluation results show that methods E1, E2, E3, and E4 were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L, with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate, with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration.

  2. Evaluation of five decontamination methods for filtering facepiece respirators.

    PubMed

    Viscusi, Dennis J; Bergman, Michael S; Eimer, Benjamin C; Shaffer, Ronald E

    2009-11-01

    Concerns have been raised regarding the availability of National Institute for Occupational Safety and Health (NIOSH)-certified N95 filtering facepiece respirators (FFRs) during an influenza pandemic. One possible strategy to mitigate a respirator shortage is to reuse FFRs following a biological decontamination process to render infectious material on the FFR inactive. However, little data exist on the effects of decontamination methods on respirator integrity and performance. This study evaluated five decontamination methods [ultraviolet germicidal irradiation (UVGI), ethylene oxide, vaporized hydrogen peroxide (VHP), microwave oven irradiation, and bleach] using nine models of NIOSH-certified respirators (three models each of N95 FFRs, surgical N95 respirators, and P100 FFRs) to determine which methods should be considered for future research studies. Following treatment by each decontamination method, the FFRs were evaluated for changes in physical appearance, odor, and laboratory performance (filter aerosol penetration and filter airflow resistance). Additional experiments (dry heat laboratory oven exposures, off-gassing, and FFR hydrophobicity) were subsequently conducted to better understand material properties and possible health risks to the respirator user following decontamination. However, this study did not assess the efficiency of the decontamination methods to inactivate viable microorganisms. Microwave oven irradiation melted samples from two FFR models. The remainder of the FFR samples that had been decontaminated had expected levels of filter aerosol penetration and filter airflow resistance. The scent of bleach remained noticeable following overnight drying and low levels of chlorine gas were found to off-gas from bleach-decontaminated FFRs when rehydrated with deionized water. UVGI, ethylene oxide (EtO), and VHP were found to be the most promising decontamination methods; however, concerns remain about the throughput capabilities for EtO and VHP

  3. Evaluation of Five Decontamination Methods for Filtering Facepiece Respirators

    PubMed Central

    Bergman, Michael S.; Eimer, Benjamin C.; Shaffer, Ronald E.

    2009-01-01

    Concerns have been raised regarding the availability of National Institute for Occupational Safety and Health (NIOSH)-certified N95 filtering facepiece respirators (FFRs) during an influenza pandemic. One possible strategy to mitigate a respirator shortage is to reuse FFRs following a biological decontamination process to render infectious material on the FFR inactive. However, little data exist on the effects of decontamination methods on respirator integrity and performance. This study evaluated five decontamination methods [ultraviolet germicidal irradiation (UVGI), ethylene oxide, vaporized hydrogen peroxide (VHP), microwave oven irradiation, and bleach] using nine models of NIOSH-certified respirators (three models each of N95 FFRs, surgical N95 respirators, and P100 FFRs) to determine which methods should be considered for future research studies. Following treatment by each decontamination method, the FFRs were evaluated for changes in physical appearance, odor, and laboratory performance (filter aerosol penetration and filter airflow resistance). Additional experiments (dry heat laboratory oven exposures, off-gassing, and FFR hydrophobicity) were subsequently conducted to better understand material properties and possible health risks to the respirator user following decontamination. However, this study did not assess the efficiency of the decontamination methods to inactivate viable microorganisms. Microwave oven irradiation melted samples from two FFR models. The remainder of the FFR samples that had been decontaminated had expected levels of filter aerosol penetration and filter airflow resistance. The scent of bleach remained noticeable following overnight drying and low levels of chlorine gas were found to off-gas from bleach-decontaminated FFRs when rehydrated with deionized water. UVGI, ethylene oxide (EtO), and VHP were found to be the most promising decontamination methods; however, concerns remain about the throughput capabilities for EtO and VHP

  4. Interpolation and elevation errors: the impact of the DEM resolution

    NASA Astrophysics Data System (ADS)

    Achilleos, Georgios A.

    2015-06-01

    Digital Elevation Models (DEMs) are developing and evolving at a fast pace, given the progress of computer science and technology. This development, though, is not accompanied by an advancement of knowledge about the quality of the models and their inherent inaccuracy. On most occasions the user is not aware of this quality, and thus not aware of the corresponding product uncertainty. Extensive research has been conducted - and still is - in this direction. The research presented in this paper analyses the behavior of the elevation errors recorded in a DEM. The behavior of these elevation errors is caused by altering the DEM resolution when the interpolation algorithm is applied. Contour lines from a topographical map are used as input data. Elevation errors are calculated at the positions of the initial input data and wherever the elevation is known. The recorded elevation errors are analyzed in order to reach conclusions about their distribution and the way in which they occur.
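
    A sketch of the kind of error analysis described above: contour-derived elevation points are gridded at a chosen cell size, and the residual (known minus interpolated elevation) is recorded at each input point. scipy's griddata is used here as a generic linear interpolator; the paper's own interpolation algorithm may differ.

    ```python
    # Residuals at known points after gridding at a given DEM cell size.
    import numpy as np
    from scipy.interpolate import griddata

    def residuals_at_known_points(xy, z, cell_size):
        """xy: (N, 2) point coordinates from contours; z: (N,) known elevations."""
        x_min, y_min = xy.min(axis=0)
        x_max, y_max = xy.max(axis=0)
        gx, gy = np.meshgrid(np.arange(x_min, x_max, cell_size),
                             np.arange(y_min, y_max, cell_size))
        dem = griddata(xy, z, (gx, gy), method="linear")   # build the DEM grid
        ix = np.clip(((xy[:, 0] - x_min) / cell_size).astype(int), 0, gx.shape[1] - 1)
        iy = np.clip(((xy[:, 1] - y_min) / cell_size).astype(int), 0, gx.shape[0] - 1)
        return z - dem[iy, ix]                             # NaN outside the data hull

    # Comparing residual distributions for, e.g., 10 m vs 50 m cells shows how
    # the chosen resolution propagates into elevation error.
    ```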

  5. Evaluation of DNA extraction methods from complex phototrophic biofilms.

    PubMed

    Ferrera, Isabel; Massana, Ramon; Balagué, Vanessa; Pedrós-Alió, Carles; Sánchez, Olga; Mas, Jordi

    2010-04-01

    Phototrophic biofilms are used in a variety of biotechnological and industrial processes. Understanding their structure, ie microbial composition, is a necessary step for understanding their function and, ultimately, for the success of their application. DNA analysis methods can be used to obtain information on the taxonomic composition and relative abundance of the biofilm members. The potential bias introduced by DNA extraction methods in the study of the diversity of a complex phototrophic sulfide-oxidizing biofilm was examined. The efficiency of eight different DNA extraction methods combining physical, mechanical and chemical procedures was assessed. Methods were compared in terms of extraction efficiency, measured by DNA quantification, and detectable diversity (16S rRNA genes recovered), evaluated by denaturing gradient gel electrophoresis (DGGE). Significant differences were found in DNA yields ranging from 116 +/- 12 to 1893 +/- 96 ng of DNA. The different DGGE fingerprints ranged from 7 to 12 bands. Methods including phenol-chloroform extraction after enzymatic lysis resulted in the greatest DNA yields and detectable diversity. Additionally, two methods showing similar yields and retrieved diversity were compared by cloning and sequencing. Clones belonging to members of the Alpha-, Beta- and Gamma- proteobacteria, Bacteroidetes, Cyanobacteria and to the Firmicutes were recovered from both libraries. However, when bead-beating was applied, clones belonging to the Deltaproteobacteria were also recovered, as well as plastid signatures. Phenol-chloroform extraction after bead-beating and enzymatic lysis was therefore considered to be the most suitable method for DNA extraction from such highly diverse phototrophic biofilms.

  6. [Evaluation in the health sector: concepts and methods].

    PubMed

    Contandriopoulos, A P; Champagne, F; Denis, J L; Avargues, M C

    2000-12-01

    what is good and right). Did the intervention correspond to what should have been done according to the standards utilized? Evaluative research aims to employ valid scientific methods to analyze relationships between different components of an intervention. More specifically, evaluation research can be classified into six types of analysis, which employ different research strategies. Strategic analysis allows appreciation of the pertinence of an intervention; logical analysis, the soundness of the theoretical and operational bases of the intervention; productivity analysis, the technical efficiency with which resources are mobilized to produce goods or services; analysis of effects, effectiveness of goods and services in producing results; efficiency analysis, relations between the costs of the resources (or the services) used and the results; implementation analysis, appreciation of interactions between the process of the intervention and the context of implementation in the production of effects. The official finalities of all evaluation processes are of four types: (1)strategic, to aid the planning and development of an intervention, (2) formative, to supply information to improve an intervention in progress, (3) summative, to determine the effects of an intervention (to decide if it should be maintained, transformed or suspended), (4) fundamental, to contribute to the advancement of empirical and theoretical knowledge regarding the intervention. In addition, experience acquired in the field of evaluation suggests that evaluation is also productive in that it allows actors, in an organized setting, to reconsider the links between the objectives given, practices developed and their context of action. This task of achieving coherence is continuous and is one of the intrinsic conditions of action in an organized setting. In this perspective, evaluation can have a key role, given that it is not employed to legitimize new forms of control but rather to favor debate and

  7. A FEM-DEM technique for studying the motion of particles in non-Newtonian fluids. Application to the transport of drill cuttings in wellbores

    NASA Astrophysics Data System (ADS)

    Celigueta, Miguel Angel; Deshpande, Kedar M.; Latorre, Salvador; Oñate, Eugenio

    2016-04-01

    We present a procedure for coupling the finite element method (FEM) and the discrete element method (DEM) for analysis of the motion of particles in non-Newtonian fluids. Particles are assumed to be spherical and immersed in the fluid mesh. A new method for computing the drag force on the particles in a non-Newtonian fluid is presented. A drag force correction for non-spherical particles is proposed. The FEM-DEM coupling procedure is explained for Eulerian and Lagrangian flows, and the basic expressions of the discretized solution algorithm are given. The usefulness of the FEM-DEM technique is demonstrated in its application to the transport of drill cuttings in wellbores.
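
    A generic sketch, explicitly not the paper's new drag formulation: the drag force on a sphere moving through a power-law fluid (tau = K * gamma_dot**n), using an apparent viscosity to form a generalized Reynolds number and a Schiller-Naumann-type drag law, with a placeholder factor standing in for a non-spherical correction.

    ```python
    # Drag on a sphere in a power-law fluid: illustrative sketch only.
    import numpy as np

    def drag_force(v, d, rho_fluid, K, n, shape_factor=1.0):
        """v: relative speed [m/s]; d: diameter [m]; K, n: power-law parameters."""
        re = rho_fluid * v ** (2.0 - n) * d ** n / K        # generalized Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)         # Schiller-Naumann-type law
        area = np.pi * d ** 2 / 4.0
        return shape_factor * cd * 0.5 * rho_fluid * v ** 2 * area

    print(drag_force(v=0.2, d=0.004, rho_fluid=1200.0, K=0.8, n=0.6))  # illustrative values
    ```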

  8. DEM simulation of growth normal fault slip

    NASA Astrophysics Data System (ADS)

    Chu, Sheng-Shin; Lin, Ming-Lang; Nien, Wie-Tung; Chan, Pei-Chen

    2014-05-01

    Slip of a fault can cause deformation of shallower soil layers and lead to the destruction of infrastructure. The Shanchiao fault on the west side of the Taipei basin is categorized as an active fault. Activity of the Shanchiao fault will cause the Quaternary sediments underneath the Taipei basin to become deformed, damaging structures, transportation infrastructure, and utility lines within the area. Geological drilling and dating data indicate that the Shanchiao fault exhibits growth faulting. In the experiment, a sandbox model was built with non-cohesive sand to simulate the existence of a growth fault in the Shanchiao fault and to forecast its effect on the extent of shear-band development and differential ground deformation. The experimental results showed that, for a normal fault containing a growth fault, the shear band at the basement offset develops upward along the weak side of the shear band of the original overlying soil layer, and this shear band reaches the surface much faster than in the case of a single cover layer. The offset ratio (basement slip / lower cover-soil thickness) required is only about one third of that for a single cover-soil layer. In this research, numerical simulation of the sandbox experiment was attempted with a discrete element method program, PFC2D, to simulate the pace and extent of shear-band development in the overlying sand layer during normal growth-fault slip. The simulation results were very close to the outcome of the sandbox experiment. The approach can be extended to the design of water pipeline projects around fault zones in the future. Keywords: Taipei Basin, Shanchiao fault, growth fault, PFC2D

  9. [Imputation methods for missing data in educational diagnostic evaluation].

    PubMed

    Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José

    2012-02-01

    In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
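
    A sketch of three of the simpler recovery rules compared above, applied to a subjects-by-items score matrix with NaN marking missing answers. The regression-based and EM-based imputations would typically come from a statistics library and are not reproduced here.

    ```python
    # Listwise deletion, item-mean imputation and subject-mean imputation.
    import numpy as np

    def listwise_deletion(X):
        """Drop every subject (row) with at least one missing item."""
        return X[~np.isnan(X).any(axis=1)]

    def impute_item_mean(X):
        X = X.copy()
        item_means = np.nanmean(X, axis=0)
        rows, cols = np.where(np.isnan(X))
        X[rows, cols] = item_means[cols]
        return X

    def impute_subject_mean(X):
        X = X.copy()
        subj_means = np.nanmean(X, axis=1)
        rows, cols = np.where(np.isnan(X))
        X[rows, cols] = subj_means[rows]
        return X
    ```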

  10. Lunar-base construction equipment and methods evaluation

    NASA Astrophysics Data System (ADS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-07-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  11. [Preliminary study on pharmacodynamic evaluation method of Houpo formula particles].

    PubMed

    Ma, Lu; Shao, Li-Jie; Tang, Fang

    2014-04-01

    To discuss the feasibility of a pharmacodynamic evaluation method for traditional Chinese medicine (TCM) formula particles, using the traditional decoction as a reference and the intervention of Magnoliae Officinalis Cortex in rats with ulcerative colitis (UC). First, the similarity of the traditional Magnoliae Officinalis Cortex decoction and the formula particles of different manufacturers was established using IR fingerprints. The UC rat model was established and given Houpo formula particles of different doses and manufacturers, with the decoction as a reference, in order to observe the disease activity index (DAI), colon mucosa damage index (CMDI), pathological changes, nitric oxide (NO), endothelin (ET), substance P, and vasoactive intestinal peptide (VIP). Their intervention effects on UC rats were compared to study the difference between the Sanjiu and Tianjiang Houpo formula particles, in order to demonstrate the feasibility of the pharmacodynamic evaluation method for Houpo formula particles. According to the results, Houpo formula particles showed pharmacodynamic actions similar to those of the traditional decoction. The pharmacodynamic comparison of Houpo formula particles from different manufacturers showed no statistically significant differences. The experiment showed that, on the basis of the TCM compound, a prescription dismantlement study can be conducted to define the target points of the various drugs, with the traditional decoction selected as the reference when comparing the corresponding formula particles for pharmacodynamic equivalence. This method avoids controversies about single or combined boiling of formula particles and gives an objective assessment of the pharmacodynamic effect of the formula particles. The method is proved to be feasible. PMID:25039188

  12. A Method of Evaluating Atmospheric Models Using Tracer Measurements.

    NASA Astrophysics Data System (ADS)

    Korain, Darko; Frye, James; Isakov, Vlad

    2000-02-01

    The authors have developed a method that uses tracer measurements as the basis for comparing and evaluating wind fields. An important advantage of the method is that the wind fields are evaluated from the tracer measurements without introducing dispersion calculations. The method can be applied to wind fields predicted by different atmospheric models or to wind fields obtained from interpolation and extrapolation of measured data. The method uses a cost function to quantify the success of wind fields in representing tracer transport. A cost function, `tracer potential,' is defined to account for the magnitude of the tracer concentration at the tracer receptors and the separation between each segment of a trajectory representing wind field transport and each of the tracer receptors. The tracer potential resembles a general expression for a physical potential because the success of a wind field trajectory is directly proportional to the magnitude of the tracer concentration and inversely proportional to its distance from this concentration. A reference tracer potential is required to evaluate the relative success of the wind fields and is defined by the initial location of any trajectory at the source. Then the method is used to calculate continuously the tracer potential along each trajectory as determined by the wind fields in time and space. Increased potential relative to the reference potential along the trajectory indicates good performance of the wind fields and vice versa. If there is sufficient spatial coverage of near and far receptors around the source, then the net tracer potential area can be used to infer the overall success of the wind fields. If there are mainly near-source receptors, then the positive tracer potential area should be used. If the vertical velocity of the wind fields is not available, then the success of the wind fields can be estimated from the vertically integrated area under the tracer potential curve. A trajectory with a maximum
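
    A sketch of the cost-function idea as described above: along a trajectory, each receptor contributes a term proportional to its measured tracer concentration and inversely proportional to its distance from the trajectory point. The exact functional form and normalization used by the authors may differ; the small offset is an assumption to avoid division by zero.

    ```python
    # Tracer potential along a wind-field trajectory.
    import numpy as np

    def tracer_potential(trajectory, receptors, concentrations, eps=1.0):
        """trajectory: (T, 2) positions; receptors: (R, 2); concentrations: (R,)."""
        pot = np.zeros(len(trajectory))
        for k, p in enumerate(trajectory):
            d = np.linalg.norm(receptors - p, axis=1)
            pot[k] = np.sum(concentrations / (d + eps))
        return pot

    # A wind field whose trajectory gains potential relative to the value at the
    # source is judged to represent the observed tracer transport well.
    ```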

  13. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result is either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
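
    A toy illustration of the weighting step only: random model-parameter samples produce predicted cross sections, and each sample is weighted by its agreement with experimental data, here via w ∝ exp(−χ²/2). The real procedure runs TALYS against EXFOR; this sketch just shows the weighted Monte Carlo logic under that simple likelihood assumption.

    ```python
    # Experiment-weighted Monte Carlo sampling sketch.
    import numpy as np

    def bmc_weights(predictions, exp_values, exp_sigmas):
        """predictions: (S, M) sampled model outputs; exp_values, exp_sigmas: (M,)."""
        chi2 = np.sum(((predictions - exp_values) / exp_sigmas) ** 2, axis=1)
        w = np.exp(-0.5 * (chi2 - chi2.min()))     # shift by the minimum for stability
        return w / w.sum()

    def weighted_mean_and_cov(samples, weights):
        """Weighted mean and covariance of the sampled model parameters."""
        mean = weights @ samples
        centered = samples - mean
        cov = centered.T @ (weights[:, None] * centered)
        return mean, cov
    ```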

  14. A practical method to evaluate radiofrequency exposure of mast workers.

    PubMed

    Alanko, Tommi; Hietanen, Maila

    2008-01-01

    Assessment of occupational exposure to radiofrequency (RF) fields in telecommunication transmitter masts is a challenging task. For conventional field strength measurements using manually operated instruments, it is difficult to document the locations of measurements while climbing up a mast. Logging RF dosemeters worn by the workers, on the other hand, do not give any information about the location of the exposure. In this study, a practical method was developed and applied to assess mast workers' exposure to RF fields and the corresponding location. This method uses a logging dosemeter for personal RF exposure evaluation and two logging barometers to determine the corresponding height of the worker's position on the mast. The procedure is not intended to be used for compliance assessments, but to indicate locations where stricter assessments are needed. The applicability of the method is demonstrated by making measurements in a TV and radio transmitting mast.
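
    The height retrieval from the two logging barometers can be approximated with the hypsometric equation; the sketch below is only a plausible reconstruction (function name, constant-temperature assumption and example pressures are illustrative), not the study's actual calibration procedure.

    ```python
    import math

    def height_above_ground(p_worker_hpa, p_ground_hpa, temp_c=15.0):
        """Approximate height of the worker from simultaneous barometer readings.

        Uses the hypsometric equation with a constant air temperature; real use
        would require the barometer pair to be synchronised and calibrated.
        """
        R, g = 287.05, 9.81                 # dry-air gas constant (J/kg/K), gravity (m/s^2)
        T = temp_c + 273.15                 # kelvin
        return (R * T / g) * math.log(p_ground_hpa / p_worker_hpa)

    # Example: 1000 hPa at the ground and 988 hPa at the worker gives roughly 100 m.
    ```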

  15. Evaluation of DNA extraction methods of rumen microbial populations.

    PubMed

    Villegas-Rivera, Gabriela; Vargas-Cabrera, Yevani; González-Silva, Napoleón; Aguilera-García, Florentino; Gutiérrez-Vázquez, Ernestina; Bravo-Patiño, Alejandro; Cajero-Juárez, Marcos; Baizabal-Aguirre, Víctor Manuel; Valdez-Alarcón, Juan José

    2013-02-01

    The dynamism of microbial populations in the rumen has been studied with molecular methods that analyze single nucleotide polymorphisms of ribosomal RNA gene fragments (rDNA). Therefore, DNA of good quality is needed for this kind of analysis. In this work we report the evaluation of four DNA extraction protocols (mechanical lysis or chemical lysis with CTAB, ethylxanthogenate or DNAzol®) from ruminal fluid. The suitability of two of these protocols (mechanical lysis and DNAzol®) was tested on single-strand conformation polymorphism (SSCP) of rDNA of rumen microbial populations. DNAzol® was a simple method that rendered good integrity, yield and purity. With this method, subtle changes in protozoan populations were detected in young bulls fed with slightly different formulations of a supplement of multinutritional blocks of molasses and urea. Sequences related to Eudiplodinium maggi and to a non-cultured entodiniomorphid similar to Entodinium caudatum corresponded to the major fluctuating populations in the SSCP assay.

  16. Evaluation of Methods to Predict Reactivity of Gold Nanoparticles

    SciTech Connect

    Allison, Thomas C.; Tong, Yu ye J.

    2011-06-20

    Several methods have appeared in the literature for predicting reactivity on metallic surfaces and on the surface of metallic nanoparticles. All of these methods have some relationship to the concept of frontier molecular orbital theory. The d-band theory of Hammer and Nørskov is perhaps the most widely used predictor of reactivity on metallic surfaces, and it has been successfully applied in many cases. Use of the Fukui function and the condensed Fukui function is well established in organic chemistry, but has not been so widely applied in predicting the reactivity of metallic nanoclusters. In this article, we will evaluate the usefulness of the condensed Fukui function in predicting the reactivity of a family of cubo-octahedral gold nanoparticles and make comparison with the d-band method.
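
    For readers unfamiliar with the condensed Fukui function, the finite-difference definitions in terms of atomic partial charges are compact enough to sketch; the function below is only the textbook formula, and the charge-partitioning scheme and nanoparticle geometries used in the paper are not reproduced.

    ```python
    import numpy as np

    def condensed_fukui(q_N, q_N_plus_1, q_N_minus_1):
        """Condensed Fukui functions per atom from partial charges of the N, N+1
        and N-1 electron systems (finite-difference approximation)."""
        q_N, q_Np, q_Nm = map(np.asarray, (q_N, q_N_plus_1, q_N_minus_1))
        f_plus = q_N - q_Np                 # susceptibility to nucleophilic attack
        f_minus = q_Nm - q_N                # susceptibility to electrophilic attack
        f_zero = 0.5 * (f_plus + f_minus)   # radical attack
        return f_plus, f_minus, f_zero
    ```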

  17. Evaluation of Two Fractal Methods for Magnetogram Image Analysis

    NASA Technical Reports Server (NTRS)

    Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.

    1997-01-01

    Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply these methods to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
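
    The log-log slope idea common to box-counting estimators can be shown in a few lines. The sketch below is the plain binary box-counting scheme, not the Differential Box Counting or Jaenisch variants evaluated in the paper, and the thresholding choice is an assumption.

    ```python
    import numpy as np

    def box_counting_dimension(image, threshold=None):
        """Estimate a fractal dimension from the slope of log(count) vs log(box size)."""
        img = np.asarray(image, dtype=float)
        binary = img > (img.mean() if threshold is None else threshold)
        sizes, counts = [], []
        size = min(binary.shape) // 2
        while size >= 1:
            count = 0
            for i in range(0, binary.shape[0], size):
                for j in range(0, binary.shape[1], size):
                    if binary[i:i + size, j:j + size].any():
                        count += 1
            sizes.append(size)
            counts.append(count)
            size //= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope          # dimension is minus the slope of the fit
    ```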

  18. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.

  19. Evaluation of Alternate Stainless Steel Surface Passivation Methods

    SciTech Connect

    Clark, Elliot A.

    2005-05-31

    Stainless steel containers were assembled from parts passivated by four commercial vendors using three passivation methods. The performance of these containers in storing hydrogen isotope mixtures was evaluated by monitoring the composition of an initially 50% H₂ / 50% D₂ gas mixture over time using mass spectrometry. Commercial passivation by electropolishing appears to result in surfaces that do not catalyze hydrogen isotope exchange. This method of surface passivation shows promise for tritium service, and should be studied further and considered for use. On the other hand, nitric acid passivation and citric acid passivation may produce surfaces that still catalyze the isotope exchange reaction H₂ + D₂ → 2HD. These methods should not be considered to replace the proprietary passivation processes of the two current vendors used at the Savannah River Site Tritium Facility.

  20. A practical method to evaluate radiofrequency exposure of mast workers.

    PubMed

    Alanko, Tommi; Hietanen, Maila

    2008-01-01

    Assessment of occupational exposure to radiofrequency (RF) fields in telecommunication transmitter masts is a challenging task. For conventional field strength measurements using manually operated instruments, it is difficult to document the locations of measurements while climbing up a mast. Logging RF dosemeters worn by the workers, on the other hand, do not give any information about the location of the exposure. In this study, a practical method was developed and applied to assess mast workers' exposure to RF fields and the corresponding location. This method uses a logging dosemeter for personal RF exposure evaluation and two logging barometers to determine the corresponding height of the worker's position on the mast. The procedure is not intended to be used for compliance assessments, but to indicate locations where stricter assessments are needed. The applicability of the method is demonstrated by making measurements in a TV and radio transmitting mast. PMID:19054796

  1. Temporal monitoring of Bardarbunga volcanic activity with TanDEM-X

    NASA Astrophysics Data System (ADS)

    Rossi, C.; Minet, C.; Fritz, T.; Eineder, M.; Erten, E.

    2015-12-01

    On August 29, 2014, volcanic activity started in the lava field of Holuhraun, northeast of the Bardarbunga caldera in Iceland. The activity was declared over on February 27, 2015, thus lasting about six months. During these months the magma chamber below the caldera slowly emptied, causing the rare event of a caldera collapse. In this scenario, TanDEM-X remote sensing data are of particular interest. By producing high-resolution, accurate elevation models of the caldera, it is possible to evaluate volume losses and topographic changes that improve our understanding of the volcanic activity dynamics. Five TanDEM-X InSAR acquisitions were commanded between August 01, 2014 and November 08, 2014: two before the eruption and three afterwards. To fully cover the volcanic activity, the lava flow area to the north-west of the caldera was also monitored, and a couple of acquisitions were employed to reveal the subglacial graben structure and the lava path. In this context, the expected elevation accuracy is studied on two levels. Absolute height accuracy is analyzed by inspecting the signal propagation at X-band in the imaged medium. Relative height accuracy is analyzed by investigating the InSAR system parameters and the local geomorphology. The system proves to be very accurate, with mean height errors below one meter. Moreover, neither InSAR processing issues, e.g. phase unwrapping errors, nor complex DEM calibration aspects pose problems. The caldera is imaged in its entirety, and new cauldron formations and, in general, the complete restructuring of the glacial volcanic system are well represented. An impressive caldera volume loss of about 1 billion cubic meters is measured in about two months. The dyke propagation from the Bardarbunga cauldron to the Holuhraun lava field is also revealed, and a graben structure with a width of up to 1 km and a subsidence of a few meters is derived.
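
    The volume-loss figure comes, in essence, from differencing two co-registered DEMs and integrating over the caldera; a minimal sketch is given below. The function name and inputs are assumptions, and the operational TanDEM-X chain (calibration, masking, phase-unwrapping checks) involves far more than this.

    ```python
    import numpy as np

    def volume_change_m3(dem_before, dem_after, cell_size_m):
        """Volume change from two co-registered DEMs (negative values mean loss).

        Both DEMs must share the same grid and be in metres; nodata cells should
        be NaN so they are ignored by nansum.
        """
        diff = np.asarray(dem_after, dtype=float) - np.asarray(dem_before, dtype=float)
        return np.nansum(diff) * cell_size_m ** 2
    ```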

  2. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels was defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation is to develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060

  3. Feedback module for evaluating optical-power stabilization methods

    NASA Astrophysics Data System (ADS)

    Downing, John

    2016-03-01

    A feedback module for evaluating the efficacy of optical-power stabilization without thermoelectric coolers (TECs) is described. The module comprises a pickoff optic for sampling a light beam, a photodiode for converting the sample power to electrical current, and a temperature sensor. The components are mounted on an optical bench that makes accurate (0.05°) beam alignment practical as well as providing high thermal conductivity among the components. The module can be mounted on existing light sources, or the components can be incorporated in new designs. Evaluations of optical and electronic stabilization methods are also reported. The optical method combines a novel, weakly reflective, weakly polarizing coating on the pickoff optic with a photodiode and an automatic-power-control (APC) circuit in a closed loop. The shift of emitter wavelength with temperature, coupled with the wavelength-dependent reflectance of the pickoff optic, enables the APC circuit to compensate for temperature errors. In the electronic method, a mixed-signal processor in a quasiclosed loop generates a control signal from temperature and photocurrent inputs and feeds it back to an APC circuit to compensate for temperature errors. These methods result in temperature coefficients less than 20 ppm/°C and relative rms power equal to 0.5% for the optical method and 0.02% for the electronic method. The latter value represents an order of magnitude improvement over rms specifications for cooled laser-diode modules, and a five-fold improvement in wall-plug efficiency is achieved by eliminating TECs.

  4. Performance evaluation of fault detection methods for wastewater treatment processes.

    PubMed

    Corominas, Lluís; Villez, Kris; Aguado, Daniel; Rieger, Leiv; Rosén, Christian; Vanrolleghem, Peter A

    2011-02-01

    Several methods to detect faults have been developed in various fields, mainly in chemical and process engineering. However, minimal practical guidelines exist for their selection and application. This work presents an index that allows for evaluating monitoring and diagnosis performance of fault detection methods, which takes into account several characteristics, such as false alarms, false acceptance, and undesirable switching from correct detection to non-detection during a fault event. The usefulness of the index to process engineering is demonstrated first by application to a simple example. Then, it is used to compare five univariate fault detection methods (Shewhart, EWMA, and residuals of EWMA) applied to the simulated results of the Benchmark Simulation Model No. 1 long-term (BSM1_LT). The BSM1_LT, provided by the IWA Task Group on Benchmarking of Control Strategies, is a simulation platform that allows for creating sensor and actuator faults and process disturbances in a wastewater treatment plant. The results from the method comparison using BSM1_LT show better performance to detect a sensor measurement shift for adaptive methods (residuals of EWMA) and when monitoring the actuator signals in a control loop (e.g., airflow). Overall, the proposed index is able to screen fault detection methods.
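
    One of the chart types compared, the EWMA control chart, is compact enough to sketch. The in-control window used to estimate the mean and standard deviation is an assumption, and the paper's benchmark index (which scores false alarms, missed detections and detection drop-outs) is not reproduced here.

    ```python
    import numpy as np

    def ewma_alarms(x, lam=0.2, L=3.0, n_incontrol=50):
        """Flag samples whose EWMA statistic leaves the control band.

        mu and sigma are estimated from an assumed in-control window at the start
        of the record; the band half-width is L * sigma * sqrt(lam / (2 - lam)).
        """
        x = np.asarray(x, dtype=float)
        mu, sigma = x[:n_incontrol].mean(), x[:n_incontrol].std()
        limit = L * sigma * np.sqrt(lam / (2.0 - lam))
        z, alarms = mu, np.zeros(len(x), dtype=bool)
        for t, xt in enumerate(x):
            z = lam * xt + (1.0 - lam) * z
            alarms[t] = abs(z - mu) > limit
        return alarms
    ```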

  5. Evaluation of Perrhenate Spectrophotometric Methods in Bicarbonate and Nitrate Media.

    PubMed

    Lenell, Brian A; Arai, Yuji

    2016-04-01

    2-Pyridyl thiourea- and methyl-2-pyridyl ketoxime-based perrhenate, Re(VII), UV-vis spectrophotometric methods were evaluated in nitrate and bicarbonate solutions ranging from 0.001 M to 0.5 M. Standard curves at [Re] = 2.5-50 mg L⁻¹ for the Re(IV)-thiourea and the Re ketoxime complexes were constructed at 405 nm and 490 nm, respectively. Detection limits for the N-(2-pyridyl) thiourea and methyl-2-pyridyl ketoxime methods in ultrapure water are 3.06 mg/L and 4.03 mg/L, respectively. Influences of NaHCO3 and NaNO3 concentration on absorbance spectra, absorptivity, and linearity were documented. For both methods, samples in ultrapure water and NaHCO3 have an R² value > 0.99, indicating strong linear relationships. Statistical analysis supports that NaHCO3 does not affect linearity between standards for either method. NaNO3 causes major interference with the ketoxime method above 0.001 M NaNO3. These data provide information for the practical use of Re spectrophotometric methods in environmental media that are high in bicarbonate and nitrate. PMID:26838460

  6. Methods to monitor and evaluate household waste prevention.

    PubMed

    Sharp, Veronica; Giorgi, Sara; Wilson, David C

    2010-03-01

    This paper presents one strand of the findings from a comprehensive synthesis review of the policy-relevant evidence on household waste prevention. The focus herein is on how to measure waste prevention: it is always difficult to measure what is not there. Yet reliable and robust monitoring and evaluation of household waste prevention interventions is essential, to enable policy makers, local authorities and practitioners to: (a) collect robust and high quality data; (b) ensure robust decisions are made about where to prioritize resources; and (c) ensure that waste prevention initiatives are being effective and delivering behaviour change. The evidence reveals a range of methods for monitoring and evaluation, including self-weighing; pre- and post-intervention surveys, focusing on attitudes and behaviours and/or on participation rates; tracking waste arisings via collection data and/or compositional analysis; and estimation/modelling. There appears to be an emerging consensus that no single approach is sufficient on its own, rather a 'hybrid' method using a suite of monitoring approaches - usually including surveys, waste tonnage data and monitoring of campaigns - is recommended. The evidence concurs that there is no benefit in trying to further collate evidence from past waste prevention projects, other than to establish, in a few selected cases, if waste prevention behaviour has been sustained beyond cessation of the active intervention campaign. A more promising way forward is to ensure that new intervention campaigns are properly evaluated and that the evidence is captured and collated into a common resource.

  7. A method to evaluate hydraulic fracture using proppant detection.

    PubMed

    Liu, Juntao; Zhang, Feng; Gardner, Robin P; Hou, Guojing; Zhang, Quanying; Li, Hu

    2015-11-01

    Accurate determination of the proppant placement and propped fracture height is important for evaluating and optimizing stimulation strategies. A technology using non-radioactive proppant and a pulsed neutron gamma energy spectra logging tool to determine the placement and height of propped fractures is proposed. Gd2O3 was incorporated into ceramic proppant and a Monte Carlo method was utilized to build the logging tools and formation models. Characteristic responses of the recorded information of different logging tools to fracture widths, proppant concentrations and influencing factors were studied. The results show that Gd capture gamma rays can be used to evaluate propped fractures, with higher sensitivity to changes in fracture width and traceable proppant content than the existing non-radioactive proppant evaluation techniques, and only an after-fracture measurement is needed for the new method. The changes in gas saturation and borehole size have a great impact on determining propped fractures when compensated neutron and pulsed neutron capture tools are used. A field example is presented to validate the application of the new technique.

  8. Simple method for performance evaluation of multistage rockets

    NASA Astrophysics Data System (ADS)

    Pontani, Mauro; Teofilatto, Paolo

    2014-01-01

    Multistage rockets are commonly employed to place spacecraft and satellites in their operational orbits. Performance evaluation of multistage rockets is aimed at defining the maximum payload mass at orbit injection, for specified structural, propulsive, and aerodynamic data of the launch vehicle. This work proposes a simple method for a fast performance evaluation of multistage rockets. The technique at hand is based on three steps: (i) the flight-path angle at each stage separation is guessed, (ii) the spacecraft velocity is maximized at the first and second stage separation, and (iii) for the last stage the thrust direction is obtained through the particle swarm optimization technique, in conjunction with the use of the Euler-Lagrange equations and the Pontryagin minimum principle. The coast duration at the second stage separation is optimized as well. The method at hand is extremely simple and easy-to-implement, but nevertheless it proves to be capable of yielding near-optimal ascending trajectories for a multistage launch vehicle with realistic structural, propulsive, and aerodynamic characteristics. The solutions found with the technique under consideration can be employed either for a rapid evaluation of the multistage rocket performance or as guesses for more refined optimization algorithms.

  9. Methods to monitor and evaluate household waste prevention.

    PubMed

    Sharp, Veronica; Giorgi, Sara; Wilson, David C

    2010-03-01

    This paper presents one strand of the findings from a comprehensive synthesis review of the policy-relevant evidence on household waste prevention. The focus herein is on how to measure waste prevention: it is always difficult to measure what is not there. Yet reliable and robust monitoring and evaluation of household waste prevention interventions is essential, to enable policy makers, local authorities and practitioners to: (a) collect robust and high quality data; (b) ensure robust decisions are made about where to prioritize resources; and (c) ensure that waste prevention initiatives are being effective and delivering behaviour change. The evidence reveals a range of methods for monitoring and evaluation, including self-weighing; pre- and post-intervention surveys, focusing on attitudes and behaviours and/or on participation rates; tracking waste arisings via collection data and/or compositional analysis; and estimation/modelling. There appears to be an emerging consensus that no single approach is sufficient on its own, rather a 'hybrid' method using a suite of monitoring approaches - usually including surveys, waste tonnage data and monitoring of campaigns - is recommended. The evidence concurs that there is no benefit in trying to further collate evidence from past waste prevention projects, other than to establish, in a few selected cases, if waste prevention behaviour has been sustained beyond cessation of the active intervention campaign. A more promising way forward is to ensure that new intervention campaigns are properly evaluated and that the evidence is captured and collated into a common resource. PMID:20215493

  10. A method to evaluate hydraulic fracture using proppant detection.

    PubMed

    Liu, Juntao; Zhang, Feng; Gardner, Robin P; Hou, Guojing; Zhang, Quanying; Li, Hu

    2015-11-01

    Accurate determination of the proppant placement and propped fracture height is important for evaluating and optimizing stimulation strategies. A technology using non-radioactive proppant and a pulsed neutron gamma energy spectra logging tool to determine the placement and height of propped fractures is proposed. Gd2O3 was incorporated into ceramic proppant and a Monte Carlo method was utilized to build the logging tools and formation models. Characteristic responses of the recorded information of different logging tools to fracture widths, proppant concentrations and influencing factors were studied. The results show that Gd capture gamma rays can be used to evaluate propped fractures, with higher sensitivity to changes in fracture width and traceable proppant content than the existing non-radioactive proppant evaluation techniques, and only an after-fracture measurement is needed for the new method. The changes in gas saturation and borehole size have a great impact on determining propped fractures when compensated neutron and pulsed neutron capture tools are used. A field example is presented to validate the application of the new technique. PMID:26296059

  11. A NEW INSAR DERIVED DEM OF BLACK RAPIDS GLACIER

    NASA Astrophysics Data System (ADS)

    Shugar, D. H.; Rabus, B.; Clague, J. J.

    2009-12-01

    We have constructed a new digital elevation model representing the 1995 surface of surge-type Black Rapids Glacier and the surrounding central Alaska Range, using ERS-1/2 repeat-pass interferometry. First, we isolated the topographic phase from three interferograms with contrasting perpendicular baselines. Next we attempted to automatically unwrap this topographic phase but encountered numerous errors due to the terrain containing areas of poor coherence from fringe aliasing, radar layover or shadow. We then consistently corrected these persistent phase-unwrapping errors in all three interferograms using an iterative semi-automated approach that capitalizes on the multi-baseline nature of the data set. Over the surface of Black Rapids Glacier, the accuracy of the new DEM is estimated at better than +/- 12 m. Ground-surveyed spot elevations from 1995 corroborate this accuracy estimate. Comparison of the new DEM with a 1951 U.S. Geological Survey topographic map, and with ground survey data from other years, shows the gradual return of Black Rapids Glacier to pre-surge conditions. In the 44-year period between 1951 and 1995 the observed average steepening of the longitudinal profile is ~0.6°. The maximum elevation changes in the ablation and accumulation zones are -256 m and +75 m, respectively, suggesting corresponding average rates of elevation change of about -5.8 m/yr and +1.7 m/yr. These rates are 1.5-2 times higher than those indicated by the ground survey spot elevation measurements over the period 1975 to 2005. Considering the significant overlap of the two periods of measurement, the inferred average rates for 1951-1975 would have to be very large (-7.5 m/yr and +2.3 m/yr, respectively) for these two findings to be consistent. A second comparison with the recently released ASTER G-DEM (data from 2001) led to no glaciologically usable results due to major artifacts in the ASTER G-DEM. We therefore conclude that the 1951 U.S. Geological Survey map and the

  12. Analysis and Validation of Grid dem Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered one of the most relevant geospatial data sets for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m²) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data set was decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.
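
    The TLI baseline used for comparison is essentially Delaunay-based linear interpolation of the scattered LiDAR points onto a regular grid; a hedged sketch using SciPy is shown below. The grid parameters and function name are illustrative, and the GMRF interpolator itself, with its per-cell uncertainty, is not sketched.

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def tli_grid_dem(points_xy, elevations, grid_res=1.0):
        """Grid a scattered point cloud with Triangulation + Linear Interpolation.

        Returns the grid coordinates and interpolated elevations; cells outside
        the convex hull of the input points are NaN.
        """
        points_xy = np.asarray(points_xy, dtype=float)
        interp = LinearNDInterpolator(points_xy, np.asarray(elevations, dtype=float))
        x = np.arange(points_xy[:, 0].min(), points_xy[:, 0].max(), grid_res)
        y = np.arange(points_xy[:, 1].min(), points_xy[:, 1].max(), grid_res)
        xx, yy = np.meshgrid(x, y)
        return xx, yy, interp(xx, yy)
    ```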

  13. Review and evaluation of metallic TRU nuclear waste consolidation methods

    SciTech Connect

    Montgomery, D.R.; Nesbitt, J.F.

    1983-08-01

    The US Department of Energy established the Commercial Waste Treatment Program to develop, demonstrate, and deploy waste treatment technology. In this report, viable methods are identified that could consolidate the volume of metallic wastes generated in a fuel reprocessing facility. The purpose of this study is to identify, evaluate, and rate processes that have been or could be used to reduce the volume of contaminated/irradiated metallic waste streams and to produce an acceptable waste form in a safe and cost-effective process. A technical comparative evaluation of various consolidation processes was conducted, and these processes were rated as to the feasibility and cost of producing a viable product from a remotely operated radioactive process facility. Out of the wide variety of melting concepts and consolidation systems that might be applicable for consolidating metallic nuclear wastes, the following processes were selected for evaluation: inductoslag melting, rotating nonconsumable electrode melting, plasma arc melting, electroslag melting with two nonconsumable electrodes, vacuum coreless induction melting, and cold compaction. Each process was evaluated and rated on the criteria of complexity of process, state and type of development required, safety, process requirements, and facility requirements. It was concluded that the vacuum coreless induction melting process is the most viable process to consolidate nuclear metallic wastes. 11 references.

  14. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.

  15. Research on the Comparability of Multi-attribute Evaluation Methods for Academic Journals

    NASA Astrophysics Data System (ADS)

    Liping, Yu

    This paper first constructs a classification framework for multi-attribute evaluation methods oriented to academic journals, and then discusses the comparability of the vast majority of non-linear evaluation methods and the majority of linear evaluation methods theoretically, taking the TOPSIS method as an example and using evaluation data on agricultural journals as a validation exercise. The analysis shows that enough importance should be attached to the comparability of evaluation methods for academic journals; the evaluation objectives are closely related to the choice of evaluation methods and also to their comparability; specialized journal-evaluation organizations should release their evaluation data, evaluation methods and evaluation results as far as possible; and only purely subjective evaluation methods have broad comparability.
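
    Since TOPSIS is used as the worked example, a standard implementation helps make the comparison concrete; the sketch below follows the usual vector-normalised formulation. The journal indicator data and weights from the paper are not reproduced, and the function name is illustrative.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (e.g. journals) by closeness to the ideal solution.

        matrix: alternatives x indicators; weights: indicator weights summing to 1;
        benefit: boolean array, True where larger indicator values are better.
        """
        m = np.asarray(matrix, dtype=float)
        norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalisation
        v = norm * np.asarray(weights, dtype=float)       # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_minus / (d_plus + d_minus)               # higher closeness = better rank
    ```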

  16. [Evaluation of using statistical methods in selected national medical journals].

    PubMed

    Sych, Z

    1996-01-01

    The paper evaluates the frequency with which statistical methods were applied in papers published in six selected national medical journals in the years 1988-1992. The following journals were chosen for analysis: Klinika Oczna, Medycyna Pracy, Pediatria Polska, Polski Tygodnik Lekarski, Roczniki Państwowego Zakładu Higieny, and Zdrowie Publiczne. A number of papers, up to the average of the remaining journals, was randomly selected from the respective volumes of Pol. Tyg. Lek. The analysis did not include papers in which no statistical analysis was implemented, whether national or international publications. Review papers, case reports, reviews of books, handbooks and monographs, reports from scientific congresses, and papers on historical topics were also excluded. The number of papers was determined in each volume. Next, the mode of selecting the study sample in the respective papers was analyzed, differentiating two categories: random and purposive selection. Attention was also paid to the presence of a control sample in the individual papers, as well as to the presence of sample characteristics, classified into three categories: complete, partial and lacking. An effort was made to present the results of the studies in tables and figures (Tab. 1, 3). The rate of employing statistical methods in the analyzed papers in the relevant volumes of the six selected national medical journals for the years 1988-1992 was determined, together with the number of papers in which no statistical methods were used. Concurrently, the frequency of applying the individual statistical methods in the scrutinized papers was analyzed. Prominence was given to fundamental statistical methods in the field of descriptive statistics (measures of position, measures of dispersion) as well as

  17. New aspects for the evaluation of radioactive waste disposal methods

    SciTech Connect

    Seiler, F.A.; Alvarez, J.L.

    1996-12-31

    For the performance assessment of radioactive and hazardous waste disposal sites, risk assessments are usually performed for the long term, i.e., over an interval in space and time for which one can predict movement and behavior of toxic agents in the environment. This approach is based on at least three implicit assumptions: One, that the engineering layout will take care of the immediate endangerment of potential receptors; two, that one has carefully evaluated just how far out in space and time the models can be extrapolated, and three, that one can evaluate potential health effects for very low exposures. A few of these aspects will be discussed here in the framework of the scientific method.

  18. Evaluation of pediatric manual wheelchair mobility using advanced biomechanical methods.

    PubMed

    Slavens, Brooke A; Schnorenberg, Alyssa J; Aurit, Christine M; Graf, Adam; Krzak, Joseph J; Reiners, Kathryn; Vogel, Lawrence C; Harris, Gerald F

    2015-01-01

    There is minimal research of upper extremity joint dynamics during pediatric wheelchair mobility despite the large number of children using manual wheelchairs. Special concern arises with the pediatric population, particularly in regard to the longer duration of wheelchair use, joint integrity, participation and community integration, and transitional care into adulthood. This study seeks to provide evaluation methods for characterizing the biomechanics of wheelchair use by children with spinal cord injury (SCI). Twelve subjects with SCI underwent motion analysis while they propelled their wheelchair at a self-selected speed and propulsion pattern. Upper extremity joint kinematics, forces, and moments were computed using inverse dynamics methods with our custom model. The glenohumeral joint displayed the largest average range of motion (ROM) at 47.1° in the sagittal plane and the largest average superiorly and anteriorly directed joint forces of 6.1% BW and 6.5% BW, respectively. The largest joint moments were 1.4% body weight times height (BW × H) of elbow flexion and 1.2% BW × H of glenohumeral joint extension. Pediatric manual wheelchair users demonstrating these high joint demands may be at risk for pain and upper limb injuries. These evaluation methods may be a useful tool for clinicians and therapists for pediatric wheelchair prescription and training. PMID:25802860

  19. Quantitative methods for somatosensory evaluation in atypical odontalgia.

    PubMed

    Porporatti, André Luís; Costa, Yuri Martins; Stuginski-Barbosa, Juliana; Bonjardim, Leonardo Rigoldi; Conti, Paulo César Rodrigues; Svensson, Peter

    2015-01-01

    A systematic review was conducted to identify reliable somatosensory evaluation methods for atypical odontalgia (AO) patients. The computerized search included the main databases (MEDLINE, EMBASE, and Cochrane Library). The studies included used the following quantitative sensory testing (QST) methods: mechanical detection threshold (MDT), mechanical pain threshold (MPT) (pinprick), pressure pain threshold (PPT), dynamic mechanical allodynia with a cotton swab (DMA1) or a brush (DMA2), warm detection threshold (WDT), cold detection threshold (CDT), heat pain threshold (HPT), cold pain threshold (CPT), and/or wind-up ratio (WUR). The publications meeting the inclusion criteria revealed that only mechanical allodynia tests (DMA1, DMA2, and WUR) were significantly higher and pain thresholds to heat stimulation (HPT) were significantly lower on the affected side, compared with the contralateral side, in AO patients; however, for MDT, MPT, PPT, CDT, and WDT, the results were not significant. These data support the presence of central sensitization features, such as allodynia and temporal summation. In contrast, considerable inconsistencies between studies were found when AO patients were compared with healthy subjects. In clinical settings, the most reliable evaluation method for AO in patients with persistent idiopathic facial pain would be intraindividual assessments using HPT or mechanical allodynia tests. PMID:25627886

  20. Perinatal program evaluations: methods, impacts, and future goals.

    PubMed

    Thomas, Suzanne D; Hudgins, Jodi L; Sutherland, Donald E; Ange, Brittany L; Mobley, Sandra C

    2015-07-01

    The objective of this methodology note is to examine perinatal program evaluation methods as they relate to the life course health development model (LCHD) and risk reduction for poor birth outcomes. We searched PubMed, CDC, ERIC, and a list from the Association of Maternal and Child Health Programs (AMCHP) to identify sources. We included reports from theory, methodology, program reports, and instruments, as well as reviews of Healthy Start Programs and home visiting. Because our review focused upon evaluation methods we did not include reports that described the Healthy Start Program. The LCHD model demonstrates the non-linear relationships among epigenetic factors and environmental interactions, intentionality or worldview within a values framework, health practices, and observed outcomes in a lifelong developmental health trajectory. The maternal epigenetic and social environment during fetal development sets the stage for the infant's lifelong developmental arc. The LCHD model provides a framework to study challenging maternal child health problems. Research that tracks the long term maternal-infant health developmental trajectory is facilitated by multiple, linked public record systems. Two instruments, the life skills progression instrument and the prenatal risk overview are theoretically consistent with the LCHD and can be adapted for local or population-based use. A figure is included to demonstrate a method of reducing interaction among variables by sample definition. Both in-place local programs and tests of best practices in community-based research are needed to reduce unacceptably high infant mortality. Studies that follow published reporting standards strengthen evidence.

  1. Systematic Comparative Evaluation of Methods for Investigating the TCRβ Repertoire.

    PubMed

    Liu, Xiao; Zhang, Wei; Zeng, Xiaojing; Zhang, Ruifang; Du, Yuanping; Hong, Xueyu; Cao, Hongzhi; Su, Zheng; Wang, Changxi; Wu, Jinghua; Nie, Chao; Xu, Xun; Kristiansen, Karsten

    2016-01-01

    High-throughput sequencing has recently been applied to profile the high diversity of antibodyome/B cell receptors (BCRs) and T cell receptors (TCRs) among immune cells. To date, Multiplex PCR (MPCR) and 5'RACE are predominately used to enrich rearranged BCRs and TCRs. Both approaches have advantages and disadvantages; however, a systematic evaluation and direct comparison of them would benefit researchers in the selection of the most suitable method. In this study, we used both pooled control plasmids and spiked-in cells to benchmark the MPCR bias. RNA from three healthy donors was subsequently processed with the two methods to perform a comparative evaluation of the TCR β chain sequences. Both approaches demonstrated high reproducibility (R2 = 0.9958 and 0.9878, respectively). No differences in gene usage were identified for most V/J genes (>60%), and an average of 52.03% of the CDR3 amino acid sequences overlapped. MPCR exhibited a certain degree of bias, in which the usage of several genes deviated from 5'RACE, and some V-J pairings were lost. In contrast, there was a smaller rate of effective data from 5'RACE (11.25% less compared with MPCR). Nevertheless, the methodological variability was smaller compared with the biological variability. Through direct comparison, these findings provide novel insights into the two experimental methods, which will prove to be valuable in immune repertoire research and its interpretation.
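
    The reported CDR3 overlap can be read as a set-overlap statistic; the sketch below uses a Jaccard-style definition, which is only one plausible reading since the abstract does not state the exact denominator used by the authors. The function name and example sequences are illustrative.

    ```python
    def cdr3_overlap_percent(cdr3_mpcr, cdr3_race):
        """Percent overlap of CDR3 amino-acid sequences between two library methods."""
        a, b = set(cdr3_mpcr), set(cdr3_race)
        return 100.0 * len(a & b) / len(a | b) if (a or b) else 0.0

    # Example: overlap between the MPCR and 5'RACE repertoires of one donor.
    # cdr3_overlap_percent(["CASSLGETQYF", "CASSPGQGYEQYF"], ["CASSLGETQYF"]) -> 50.0
    ```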

  2. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogenous hidden Markov model and an analogue based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period
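
    Of the two downscaling approaches named, the analogue method is the simpler to sketch: pick the archived month whose coarse anomaly pattern best matches the forecast anomaly and reuse its daily high-resolution fields. The RMSE matching metric and array shapes below are assumptions, and the non-homogeneous hidden Markov model is not sketched.

    ```python
    import numpy as np

    def analogue_downscale(forecast_anom, archive_anoms, archive_daily_fields):
        """Return the daily fields of the closest historical analogue month.

        forecast_anom: (ny, nx) coarse monthly anomaly; archive_anoms: (n_months, ny, nx);
        archive_daily_fields: sequence of (n_days, ny_hi, nx_hi) arrays, one per month.
        """
        diff = np.asarray(archive_anoms) - np.asarray(forecast_anom)
        rmse = np.sqrt((diff ** 2).mean(axis=(1, 2)))
        best = int(np.argmin(rmse))
        return archive_daily_fields[best]
    ```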

  3. Evaluation of pediatric manual wheelchair mobility using advanced biomechanical methods.

    PubMed

    Slavens, Brooke A; Schnorenberg, Alyssa J; Aurit, Christine M; Graf, Adam; Krzak, Joseph J; Reiners, Kathryn; Vogel, Lawrence C; Harris, Gerald F

    2015-01-01

    There is minimal research of upper extremity joint dynamics during pediatric wheelchair mobility despite the large number of children using manual wheelchairs. Special concern arises with the pediatric population, particularly in regard to the longer duration of wheelchair use, joint integrity, participation and community integration, and transitional care into adulthood. This study seeks to provide evaluation methods for characterizing the biomechanics of wheelchair use by children with spinal cord injury (SCI). Twelve subjects with SCI underwent motion analysis while they propelled their wheelchair at a self-selected speed and propulsion pattern. Upper extremity joint kinematics, forces, and moments were computed using inverse dynamics methods with our custom model. The glenohumeral joint displayed the largest average range of motion (ROM) at 47.1° in the sagittal plane and the largest average superiorly and anteriorly directed joint forces of 6.1% BW and 6.5% BW, respectively. The largest joint moments were 1.4% body weight times height (BW × H) of elbow flexion and 1.2% BW × H of glenohumeral joint extension. Pediatric manual wheelchair users demonstrating these high joint demands may be at risk for pain and upper limb injuries. These evaluation methods may be a useful tool for clinicians and therapists for pediatric wheelchair prescription and training.

  4. Evaluation of roundness error based on improved area hunting method

    NASA Astrophysics Data System (ADS)

    Zhan, Weiwei; Xue, Zi; Wu, Yongbo

    2010-08-01

    Rotary parts are commonly used in the field of precision machinery, and their roundness error can greatly affect installation accuracy, which determines the performance of the machine. It is essential to establish a proper method for evaluating roundness error to ensure accurate assessment. However, various evaluation algorithms are time-consuming, complex and inaccurate, and cannot meet the challenge of precision measurement. In this paper, an improved area hunting method which used the minimum zone circle (MZC), minimum circumscribed circle (MCC) and maximum inscribed circle (MIC) as reference circles was proposed. According to specific area hunting rules for the different reference circles, a new marked point which was closer to the real center of the reference circle was located from the grid cross points around the previous marked point. The searched area decreased and the area hunting process terminated when the iteration accuracy was satisfied. This approach was realized in a precision form measurement instrument developed at NIM. The test results indicated that this improved method was efficient, accurate and can be easily implemented in precision roundness measurement.
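
    The iterative shrinking grid search can be illustrated for the minimum zone circle reference; the hunting rules, grid density and stopping rule below are assumptions for illustration and may differ from the paper's exact scheme (and from its MCC/MIC variants).

    ```python
    import numpy as np

    def minimum_zone_center(points, half_width=1.0, n_grid=5, tol=1e-6):
        """Area-hunting style search for the minimum zone circle centre.

        Scans a small grid of candidate centres around the current mark, keeps the
        one minimising (max radius - min radius), then halves the searched area.
        """
        pts = np.asarray(points, dtype=float)
        center = pts.mean(axis=0)
        best_zone = np.inf
        while half_width > tol:
            xs = np.linspace(center[0] - half_width, center[0] + half_width, n_grid)
            ys = np.linspace(center[1] - half_width, center[1] + half_width, n_grid)
            for cx in xs:
                for cy in ys:
                    r = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
                    zone = r.max() - r.min()
                    if zone < best_zone:
                        best_zone, center = zone, np.array([cx, cy])
            half_width /= 2.0            # shrink the searched area each iteration
        return center, best_zone         # centre estimate and roundness error
    ```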

  5. Systematic Comparative Evaluation of Methods for Investigating the TCRβ Repertoire.

    PubMed

    Liu, Xiao; Zhang, Wei; Zeng, Xiaojing; Zhang, Ruifang; Du, Yuanping; Hong, Xueyu; Cao, Hongzhi; Su, Zheng; Wang, Changxi; Wu, Jinghua; Nie, Chao; Xu, Xun; Kristiansen, Karsten

    2016-01-01

    High-throughput sequencing has recently been applied to profile the high diversity of antibodyome/B cell receptors (BCRs) and T cell receptors (TCRs) among immune cells. To date, Multiplex PCR (MPCR) and 5'RACE are predominately used to enrich rearranged BCRs and TCRs. Both approaches have advantages and disadvantages; however, a systematic evaluation and direct comparison of them would benefit researchers in the selection of the most suitable method. In this study, we used both pooled control plasmids and spiked-in cells to benchmark the MPCR bias. RNA from three healthy donors was subsequently processed with the two methods to perform a comparative evaluation of the TCR β chain sequences. Both approaches demonstrated high reproducibility (R2 = 0.9958 and 0.9878, respectively). No differences in gene usage were identified for most V/J genes (>60%), and an average of 52.03% of the CDR3 amino acid sequences overlapped. MPCR exhibited a certain degree of bias, in which the usage of several genes deviated from 5'RACE, and some V-J pairings were lost. In contrast, there was a smaller rate of effective data from 5'RACE (11.25% less compared with MPCR). Nevertheless, the methodological variability was smaller compared with the biological variability. Through direct comparison, these findings provide novel insights into the two experimental methods, which will prove to be valuable in immune repertoire research and its interpretation. PMID:27019362

  6. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Roberts, J. Brent; Bosilovich, Michael; Lyon, Bradfield

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogenous hidden Markov model and an analogue based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.

  7. Evaluation of Pediatric Manual Wheelchair Mobility Using Advanced Biomechanical Methods

    PubMed Central

    Slavens, Brooke A.; Schnorenberg, Alyssa J.; Aurit, Christine M.; Graf, Adam; Krzak, Joseph J.; Reiners, Kathryn; Vogel, Lawrence C.; Harris, Gerald F.

    2015-01-01

    There is minimal research of upper extremity joint dynamics during pediatric wheelchair mobility despite the large number of children using manual wheelchairs. Special concern arises with the pediatric population, particularly in regard to the longer duration of wheelchair use, joint integrity, participation and community integration, and transitional care into adulthood. This study seeks to provide evaluation methods for characterizing the biomechanics of wheelchair use by children with spinal cord injury (SCI). Twelve subjects with SCI underwent motion analysis while they propelled their wheelchair at a self-selected speed and propulsion pattern. Upper extremity joint kinematics, forces, and moments were computed using inverse dynamics methods with our custom model. The glenohumeral joint displayed the largest average range of motion (ROM) at 47.1° in the sagittal plane and the largest average superiorly and anteriorly directed joint forces of 6.1% BW and 6.5% BW, respectively. The largest joint moments were 1.4% body weight times height (BW × H) of elbow flexion and 1.2% BW × H of glenohumeral joint extension. Pediatric manual wheelchair users demonstrating these high joint demands may be at risk for pain and upper limb injuries. These evaluation methods may be a useful tool for clinicians and therapists for pediatric wheelchair prescription and training. PMID:25802860

  8. Formaldehyde: a comparative evaluation of four monitoring methods

    SciTech Connect

    Coyne, L.B.; Cook, R.E.; Mann, J.R.; Bouyoucos, S.; McDonald, O.F.; Baldwin, C.L.

    1985-10-01

    The performances of four formaldehyde monitoring devices were compared in a series of laboratory and field experiments. The devices evaluated included the DuPont C-60 formaldehyde badge, the SKC impregnated charcoal tube, an impinger/polarographic method and the MDA Lion formaldemeter. The major evaluation parameters included: concentration range, effects of humidity, sample storage, air velocity, accuracy, precision, interferences from methanol, styrene, 1,3-butadiene, sulfur dioxide and dimethylamine. Based on favorable performances in the laboratory and field, each device was useful for monitoring formaldehyde in the industrial work environment; however, these devices were not evaluated for residential exposure assessment. The impinger/polarographic method had a sensitivity of 0.06 ppm, based on a 20-liter air sample volume, and accurately determined the short-term excursion limit (STEL). It was useful for area monitoring but was not very practical for time-weighted average (TWA) personal monitoring measurements. The DuPont badge had a sensitivity of 2.8 ppm-hr and accurately and simply determined TWA exposures. It was not sensitive enough to measure STEL exposures, however, and positive interferences resulted if 1,3-butadiene was present. The SKC impregnated charcoal tube measured both TWA and STEL concentrations and had a sensitivity of 0.06 ppm based on a 25-liter air sample volume. Lightweight and simple to use, the MDA Lion formaldemeter had a sensitivity of 0.2 ppm. It had the advantage of giving an instantaneous reading in the field; however, it must be used with caution because it responded to many interferences. The method of choice depended on the type of sampling required, field conditions encountered during sampling and an understanding of the limitations of each monitoring device.

  9. Improvement of dem Generation from Aster Images Using Satellite Jitter Estimation and Open Source Implementation

    NASA Astrophysics Data System (ADS)

    Girod, L.; Nuth, C.; Kääb, A.

    2015-12-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system embarked on the Terra (EOS AM-1) satellite has been a source of stereoscopic images covering the whole globe at 15 m resolution and consistent quality for over 15 years. The potential of these data in terms of geomorphological analysis and change detection in three dimensions is unrivaled and needs to be exploited. However, the DEMs and ortho-images currently delivered by NASA (ASTER DMO products) are often of insufficient quality for a number of applications such as mountain glacier mass balance. For this study, the use of Ground Control Points (GCPs) or of other ground truth was rejected due to the global "big data" type of processing that we hope to perform on the ASTER archive. We have therefore developed a tool to compute Rational Polynomial Coefficient (RPC) models from the ASTER metadata and a method improving the quality of the matching by identifying and correcting jitter-induced cross-track parallax errors. Our method outputs more accurate DEMs with fewer unmatched areas and reduced overall noise. The algorithms were implemented in the open source photogrammetric library and software suite MicMac.

  10. Tectonic development of the Northwest Bonaparte Basin, Australia by using Digital Elevation Model (DEM)

    NASA Astrophysics Data System (ADS)

    Wahid, Ali; Salim, Ahmed Mohamed Ahmed; Ragab Gaafar, Gamal; Yusoff, AP Wan Ismail Wan

    2016-02-01

    The Bonaparte Basin, which is mostly offshore, is situated on Australia's NW continental margin and covers an area of approximately 270,000 km². The basin, which contains a number of Paleozoic and Mesozoic sub-basins and platform areas, is structurally complex. This research uses geologic and geomorphologic studies based on a Digital Elevation Model (DEM) as an alternative approach to morphostructural analysis for unraveling these geological complexities. Although DEMs have been in use since the 1990s, they have still not become a common tool for mapping studies. The work comprised regional structural analysis using integrated elevation data, satellite imagery, available open topographic images, and internal geological maps with interpreted seismic data. The structural maps of the study area were geo-referenced and overlaid onto SRTM data and satellite images for combined interpretation, yielding a digital elevation model of the study area. The methodology adopted evaluates and redefines the development of the geodynamic processes involved in the formation of the Bonaparte Basin. The main objective is to establish the geological history by using the digital elevation model. The work makes it possible to incorporate tectonic events that occurred at different geological times into a digital elevation model. The integrated tectonic analysis benefitted substantially from combining the different digital data sets into a common digital database, while the visualization software facilitates their overlay and combined interpretation, revealing hidden information not otherwise obvious or accessible for regional analysis.

  11. Evaluation of Uranium Measurements in Water by Various Methods - 13571

    SciTech Connect

    Tucker, Brian J.; Workman, Stephen M.

    2013-07-01

    In December 2000, EPA amended its drinking water regulations for radionuclides by adding a Maximum Contaminant Level (MCL) for uranium (the so-called MCL Rule)[1] of 30 micrograms per liter (μg/L). The MCL Rule also included MCL goals of zero for uranium and other radionuclides. Many radioactively contaminated sites must test uranium in wastewater and groundwater to comply with the MCL Rule as well as local publicly owned treatment works discharge limitations. This paper addresses the relative sensitivity, accuracy, precision, cost and comparability of two EPA-approved methods for detection of total uranium: inductively coupled plasma mass spectrometry (ICP-MS) and alpha spectrometry. Both methods are capable of measuring the individual uranium isotopes U-234, U-235, and U-238, and both methods have been deemed acceptable by EPA. However, U-238 is by far the primary contributor to the mass-based ICP-MS measurement, especially for naturally occurring uranium, which contains 99.2745% U-238. An evaluation shall be performed relative to the regulatory requirement promulgated by EPA in December 2000. Data will be garnered from various client sample results measured by ALS Laboratory in Fort Collins, CO. Data shall include method detection limits (MDL), minimum detectable activities (MDA), means and trends in laboratory control sample results, performance evaluation data for all methods, and replicate results. In addition, a comparison will be made of sample analysis results obtained from both alpha spectrometry and the screening method Kinetic Phosphorescence Analysis (KPA) performed at the U.S. Army Corps of Engineers (USACE) FUSRAP Maywood Laboratory (UFML). Many uranium measurements occur in laboratories that only perform radiological analysis. This work is important because it shows that uranium can be measured in radiological as well as stable chemistry laboratories, and it provides several criteria as a basis for comparison of two uranium test methods. This data will
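
    Relating the mass-based ICP-MS result to the activity-based alpha-spectrometry result relies on isotope specific activities, which follow from A = λN. A minimal sketch, using textbook half-lives and treating molar masses as the mass numbers, is shown below; the 30 μg/L figure is the MCL mass value from the abstract, and assuming pure U-238 is an illustration rather than a real sample composition.

```python
import numpy as np

AVOGADRO = 6.02214076e23      # atoms per mole
SECONDS_PER_YEAR = 3.15576e7

def specific_activity_bq_per_g(half_life_years, molar_mass_g_per_mol):
    """Specific activity A/m = ln(2) * N_A / (T_half * M), in Bq per gram."""
    decay_constant = np.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_constant * AVOGADRO / molar_mass_g_per_mol

# Textbook half-lives (years); molar masses approximated by the mass numbers
isotopes = {"U-234": (2.455e5, 234), "U-235": (7.04e8, 235), "U-238": (4.468e9, 238)}

for name, (t_half, mass) in isotopes.items():
    print(f"{name}: {specific_activity_bq_per_g(t_half, mass):.3e} Bq/g")

# Example: activity concentration of 30 micrograms/L if it were pure U-238
u238_bq_per_l = 30e-6 * specific_activity_bq_per_g(4.468e9, 238)
print(f"30 ug/L of U-238 ~ {u238_bq_per_l:.2f} Bq/L")
```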

  12. Further evaluation of the constrained least squares electromagnetic compensation method

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1991-01-01

    Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers a means of mitigating surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15-meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
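
    The constrained least-squares formulation evaluated in the report is not reproduced here. As a loose illustration of the underlying idea of solving for feed-array weights that best reproduce a desired pattern, the sketch below uses a synthetic response matrix and substitutes Tikhonov (ridge) regularization for the report's actual constraint; all dimensions and values are assumptions.

```python
import numpy as np

# Illustrative dimensions: pattern sample points vs. feed elements (synthetic)
rng = np.random.default_rng(1)
n_samples, n_feeds = 200, 16

# A[i, j]: assumed complex response of feed j at observation point i
A = rng.standard_normal((n_samples, n_feeds)) + 1j * rng.standard_normal((n_samples, n_feeds))

# b: desired pattern samples; here simply the pattern of a nominal weight vector
w_nominal = np.ones(n_feeds, dtype=complex)
b = A @ w_nominal

# Unconstrained least-squares weights
w_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# A stand-in "constraint": limit total feed power via Tikhonov regularization
lam = 0.1
w_reg = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_feeds), A.conj().T @ b)

print("residual (least squares):", np.linalg.norm(A @ w_ls - b))
print("residual (regularized):  ", np.linalg.norm(A @ w_reg - b))
print("weight norms:", np.linalg.norm(w_ls), np.linalg.norm(w_reg))
```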

  13. Method for evaluating moisture tensions of soils using spectral data

    NASA Technical Reports Server (NTRS)

    Peterson, John B. (Inventor)

    1982-01-01

    A method is disclosed which permits evaluation of soil moisture utilizing remote sensing. Spectral measurements at a plurality of different wavelengths are taken for sample soils, and the resulting bidirectional reflectance factor (BRF) measurements are submitted to regression analysis to develop prediction equations from the observed relationships. Soil of unknown reflectance and unknown moisture tension is thereafter analyzed for bidirectional reflectance, and the resulting data are used to determine the soil moisture tension of the soil as well as to predict the bidirectional reflectance of the soil at other moisture tensions.
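
    A minimal sketch of the regression step, fitting moisture tension on multi-wavelength BRF values and then predicting the tension of a new soil, is given below. The data, band count and coefficients are synthetic assumptions; the patent's actual prediction equations are not reproduced.

```python
import numpy as np

# Synthetic training data: BRF at several wavelengths (rows = soil samples)
rng = np.random.default_rng(2)
n_samples, n_bands = 40, 4
brf = rng.uniform(0.05, 0.45, size=(n_samples, n_bands))

# Assumed relationship for this synthetic example: drier soil (higher tension)
# reflects more, so tension increases with mean BRF, plus noise
moisture_tension = 1500.0 * brf.mean(axis=1) + 20.0 * rng.standard_normal(n_samples)

# Ordinary least-squares fit of tension on the BRF bands (with intercept)
X = np.column_stack([np.ones(n_samples), brf])
coeffs, *_ = np.linalg.lstsq(X, moisture_tension, rcond=None)

# Predict tension for a new soil of unknown moisture from its measured BRF
new_brf = np.array([0.12, 0.18, 0.22, 0.30])
predicted = coeffs[0] + new_brf @ coeffs[1:]
print("predicted moisture tension (arbitrary units):", round(float(predicted), 1))
```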

  14. Pitch-Up Problem: A Criterion and Method of Evaluation

    NASA Technical Reports Server (NTRS)

    Sadoff, Melvin

    1959-01-01

    A method has been described for predicting the probable relative severity of pitch-up of a new airplane design prior to initial flight tests. An illustrative example has been presented which demonstrated the use of this procedure for evaluating the pitch-up behavior of a large, relatively flexible airplane. It has also been shown that for airplanes for which a mild pitch-up tendency is predicted, the wing and tail loads likely to be encountered in pitch-up maneuvers would not assume critical values, even for pilots unfamiliar with pitch-up.

  15. Evolution of methods for evaluating the occurrence of floods

    USGS Publications Warehouse

    Benson, M.A.

    1962-01-01

    A brief summary is given of the history of methods of expressing flood potentialities, proceeding from simple flood formulas to statistical methods of flood-frequency analysis on a regional basis. Current techniques are described and evaluated. Long-term flood records in the United States show no justification for the adoption of a single type of theoretical distribution of floods. The significance and predictive values of flood-frequency relations are considered. Because of the length of flood records available and the interdependence of flood events within a region, the probable long-term average magnitudes of floods of a given recurrence interval are uncertain. However, if the magnitudes defined by the records available are accepted, the relative effects of drainage-basin characteristics and climatic variables can be determined with a reasonable degree of assurance.

  16. Non-destructive evaluation method employing dielectric electrostatic ultrasonic transducers

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Cantrell, Jr., John H. (Inventor)

    2003-01-01

    An acoustic nonlinearity parameter (β) measurement method and system for Non-Destructive Evaluation (NDE) of materials and structural members novelly employs a loosely mounted dielectric electrostatic ultrasonic transducer (DEUT) to receive and convert ultrasonic energy into an electrical signal which can be analyzed to determine the β of the test material. The dielectric material is ferroelectric with a high dielectric constant ε. A computer-controlled measurement system coupled to the DEUT contains an excitation signal generator section and a measurement and analysis section. As a result, the DEUT measures the absolute particle displacement amplitudes in the test material, leading to derivation of the nonlinearity parameter (β) without the costly, low-field-reliability methods of the prior art.
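
    For context, the acoustic nonlinearity parameter of a longitudinal wave is commonly derived from the measured fundamental and second-harmonic displacement amplitudes via the standard relation beta = 8*A2/(k^2 * x * A1^2). The sketch below evaluates that textbook relation with illustrative numbers; it is not the patented system's signal processing, and the material properties are assumptions.

```python
import numpy as np

def beta_from_harmonics(a1, a2, frequency_hz, sound_speed_m_s, distance_m):
    """Acoustic nonlinearity parameter from absolute displacement amplitudes.

    Uses the standard relation beta = 8 * A2 / (k**2 * x * A1**2), where
    k = 2*pi*f/c is the fundamental wavenumber and x the propagation distance.
    """
    k = 2 * np.pi * frequency_hz / sound_speed_m_s
    return 8.0 * a2 / (k**2 * distance_m * a1**2)

# Illustrative values only (not measured data)
a1 = 10e-9    # fundamental displacement amplitude, m
a2 = 25e-12   # second-harmonic displacement amplitude, m
f = 5e6       # drive frequency, Hz
c = 6300.0    # assumed longitudinal sound speed in the test material, m/s
x = 0.02      # propagation distance, m

print("beta =", round(beta_from_harmonics(a1, a2, f, c, x), 2))
```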

  17. Clinical behavioral pharmacology: methods for evaluating medications and contingency management.

    PubMed Central

    Burgio, L D; Page, T J; Capriotti, R M

    1985-01-01

    We evaluated methods for comparing the effects of dextroamphetamine (Dexedrine), thioridazine (Mellaril), and contingency management in the control of severe behavior problems. A reversal design was used in which medications were systematically titrated and assessed in unstructured as well as structured settings with three clients. Subsequently, behavioral procedures including timeout, differential reinforcement of other behavior, and visual screening, were used in a multiple-baseline design across settings. The assessment and design methods were useful in comparing the interventions. Dextroamphetamine decreased inappropriate behaviors and improved academic behaviors in one client, but no reliable effects were observed in the other two clients. Thioridazine was variable across clients, settings, behaviors, and dosages. Contingency management produced consistent decreases in inappropriate behaviors and small improvements in academic performance. PMID:3997697

  18. Infrared non-destructive evaluation method and apparatus

    SciTech Connect

    Baleine, Erwan; Erwan, James F; Lee, Ching-Pang; Stinelli, Stephanie

    2014-10-21

    A method of nondestructive evaluation and related system. The method includes arranging a test piece (14) having an internal passage (18) and an external surface (15) and a thermal calibrator (12) within a field of view (42) of an infrared sensor (44); generating a flow (16) of fluid characterized by a fluid temperature; exposing the test piece internal passage (18) and the thermal calibrator (12) to fluid from the flow (16); capturing infrared emission information of the test piece external surface (15) and of the thermal calibrator (12) simultaneously using the infrared sensor (44), wherein the test piece infrared emission information includes emission intensity information, and wherein the thermal calibrator infrared emission information includes a reference emission intensity associated with the fluid temperature; and normalizing the test piece emission intensity information against the reference emission intensity.

  19. A Method to Evaluate Hormesis in Nanoparticle Dose-Responses

    PubMed Central

    Nascarella, Marc A.; Calabrese, Edward J.

    2012-01-01

    The term hormesis describes a dose-response relationship that is characterized by a response that is opposite above and below the toxicological or pharmacological threshold. Previous reports have shown that this relationship is ubiquitous in the response of pharmaceuticals, metals, organic chemicals, radiation, and physical stressor agents. Recent reports have also indicated that certain nanoparticles (NPs) may also exhibit a hormetic dose-response. We describe the application of three previously described methods to quantify the magnitude of the hormetic biphasic dose-responses in nanotoxicology studies. This methodology is useful in screening assays that attempt to parse the observed toxicological dose-response data into categories based on the magnitude of hormesis in the evaluation of NPs. For example, these methods may be used to quickly identify NP induced hormetic responses that are either desirably enhanced (e.g., neuronal cell viability) or undesirably stimulated (e.g., low dose stimulation of tumor cells). PMID:22942868

  20. Scatterscore : A reconnaissance method to evaluate changes in water quality

    SciTech Connect

    Kim, A.G.; Cardone, C.R.

    2005-12-01

    Water quality data collected in periodic monitoring programs are often difficult to evaluate, especially if the number of parameters is large, the sampling schedule varies, and values are of different orders of magnitude. The Scatterscore Water Quality Evaluation was developed to yield a quantitative score, based on all measured variables in periodic water quality reports, indicating positive, negative or random change. This new methodology calculates a reconnaissance score based on the differences between up-gradient (control) and down-gradient (treatment) water quality data sets. All parameters measured over a period of time at two or more sampling points are compared. The relationship between the ranges of measured values and the ratio of the medians for each parameter produces a data point that falls into one of four sections on a scattergram. The number and average values of positive, negative and random change points are used to calculate a Scatterscore that indicates the magnitude and direction of overall change in water quality. The Scatterscore Water Quality Evaluation, a reconnaissance method to track general changes, has been applied to 20 sites at which coal utilization by-products (CUB) were used to control acid mine drainage (AMD).
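
    The published Scatterscore calculation is not reproduced here. The sketch below only illustrates the general idea of classifying each parameter from the ratio of medians and the overlap of ranges between up-gradient and down-gradient data sets; the classification rule, the 0.5 overlap threshold and the synthetic data are all assumptions.

```python
import numpy as np

def classify_parameter(control, treatment):
    """Classify one parameter's change from the median ratio and range overlap.

    Illustrative stand-in for the scattergram classification: the change is called
    'positive' or 'negative' when the medians differ and the ranges barely overlap,
    and 'random' otherwise. (Whether lower values are an improvement is
    parameter-dependent; this simplification assumes lower is better.)
    """
    ratio = np.median(treatment) / np.median(control)
    overlap = min(control.max(), treatment.max()) - max(control.min(), treatment.min())
    span = max(control.max(), treatment.max()) - min(control.min(), treatment.min())
    distinct = (overlap / span < 0.5) if span > 0 else False
    if distinct and ratio < 1.0:
        return "positive", ratio
    if distinct and ratio > 1.0:
        return "negative", ratio
    return "random", ratio

# Synthetic monitoring data (mg/L) for two parameters, up- vs. down-gradient
rng = np.random.default_rng(3)
data = {
    "iron":    (rng.normal(12.0, 1.5, 24), rng.normal(6.0, 1.0, 24)),
    "sulfate": (rng.normal(900.0, 80.0, 24), rng.normal(880.0, 90.0, 24)),
}

for name, (up, down) in data.items():
    label, ratio = classify_parameter(up, down)
    print(f"{name}: {label} (median ratio = {ratio:.2f})")
```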

  1. Application of CryoSat-2 data product for DEM generation in Dome-A summit area, Antarctica

    NASA Astrophysics Data System (ADS)

    fang, W.; Cheng, X.; Hui, F.

    2012-12-01

    Currently available Digital Elevation Models (DEMs) of Dome A were originally derived from satellite altimetry data (ERS-1/2, GLAS/ICESat) and later improved by GPS measurements. Their relatively low resolution and coverage pose a problem, especially for regional research. CryoSat-2, carrying SIRAL (SAR Interferometric Radar Altimeter), was launched on 8 April 2010 and provides an alternative for high-density, high-accuracy acquisition of terrain point data. The inclination of the satellite's orbit is 92°, so the orbit reaches latitudes of up to 88°. The repeat period of 369 days, with a 30-day sub-cycle, provides a high orbit crossover density (10 crossovers per km² per year at 87°). In this study, we collected ten months (March to December 2011) of successive CryoSat-2 Low Rate Mode level 2 (LRM L2) datasets. Two types of filters were applied to remove elevation outliers; these filtering procedures excluded 5.95% of the original data. Given the distribution of the point data, a gridded DEM generated by ordinary kriging interpolation at a 200 m grid resolution was chosen for this study. Finally, using the satellite's monthly revisits with non-repeated coverage, we present a novel DEM of 900 km² in the Dome A region centered at Kunlun Station (80°25′01″S, 77°06′58″E). It shows that the topography of the Dome A region is saddle-shaped, with a northern peak and a southern peak. We used a subtraction method to compare the novel DEM with the previous, GPS-based DEM. The elevation differences exhibit a positive average bias, which may be due to the penetration of the Ku-band radar wave into the soft snow. As a first approximation based on the statistics of the height differences, we estimate that the average penetration depth of the CryoSat-2 Ku-band wave in this area is 1 m. (Figure: map of surface topography over the Dome A region generated from CryoSat-2 data; contours are smoothed.)
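
    A minimal sketch of the DEM-differencing step, estimating the mean bias and RMSE between two already co-registered elevation grids, is shown below with synthetic arrays. Interpreting a mean bias as Ku-band penetration depth is a separate, physical step not captured by the code.

```python
import numpy as np

# Synthetic, already co-registered 200 m grids (metres); dem_eval stands in for a
# radar-altimetry DEM and dem_ref for a GPS-based reference (values are made up)
rng = np.random.default_rng(4)
dem_ref = 4090.0 + rng.normal(0.0, 0.5, size=(150, 150))
dem_eval = dem_ref + 1.0 + rng.normal(0.0, 0.3, size=dem_ref.shape)

# Difference the grids where both are valid and summarise the statistics
diff = dem_eval - dem_ref
valid = np.isfinite(diff)
mean_bias = diff[valid].mean()
rmse = np.sqrt(np.mean(diff[valid] ** 2))

print(f"mean elevation bias: {mean_bias:+.2f} m   RMSE of differences: {rmse:.2f} m")
```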

  2. Assessment of Reference Height Models on Quality of TanDEM-X DEM

    NASA Astrophysics Data System (ADS)

    Mirzaee, S.; Motagh, M.; Arefi, H.

    2015-12-01

    The aim of this study is to investigate the effect of various Global Digital Elevation Models (GDEMs) in producing a high-resolution topography model using TanDEM-X (TDX) Coregistered Single Look Slant Range Complex (CoSSC) images. We selected an image acquired on 12 June 2012 over the Doroud region in Lorestan, west of Iran, and used four external digital elevation models in our processing: the DLR/ASI X-SAR DEM (SRTM-X, 30 m resolution), ASTER GDEM Version 2 (ASTER-GDEMV2, 30 m resolution), NASA SRTM Version 4 (SRTM-V4, 90 m resolution), and a local photogrammetry-based DEM prepared by the National Cartographic Center of Iran (NCC DEM, 10 m resolution). The InSAR procedure for DEM generation was repeated four times, once with each of the four external height references. The quality of each external DEM was initially assessed using filtered ICESat points. Then, the quality of each TDX-based DEM was assessed using the more precise external DEM selected in the previous step. Results showed that both the local (NCC) DEM and SRTM X-band performed best (RMSE < 9 m) for TDX DEM generation. In contrast, ASTER GDEM v2 and SRTM C-band v4 showed poorer quality.

  3. Evaluation of mercury speciation by EPA (Draft) Method 29

    SciTech Connect

    Laudal, D.L.; Heidt, M.K.; Nott, B.

    1995-11-01

    The 1990 Clean Air Act Amendments require that the U.S. Environmental Protection Agency (EPA) assess the health risks associated with mercury emissions. Also, the law requires a separate assessment of the health risks posed by the emission of 189 trace chemicals (including mercury) from electric utility steam-generating units. In order to conduct a meaningful assessment of health and environmental effects, we must have, among other things, a reliable and accurate method to measure mercury emissions. In addition, the rate of mercury deposition and the type of control strategies used may depend upon the type of mercury emitted (i.e., whether it is in the oxidized or elemental form). It has been speculated that EPA (Draft) Method 29 can speciate mercury by selective absorption; however, this claim has yet to be proven. The Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE) have contracted with the Energy & Environmental Research Center (EERC) at the University of North Dakota to evaluate EPA (Draft) Method 29 at the pilot-scale level. The objective of the work is to determine whether EPA (Draft) Method 29 can reliably quantify and speciate mercury in the flue gas from coal-fired boilers.

  4. How to evaluate the separation efficiency of CZE methods?

    PubMed

    Büttner-Merz, C; Holzgrabe, U

    2011-06-01

    In trying to estimate the separation efficiency of capillary electrophoresis (CE) methods, the resolution (RS), the number of theoretical plates (N) and the peak-to-valley ratio (p/v) are often used as assessment criteria. This study demonstrates that these criteria are not as suitable for describing the separation efficiency of capillary zone electrophoresis (CZE) methods as they are for liquid chromatography (LC) methods. The investigations were performed by means of a validated CZE method for the evaluation of tetracyclines and their related substances. Four impurities of tetracycline hydrochloride are described in the European Pharmacopoeia. Three were found in the sample used for our investigations, i.e. epi-tetracycline formed by keto-enol tautomerism, anhydrotetracycline and epi-anhydrotetracycline. It could be shown that higher values of these assessment criteria, such as RS, do not necessarily represent better separation. Thus, a discussion on the usefulness of separation selectivity and efficiency as assessment criteria for capillary electrophoresis, as well as on the introduction of additional parameters, is needed.
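
    For reference, the conventional criteria named above can be computed from migration times and peak widths with the standard relations Rs = 2(t2 - t1)/(w1 + w2) and N = 5.54 (t / w_half)^2. The values below are illustrative, not data from the cited study.

```python
def resolution(t1, w1, t2, w2):
    """Resolution from migration times and baseline peak widths: Rs = 2(t2 - t1)/(w1 + w2)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

def plate_number(t, w_half):
    """Theoretical plates from migration time and width at half height: N = 5.54 (t/w_half)^2."""
    return 5.54 * (t / w_half) ** 2

# Illustrative migration data (minutes) for a main peak and an adjacent impurity peak
t_main, w_main, w_half_main = 11.8, 0.40, 0.24
t_impurity, w_impurity = 11.2, 0.38

print(f"Rs (impurity / main) = {resolution(t_impurity, w_impurity, t_main, w_main):.2f}")
print(f"N (main peak)        = {plate_number(t_main, w_half_main):,.0f}")
```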

  5. Development of nondestructive evaluation methods for structural ceramics

    SciTech Connect

    Ellingson, W.A.; Koehl, R.D.; Wilson, J.A.; Stuckey, J.B.; Engel, H.P. |

    1996-04-01

    Nondestructive evaluation (NDE) methods using three-dimensional microfocus X-ray computed tomographic imaging (3D XCT) were employed to map axial and radial density variations in hot-gas filters and heat exchanger tubes. 3D XCT analysis was conducted on (a) two 38-mm-OD, 6.5-mm-wall SiC/SiC heat exchanger tubes infiltrated by CVI; (b) eight 10-cm-diameter oxide/oxide heat exchanger tubes; and (c) one 26-cm-long Nextel fiber/SiC matrix hot-gas filter. The results show that radial and axial density uniformity, as well as porosity, can be assessed by 3D XCT. NDE methods are also under development to assess thermal barrier coatings, which are being developed to protect gas-turbine first-stage hot-section metallic substrates. Further, because both shop and field joining of CFCC materials will be necessary, work is now beginning on the development of NDE methods for joining.

  6. Quality Evaluation of Pork with Various Freezing and Thawing Methods.

    PubMed

    Ku, Su Kyung; Jeong, Ji Yun; Park, Jong Dae; Jeon, Ki Hong; Kim, Eun Mi; Kim, Young Boong

    2014-01-01

    In this study, the physicochemical and sensory quality characteristics of electro-magnetic and air blast frozen pork under various thawing methods were examined. The packaged pork samples, which were frozen by air blast freezing at -45℃ or electro-magnetic freezing at -55℃, were thawed using 4 different methods: refrigeration (4±1℃), room temperature (RT, 25℃), cold water (15℃), and microwave (2450 MHz). Analyses were carried out to determine the drip and cooking loss, water holding capacity (WHC), moisture content and sensory evaluation. Frozen pork thawed in a microwave showed relatively less thawing loss (0.63-1.24%) than the other thawing methods (0.68-1.38%). The cooking loss of ham after electro-magnetic freezing was 37.4% with microwave thawing, compared with 32.9% with refrigeration, 36.5% with RT, and 37.2% with cold water. Thawing of samples frozen by electro-magnetic freezing showed no significant differences between the methods used, although the moisture content of belly after electro-magnetic freezing was higher with microwave thawing (62.0%) than with refrigeration (54.8%), RT (61.3%), or cold water (61.1%). The highest overall acceptability was shown for microwave thawing after electro-magnetic freezing, but there were no significant differences compared with the other samples.

  7. An experimental database for evaluating PIV uncertainty quantification methods

    NASA Astrophysics Data System (ADS)

    Warner, Scott; Neal, Douglas; Sciacchitano, Andrea

    2014-11-01

    Uncertainty quantification for particle image velocimetry (PIV) data has recently become a topic of great interest as shown by the publishing of several different methods within the past few years. A unique experiment has been designed to test the efficacy of PIV uncertainty methods, using a rectangular jet as the flow field. The novel aspect of the experimental setup consists of simultaneous measurements by means of two different time-resolved PIV systems and a hot-wire anemometer (HWA). The first PIV system, called the ``PIV-Measurement'' system, collects the data for which uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of many PIV experiments. The second PIV system, called the ``PIV-HDR'' (high dynamic range) system, has a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire was placed in close proximity to the PIV measurement domain. All three of the measurement systems were carefully set to simultaneously collect time-resolved data on a point-by-point basis. The HWA validates the PIV-HDR system as the reference velocity so that it can be used to evaluate the instantaneous error in the PIV-measurement system.

  8. A power flow method for evaluating vibration from underground railways

    NASA Astrophysics Data System (ADS)

    Hussein, M. F. M.; Hunt, H. E. M.

    2006-06-01

    One of the major sources of ground-borne vibration is the running of trains in underground railway tunnels. Vibration is generated at the wheel-rail interface, from where it propagates through the tunnel and surrounding soil into nearby buildings. An understanding of the dynamic interfaces between track, tunnel and soil is essential before engineering solutions to the vibration problem can be found. A new method has been developed to evaluate the effectiveness of vibration countermeasures. The method is based on calculating the mean power flow from the tunnel, paying attention to that part of the power which radiates upwards to places where buildings' foundations are expected to be found. The mean power is calculated for an infinite train moving through the tunnel with a constant velocity. An elegant mathematical expression for the mean power flow is derived, which can be used with any underground-tunnel model. To evaluate the effect of vibration countermeasures and track properties on power flow, a comprehensive three-dimensional analytical model is used. It consists of Euler-Bernoulli beams to account for the rails and the track slab. These are coupled in the wavenumber-frequency domain to a thin shell representing the tunnel embedded within an infinite continuum, with a cylindrical cavity representing the surrounding soil.

  9. Single well tracer method to evaluate enhanced recovery

    DOEpatents

    Sheely, Jr., Clyde Q.; Baldwin, Jr., David E.

    1978-01-01

    Data useful to evaluate the effectiveness of or to design an enhanced recovery process (the recovery process involving mobilizing and moving hydrocarbons through a hydrocarbon-bearing subterranean formation from an injection well to a production well by injecting a mobilizing fluid into the injection well) are obtained by a process which comprises sequentially: determining hydrocarbon saturation in the formation in a volume in the formation near a well bore penetrating the formation, injecting sufficient of the mobilizing fluid to mobilize and move hydrocarbons from a volume in the formation near the well bore penetrating the formation, and determining by the single well tracer method a hydrocarbon saturation profile in a volume from which hydrocarbons are moved. The single well tracer method employed is disclosed by U.S. Pat. No. 3,623,842. The process is useful to evaluate surfactant floods, water floods, polymer floods, CO2 floods, caustic floods, micellar floods, and the like in the reservoir in much less time and at greatly reduced cost, compared to conventional multi-well pilot tests.

  10. Evaluating survey instruments and methods in a steep channel

    NASA Astrophysics Data System (ADS)

    Scott, Daniel N.; Brogan, Daniel J.; Lininger, Katherine B.; Schook, Derek M.; Daugherty, Ellen E.; Sparacino, Matthew S.; Patton, Annette I.

    2016-11-01

    Methods for surveying and analyzing channel bed topography commonly lack a rigorous characterization of their appropriateness for project objectives. We compare four survey methods: a hand level, two different methods of surveying with a laser rangefinder, and a real-time kinematic GNSS (RTK-GNSS) to explore their accuracy in determining channel bed slope and roughness for a study reach in a small, dry, steep channel. Additionally, we evaluate the variability among four operators for each survey technique. Two methods of calculating reach slope were computed: a regression on the channel profile and a calculation using only survey endpoints. Using data from the RTK-GNSS as our accuracy reference, the hand level and two-person laser rangefinder surveying systems performed with high accuracy (< 5% error in estimating slope, < 10% error in estimating roughness), while the one-person laser rangefinder survey system performed with considerably lower accuracy (up to 54% error in roughness and slope). Variability between operators was found to be very low (coefficients of variation ranged from 0.001 to 0.046) for all survey systems except the one-person laser rangefinder system, suggesting that survey data collected by different operators can be validly compared. Due to reach-scale concavity, calculating slope using a regression produced significantly different values than those obtained by using only survey endpoints, suggesting that caution must be taken in choosing the most appropriate method of calculating slope for a given project objective. We present recommendations for choosing appropriate survey and analysis methods to accomplish various surveying objectives.
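
    A minimal sketch of the two slope calculations compared in the study, a regression over the whole profile versus the survey endpoints only, is shown below on a synthetic concave profile; the profile shape and noise level are assumptions.

```python
import numpy as np

# Synthetic longitudinal profile: distance downstream (m) and bed elevation (m)
rng = np.random.default_rng(5)
distance = np.linspace(0.0, 100.0, 41)
# Concave reach (steeper upstream than downstream) plus a little survey noise
elevation = 5.0 + 15.0 * np.exp(-distance / 40.0) + rng.normal(0.0, 0.05, distance.size)

# Method 1: slope from a linear regression over the whole profile
slope_regression = -np.polyfit(distance, elevation, 1)[0]

# Method 2: slope from the survey endpoints only
slope_endpoints = (elevation[0] - elevation[-1]) / (distance[-1] - distance[0])

print(f"regression slope: {slope_regression:.4f} m/m")
print(f"endpoint slope:   {slope_endpoints:.4f} m/m")
```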

  11. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) has become an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in the chemical engineering and mining industries. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver, SediFoam, is detailed. This solver is built on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in simulations using up to O(10^7) particles.

  12. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    SciTech Connect

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals; (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical release are described.

  13. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    SciTech Connect

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals; (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical release are described.

  14. Using crowdsourcing to evaluate published scientific literature: methods and example.

    PubMed

    Brown, Andrew W; Allison, David B

    2014-01-01

    Systematically evaluating scientific literature is a time consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4 and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and attention the paper received as measured by Scimago Journal Rank and citation counts. Direct microworker costs totaled $221.75, (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low

  15. Using crowdsourcing to evaluate published scientific literature: methods and example.

    PubMed

    Brown, Andrew W; Allison, David B

    2014-01-01

    Systematically evaluating scientific literature is a time consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4 and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and attention the paper received as measured by Scimago Journal Rank and citation counts. Direct microworker costs totaled $221.75, (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low

  16. Development of nondestructive evaluation methods for hot gas filters.

    SciTech Connect

    Ellingson, W. A.; Koehl, E. R.; Sun, J. G.; Deemer, C.; Lee, H.; Spohnholtz, T.; Energy Technology

    1999-01-01

    Rigid ceramic hot gas candle filters are currently under development for high-temperature hot gas particulate cleanup in advanced coal-based power systems. The ceramic materials for these filters include monolithics (usually non-oxides), oxide and non-oxide fiber-reinforced composites, and recrystallized silicon carbide. A concern of end users in using these types of filters, where over 3000 may be used in a single installation, is the lack of a data base on which to base decisions for reusing, replacing or predicting remaining life during plant shutdowns. One method to improve confidence of usage is to develop nondestructive evaluation (NDE) technology to provide surveillance methods for determination of the extent of damage or of life-limiting characteristics such as thermal fatigue, oxidation, damage from ash bridging such as localized cracking, damage from local burning, and elongation at elevated temperatures. Although in situ NDE methods would be desirable in order to avoid disassembly of the candle filter vessels, the possible presence of filter cakes and/or ash bridging, and the state of current NDE technology prevent this. Thus, off-line NDE methods, if demonstrated to be reliable, fast and cost effective, could be a significant step forward in developing confidence in utilization of rigid ceramic hot gas filters. Recently, NDE methods have been developed which show promise of providing information to build this confidence. Acousto-ultrasound, a totally nondestructive method, together with advanced digital signal processing, has been demonstrated to provide excellent correlation with remaining strength on new, as-produced filters, and for detecting damage in some monolithic filters when removed from service. Thermal imaging, with digital signal processing for determining through-wall thermal diffusivity, has also been demonstrated to correlate with remaining strength in both new (as-received) and in-service filters. Impact acoustic resonance using a

  17. Evaluation of alternative methods for the disinfection of toothbrushes.

    PubMed

    Komiyama, Edson Yukio; Back-Brito, Graziella Nuernberg; Balducci, Ivan; Koga-Ito, Cristiane Yumi

    2010-01-01

    The aim of this study was to evaluate alternative methods for the disinfection of toothbrushes, considering that most of the previously proposed methods are expensive and cannot be easily implemented. Two hundred toothbrushes with standardized dimensions and bristles were included in the study. The toothbrushes were divided into 20 experimental groups (n = 10), according to the microorganism considered and the chemical agent used. The toothbrushes were contaminated in vitro by standardized suspensions of Streptococcus mutans, Streptococcus pyogenes, Staphylococcus aureus or Candida albicans. The following disinfectants were tested: 0.12% chlorhexidine digluconate, 50% white vinegar, a triclosan-containing dentifrice solution, and a perborate-based tablet solution. The disinfection method was immersion in the disinfectant for 10 min. After the disinfection procedure, the number of remaining microbial cells was evaluated. The values of cfu/toothbrush for each microorganism group after disinfection were compared by Kruskal-Wallis ANOVA and Dunn's test for multiple comparisons (5%). The chlorhexidine digluconate solution was the most effective disinfectant. The triclosan-based dentifrice solution promoted a significant reduction of all microorganisms' counts in relation to the control group. As to the disinfection with 50% vinegar, a significant reduction was observed for all the microorganisms, except for C. albicans. The sodium perborate solution was the least effective against the tested microorganisms. Solutions based on triclosan-containing dentifrice may be considered an effective, nontoxic, cost-effective, and easily applicable alternative for the disinfection of toothbrushes. The vinegar solution reduced the presence of S. aureus, S. mutans and S. pyogenes on toothbrushes. PMID:20339710

  18. Development of nondestructive evaluation methods for structural ceramics

    SciTech Connect

    Ellingson, W.A.; Koehl, R.D.; Stuckey, J.B.; Sun, J.G.; Engel, H.P.; Smith, R.G.

    1997-06-01

    Development of nondestructive evaluation (NDE) methods for application to fossil energy systems continues in three areas: (a) mapping axial and radial density gradients in hot gas filters, (b) characterization of the quality of continuous fiber ceramic matrix composite (CFCC) joints, and (c) characterization and detection of defects in thermal barrier coatings. In this work, X-ray computed tomographic imaging was further developed and used to map variations in the axial and radial density of two full-length (2.3-m) hot gas filters. The two filters differed in through-wall density because of the thickness of the coating on the continuous fibers. Differences in axial and through-wall density were clearly detected. Through-transmission infrared imaging with a highly sensitive focal-plane-array camera was used to assess joint quality in two sets of SiC/SiC CFCC joints. High-frame-rate data capture suggests that the infrared imaging method holds potential for the characterization of CFCC joints. Work to develop NDE methods that can be used to evaluate electron-beam physical-vapor-deposited coatings with platinum-aluminide (Pt-Al) bond coats was undertaken. Zirconia coatings with thicknesses of 125 µm (0.005 in.), 190 µm (0.0075 in.), and 254 µm (0.010 in.), with a Pt-Al bond coat on Rene N5 Ni-based superalloy, were studied by infrared imaging. Currently, it appears that thickness variation, as well as thermal properties, can be assessed by infrared technology.

  19. Methods for evaluating cervical range of motion in trauma settings.

    PubMed

    Voss, Sarah; Page, Michael; Benger, Jonathan

    2012-08-02

    Immobilisation of the cervical spine is a common procedure following traumatic injury. This is often precautionary as the actual incidence of spinal injury is low. Nonetheless, stabilisation of the head and neck is an important part of pre-hospital care due to the catastrophic damage that may follow if further unrestricted movement occurs in the presence of an unstable spinal injury. Currently available collars are limited by the potential for inadequate immobilisation and complications caused by pressure on the patient's skin, restricted airway access and compression of the jugular vein. Alternative approaches to cervical spine immobilisation are being considered, and the investigation of these new methods requires a standardised approach to the evaluation of neck movement. This review summarises the research methods and scientific technology that have been used to assess and measure cervical range of motion, and which are likely to underpin future research in this field. A systematic search of international literature was conducted to evaluate the methodologies used to assess the extremes of movement that can be achieved in six domains. 34 papers were included in the review. These studies used a range of methodologies, but study quality was generally low. Laboratory investigations and biomechanical studies have gradually given way to methods that more accurately reflect the real-life situations in which cervical spine immobilisation occurs. Latterly, new approaches using virtual reality and simulation have been developed. Coupled with modern electromagnetic tracking technology this has considerable potential for effective application in future research. However, use of these technologies in real life settings can be problematic and more research is needed.

  20. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals; log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
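
    A worked example applying the reported correlation log Koc = 0.903 log Kow + 0.094 and propagating the estimate to a linear-sorption retardation factor R = 1 + (rho_b/theta) * f_oc * Koc is sketched below. The aquifer properties (bulk density, porosity, f_oc) and the two log Kow values are assumptions for illustration, and the paper's confidence-interval expression is not reproduced.

```python
def koc_from_kow(log_kow):
    """Correlation reported in the abstract: log Koc = 0.903 * log Kow + 0.094 (Koc in L/kg)."""
    return 10 ** (0.903 * log_kow + 0.094)

def retardation_factor(koc, f_oc, bulk_density_kg_l, porosity):
    """Linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd, with Kd = Koc * f_oc."""
    kd = koc * f_oc  # L/kg
    return 1.0 + (bulk_density_kg_l / porosity) * kd

# Two chemicals of different hydrophobicity (log Kow values are illustrative)
for name, log_kow in [("moderately hydrophobic", 2.5), ("strongly hydrophobic", 6.0)]:
    koc = koc_from_kow(log_kow)
    # Assumed aquifer properties: bulk density 1.7 kg/L, porosity 0.3, f_oc = 0.5%
    r = retardation_factor(koc, f_oc=0.005, bulk_density_kg_l=1.7, porosity=0.3)
    print(f"{name}: Koc ~ {koc:,.0f} L/kg, R ~ {r:,.1f}")
```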

  1. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    In recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  2. An evaluation of teaching methods in the introductory physics classroom

    NASA Astrophysics Data System (ADS)

    Savage, Lauren Michelle Williams

    The introductory physics mechanics course at the University of North Carolina at Charlotte has a history of relatively high DFW rates. In 2011, the course was redesigned from the traditional lecture format to the inverted classroom format (flipped). This format inverts the classroom by introducing material in a video assigned as homework while the instructor conducts problem solving activities and guides discussions during the regular meetings. This format focuses on student-centered learning and is more interactive and engaging. To evaluate the effectiveness of the new method, final exam data over the past 10 years was mined and the pass rates examined. A normalization condition was developed to evaluate semesters equally. The two teaching methods were compared using a grade distribution across multiple semesters. Students in the inverted class outperformed those in the traditional class: "A"s increased by 22% and "B"s increased by 38%. The final exam pass rate increased by 12% under the inverted classroom approach. The same analysis was used to compare the written and online final exam formats. Surprisingly, no students scored "A"s on the online final. However, the percent of "B"s increased by 136%. Combining documented best practices from a literature review with personal observations of student performance and attitudes from first hand classroom experience as a teaching assistant in both teaching methods, reasons are given to support the continued use of the inverted classroom approach as well as the online final. Finally, specific recommendations are given to improve the course structure where weaknesses have been identified.

  3. Rapid visualization of global image and DEM based on SDOG-ESSG

    NASA Astrophysics Data System (ADS)

    Bo, H. G.; Wu, L. X.; Yu, J. Q.; Yang, Y. Z.; Xie, L.

    2013-10-01

    Because they are limited to two dimensions and to small scales, conventional planar and spherical global spatial grids cannot provide a unified, truly three-dimensional (3D) data model for Earth System Science research. The surface of the Earth is an important interface between the lithosphere and the atmosphere, and terrain usually has to be added to the model in studies of global change and tectonic plate movement. However, both the atmosphere and the lithosphere are inherently three-dimensional, so it is necessary to represent and visualize the terrain in a real 3D mode. The Spheroid Degenerated Octree Grid based Earth System Spatial Grid (SDOG-ESSG) overcomes both the small-scale limitation and the two-dimensional orientation of conventional grids, and can be used as a real 3D model to represent and visualize global imagery and DEMs. Owing to the complex spatial structure of SDOG-ESSG, however, the visualization efficiency of spatial data based on it is very low. Methods of layer- and block-based data organization, as well as data culling, Level of Detail (LOD), and asynchronous scheduling, were therefore adopted in this article to improve the efficiency of visualization. Finally, a prototype was developed for the quick visualization of global DEM and image data based on SDOG-ESSG.
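
    The article's LOD and scheduling scheme is not reproduced here; the sketch below only illustrates the generic idea of picking a coarser or finer level of detail for a terrain block from its distance to the camera. The thresholds, block sizes and screen-error rule are arbitrary assumptions.

```python
import math

def select_lod(block_center, camera_position, base_size_m, max_level=8,
               screen_error_threshold=2.5):
    """Pick a subdivision level for a terrain block from its distance to the camera.

    Illustrative rule: halve the block's representative cell size until its
    projected (screen-space) error estimate falls below a threshold.
    """
    dx, dy, dz = (b - c for b, c in zip(block_center, camera_position))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    level, cell_size = 0, base_size_m
    while level < max_level and cell_size / max(distance, 1.0) > screen_error_threshold / 1000.0:
        level += 1
        cell_size /= 2.0
    return level

# Example: the same block viewed from far away vs. close up
far = select_lod((0.0, 0.0, 0.0), (5.0e6, 0.0, 0.0), base_size_m=100000.0)
near = select_lod((0.0, 0.0, 0.0), (2.0e5, 0.0, 0.0), base_size_m=100000.0)
print("LOD level far:", far, " near:", near)
```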

  4. Open-source MFIX-DEM software for gas-solids flows: Part II Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  5. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas–solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  6. Region-growing segmentation to automatically delimit synthetic drumlins in 'real' DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Smith, Mike; Hillier, John

    2013-04-01

    Mapping or 'delimiting' landforms is one of geomorphology's primary tools. Computer-based techniques, such as terrain segmentation, may potentially provide terrain units that are close to the size and shape of landforms. Whether terrain units represent landforms heavily depends on the segmentation algorithm, its settings and the type of underlying land-surface parameters (LSPs). We assess a widely used region-growing technique, i.e. the multiresolution segmentation (MRS) algorithm as implemented in object-based image analysis software, for delimiting drumlins. Supervised testing was based on five synthetic DEMs that included the same set of perfectly known drumlins at different locations. This, for the first time, removes subjectivity from the reference data. Five LSPs were tested, and four variants were computed for each using two pre- and post-processing options. The automated method (1) employs MRS to partition the input LSP into 200 ever coarser terrain unit patterns, (2) identifies the spatially best matching terrain unit for each reference drumlin, and (3) computes four accuracy metrics for quantifying the areal match between delimited and reference drumlins. MRS performed best on LSPs that are regional, derived from a decluttered DEM and then normalized. Median scale parameters (SPs) for segments best delineating drumlins were relatively stable for the same LSP, but varied significantly between LSPs. Larger drumlins were generally delimited at higher SPs. MRS indicated high robustness against variations in the location and distribution of drumlins.
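
    The four accuracy metrics used in the study are not reproduced here. One common way to quantify an areal match between a delimited terrain unit and a reference outline is the intersection-over-union of the two polygons; the sketch below shows that calculation with the third-party shapely package and made-up rectangular outlines.

```python
from shapely.geometry import Polygon

def areal_match(delimited: Polygon, reference: Polygon) -> float:
    """Intersection-over-union of two outlines (1.0 = perfect areal match)."""
    intersection = delimited.intersection(reference).area
    union = delimited.union(reference).area
    return intersection / union if union > 0 else 0.0

# Illustrative outlines (map units); a real workflow would read polygons from the
# segmentation output and from the synthetic-DEM reference layer
reference_drumlin = Polygon([(0, 0), (600, 0), (600, 250), (0, 250)])
delimited_unit = Polygon([(40, -20), (630, -20), (630, 230), (40, 230)])

print(f"IoU = {areal_match(delimited_unit, reference_drumlin):.2f}")
```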

  7. A global database of volcano edifice morphometry using SRTM DEMs

    NASA Astrophysics Data System (ADS)

    Grosse, P.; van Wyk de Vries, B.; Petrinovic, I. A.; Euillades, P. A.

    2009-12-01

    The morphometry of volcanic edifices reflects the aggradational and degradational processes that interact during their evolution. In association with VOGRIPA, a global risk identification project, we are currently constructing a database on the morphometry of volcanic edifices using digital elevation models (DEMs) from the Shuttle Radar Topography Mission (SRTM). Our aim is to compile and make available a global database of morphometric parameters that characterize the shape and size of volcanic edifices. The 90-meter SRTM DEM is presently the best public-access DEM dataset for this task because of its near-global coverage and spatial resolution that is high enough for the analysis of composite volcanic edifices. The Smithsonian Institution database lists 1536 active/potentially active volcanoes worldwide. Of these, ~900 volcano edifices can be analyzed with the SRTM DEMs, discarding volcanoes not covered by the dataset above latitudes 60°N and 56°S, submarine volcanoes, volcanoes with mostly negative topographies (i.e. calderas, maars) and monogenetic cones and domes, which are too small to accurately study with the 90-meter resolution. Morphometric parameters are acquired using an expressly written IDL-language code named MORVOLC. Edifice outline is determined via a semi-automated algorithm that identifies slope-breaks between user-estimated maximum and minimum outlines. Thus, volcanic edifices as topographic entities are considered, excluding aprons or ring plains and other far-reaching volcanic products. Several morphometric parameters are computed which characterize edifice size and shape. Size parameters are height (from base to summit), volume, base and summit areas and widths (average, minimum, maximum). Plan shape is summarized using two independent dimensionless indexes that describe the shape of the elevation contours, ellipticity (quantifies the elongation of each contour) and irregularity (quantifies the irregularity or complexity of each contour

  8. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with a much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
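
    A minimal sketch of the fixed depth-mapping idea in approach (2) is shown below: scene depths are linearly remapped onto a bounded perceived-depth budget on the display. The dynamic variant described in the record would recompute the scene depth bounds per shot or per frame; the actual mapping functions used by the authors are not given in this abstract.

        def map_scene_depth(z, z_near, z_far, d_near, d_far):
            """Remap a scene depth z, lying between the nearest/farthest scene depths
            (z_near, z_far), onto the display's comfortable depth range [d_near, d_far].
            Placeholder linear mapping; a dynamic method updates z_near/z_far over time."""
            t = (z - z_near) / (z_far - z_near)
            return d_near + t * (d_far - d_near)

        # e.g. a point 12 m away in a 2-40 m scene, mapped into a +/-50 mm display budget
        depth_on_display = map_scene_depth(12.0, 2.0, 40.0, -50.0, 50.0)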

  9. Development of nondestructive evaluation methods for structural ceramics.

    SciTech Connect

    Ellingson, W. A.

    1998-08-19

    During the past year, the focus of our work on nondestructive evaluation (NDE) methods was on the development and application of these methods to technologies such as ceramic matrix composite (CMC) hot-gas filters, CMC high-temperature heat exchangers, and CMC ceramic/ceramic joining. Such technologies are critical to the "Vision 21 Energy-Plex Fleet" of modular, high-efficiency, low-emission power systems. Specifically, our NDE work has continued toward faster, higher sensitivity, volumetric X-ray computed tomographic imaging with new amorphous silicon detectors to detect and measure axial and radial density variations in hot-gas filters and heat exchangers; explored the potential use of high-speed focal-plane-array infrared imaging technology to detect delaminations and variations in the thermal properties of SiC/SiC heat exchangers; and explored various NDE methods to characterize CMC joints in cooperation with various industrial partners. Work this year also addressed support of the Southern Company Services Inc. Power Systems Development Facility, where NDE is needed to assess the condition of hot-gas candle filters. This paper presents the results of these efforts.

  10. Evaluation of recovery methods to detect coliforms in water.

    PubMed

    Bissonnette, G K; Jezeski, J J; McFeters, G A; Stuart, D G

    1977-03-01

    Various recovery methods used to detect coliforms in water were evaluated by applying the membrane filter chamber technique. The membrane filter chambers, containing pure-culture suspensions of Escherichia coli or natural suspensions of raw sewage, were immersed in the stream environment. Samples were withdrawn from the chamber at regular time intervals and enumerated by several detection methods. In general, multiple-tube fermentation techniques gave better recovery than plating or membrane filtration procedures. The least efficient method of recovery resulted when using membrane filtration procedures, especially as the exposure period of the organisms to the stream environment increased. A 2-h enrichment on a rich, nonselective medium before exposure to selective media improved the recovery of fecal coliforms with membrane filtration techniques. Substantially enhanced recoveries of E. coli from pure-culture suspensions and of fecal coliforms from raw-sewage suspensions were observed when compared with recoveries obtained by direct primary exposure to selective media. Such an enrichment period appears to provide a nontoxic environment for the gradual adjustment and repair of injured cells.

  11. Evaluation of new aquatic toxicity test methods for oil dispersants

    SciTech Connect

    Pace, C.B.; Clark, J.R.; Bragin, G.E.

    1994-12-31

    Current aquatic toxicity test methods used for dispersant registration do not address real world exposure scenarios. Current test methods require 48 or 96 hour constant exposure conditions. In contrast, environmentally realistic exposures can be described as a pulse in which the initial concentration declines over time. Recent research using a specially designed testing apparatus (the California system) has demonstrated that exposure to Corexit 9527® under pulsed exposure conditions may be 3 to 22 times less toxic compared to continuous exposure scenarios. The objectives of this study were to compare results of toxicity tests using the California test system to results from standardized tests, evaluate sensitivity of regional (Holmesimysis costata and Atherinops affinis) vs. standard test species (Mysidopsis bahia and Menidia beryllina) and determine if tests using the California test system and method are reproducible. All tests were conducted using Corexit 9527® as the test material. Standard toxicity tests conducted with M. bahia and H. costata resulted in LC50s similar to those from tests using the California apparatus. LC50s from tests conducted in the authors' laboratory with the California system and standard test species were within a factor of 2 to 6 of data previously reported for west coast species. Results of tests conducted with H. costata in the laboratory compared favorably to data reported by Singer et al. (1991).

  12. Full-waveform and discrete-return lidar in salt marsh environments: An assessment of biophysical parameters, vertical uncertainty, and nonparametric DEM correction

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.

    High-resolution and high-accuracy elevation data sets of coastal salt marsh environments are necessary to support restoration and other management initiatives, such as adaptation to sea level rise. Lidar (light detection and ranging) data may serve this need by enabling efficient acquisition of detailed elevation data from an airborne platform. However, previous research has revealed that lidar data tend to have lower vertical accuracy (i.e., greater uncertainty) in salt marshes than in other environments. The increase in vertical uncertainty in lidar data of salt marshes can be attributed primarily to low, dense-growing salt marsh vegetation. Unfortunately, this increased vertical uncertainty often renders lidar-derived digital elevation models (DEM) ineffective for analysis of topographic features controlling tidal inundation frequency and ecology. This study aims to address these challenges by providing a detailed assessment of the factors influencing lidar-derived elevation uncertainty in marshes. The information gained from this assessment is then used to: 1) test the ability to predict marsh vegetation biophysical parameters from lidar-derived metrics, and 2) develop a method for improving salt marsh DEM accuracy. Discrete-return and full-waveform lidar, along with RTK GNSS (Real-time Kinematic Global Navigation Satellite System) reference data, were acquired for four salt marsh systems characterized by four major taxa (Spartina alterniflora, Spartina patens, Distichlis spicata, and Salicornia spp.) on Cape Cod, Massachusetts. These data were used to: 1) develop an innovative combination of full-waveform lidar and field methods to assess the vertical distribution of aboveground biomass as well as its light blocking properties; 2) investigate lidar elevation bias and standard deviation using varying interpolation and filtering methods; 3) evaluate the effects of seasonality (temporal differences between peak growth and senescent conditions) using lidar data

  13. Development of New Accurate, High Resolution DEMs and Merged Topographic-Bathymetric Grids for Inundation Mapping in Seward Alaska

    NASA Astrophysics Data System (ADS)

    Marriott, D.; Suleimani, E.; Hansen, R.

    2004-05-01

    The Geophysical Institute of the University of Alaska Fairbanks and the Alaska Division of Geological and Geophysical Surveys continue to participate in the National Tsunami Hazard Mitigation Program by evaluating and mapping potential inundation of selected coastal communities in Alaska. Seward, the next Alaskan community to be mapped, has excellent bathymetric data but very poor topographic data available. Since one of the most significant sources of errors in tsunami inundation mapping is inaccuracy of topographic and bathymetric data, the Alaska Tsunami Modeling Team cooperated with the local USGS glaciology office to perform photogrammetry in the Seward area to produce a new DEM. Using ten air photos and the APEX photogrammetry and analysis software, along with several precisely located GPS points, we developed a new georeferenced and highly accurate DEM with a 5-meter grid spacing. A variety of techniques were used to remove the effects of buildings and trees to yield a bald earth model. Finally, we resampled the new DEM to match the finest resolution model grid, and combined it with all other data, using the most recent and accurate data in each region. The new dataset has contours that deviate by more than 100 meters in some places from the contours in the previous dataset, showing significant improvement in accuracy for the purpose of tsunami modeling.

  14. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    PubMed Central

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are commonly considered for supplier performance evaluation in the literature. In recent years, sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach, presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis. PMID:27379267
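
    For orientation, the sketch below implements the classic crisp TOPSIS ranking that underlies the method: normalize and weight the decision matrix, form positive and negative ideal solutions, and rank suppliers by relative closeness. The neofuzzy extension and the fuzzy linguistic ratings used in the paper are not reproduced, and the criteria and numbers are made up for illustration.

        import numpy as np

        def topsis(decision_matrix, weights, benefit):
            """Crisp TOPSIS ranking: decision_matrix is (alternatives x criteria),
            weights sum to 1, benefit[j] is True for benefit criteria and False for
            cost criteria. Returns the relative closeness of each alternative."""
            m = np.asarray(decision_matrix, dtype=float)
            w = np.asarray(weights, dtype=float)
            v = w * m / np.linalg.norm(m, axis=0)                   # normalized, weighted matrix
            pis = np.where(benefit, v.max(axis=0), v.min(axis=0))   # positive ideal solution
            nis = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal solution
            d_pos = np.linalg.norm(v - pis, axis=1)
            d_neg = np.linalg.norm(v - nis, axis=1)
            return d_neg / (d_pos + d_neg)                          # higher closeness = better supplier

        # made-up example: 3 suppliers, criteria = [cost, quality, CO2 emissions]
        closeness = topsis([[200.0, 8.0, 5.0], [250.0, 9.0, 3.5], [180.0, 6.0, 6.0]],
                           weights=[0.4, 0.4, 0.2], benefit=[False, True, False])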

  15. An evaluation of methods for scaling aircraft noise perception

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1971-01-01

    One hundred and twenty recorded sounds, including jets, turboprops, piston engined aircraft and helicopters were rated by a panel of subjects in a paired comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidth. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band level summation rule. A simple correction is proposed which will enable these scales to properly account for the experimental observations.

  16. Evaluating the optimal Norwood deepening method in the Antrim Shale

    SciTech Connect

    Frantz, J.H. Jr.; Tatum, C.L.; Bezilla, M.; Kalnbach, B.W.; Wilkinson, J.G.

    1994-12-31

    The purpose of this paper is to present the results of a Gas Research Institute (GRI) evaluation to determine the optimal deepening and completion technique for the Norwood Antrim Shale unit in older Antrim wells in the Michigan Basin, including the potential range of Norwood production responses. There are approximately 500 older Antrim wells, not drilled through the Norwood, that could be deepened below their current Lachine unit completion. GRI performed this work because operators are uncertain of the best deepening/completion procedure, the potential productivity of the Norwood, and the appropriate well spacing for Norwood completions. In this paper, the authors show the results of actual field case histories and simulated performance projections to determine the optimal Norwood deepening method and well spacing.

  17. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    PubMed

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are commonly considered for supplier performance evaluation in the literature. In recent years, sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach, presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.

  18. Basidiomycete cryopreservation on perlite: evaluation of a new method.

    PubMed

    Homolka, Ladislav; Lisá, Ludmila; Nerud, Frantisek

    2006-06-01

    A new cryopreservation method using perlite as a carrier was evaluated on a large set of mycelial cultures of basidiomycetes. The viability and some other characteristics--growth, macro- and micromorphology, and laccase production--of 442 strains were tested after 48-h and then after 3-year storage in liquid nitrogen using a perlite protocol (PP). All (100%) of them survived successfully both 48-h storage and 3-year storage in liquid nitrogen without noticeable growth and morphological changes. Also laccase production was unchanged. The viability and laccase production of a part (250) of these strains were compared with those of the strains subjected to an original agar plug protocol (OP). Using OP, 144 strains (57.6%) out of 250 survived a 3-year storage in liquid nitrogen. The results indicate that the cryopreservation protocol used significantly influences survival of the strains. Markedly better results were achieved using the PP.

  19. Evaluation of heart rate changes: electrocardiographic versus photoplethysmographic methods

    NASA Technical Reports Server (NTRS)

    Low, P. A.; Opfer-Gehrking, T. L.; Zimmerman, I. R.; O'Brien, P. C.

    1997-01-01

    The heart rate (HR) variation to forced deep breathing (HRDB) and to the Valsalva maneuver (Valsalva ratio; VR) are the two most widely used tests of cardiovagal function in human subjects. The HR is derived from a continuously running electrocardiographic (ECG) recording. Recently, HR derived from the arterial waveform became available on the Finapres device (FinapHR), but its ability to detect rapid changes in HR remains uncertain. We therefore evaluated HRDB and VR derived from FinapHR using ECG-derived HR (ECGHR) recordings as the standard. We also compared the averaged HR on Finapres (Finapav) with beat-to-beat Finapres (FinapBB) values. Studies were undertaken in 12 subjects with large HR variations: age, 34.5 +/- 9.3 (SD) years; six males and six females. FinapBB values were superimposable upon ECGHR for both HRDB and VR. In contrast, Finapav failed to follow ECGHR for HRDB and followed HRECG with a lag for the VR. To evaluate statistically how closely FinapHR approximated ECGHR, we undertook regression analysis, using mean values for each subject. To compare the two methods, we evaluated the significance of the difference between test and standard values. For HRDB, FinapBB reproducibly recorded HR (R2 = 0.998), and was significantly (p = 0.001) better than Finapav (R2 = 0.616; p < 0.001). For VR, HRBB generated a VR that was not significantly different from the correct values, while HRav generated a value that was slightly but consistently lower than the correct values (p < 0.001). We conclude that FinapHR reliably records HR variations in the beat-to-beat mode for cardiovascular HR tests.
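
    As a sketch of how the two indices are commonly computed from beat-to-beat heart-rate series, the function below follows the usual clinical definitions (mean max-min HR difference over deep-breathing cycles; ratio of maximum HR around the Valsalva maneuver to minimum HR after release). The study's exact scoring details are not given in the abstract, so treat this as an assumption.

        import numpy as np

        def hrdb_and_valsalva_ratio(hr_breathing, breath_bounds, hr_valsalva):
            """hr_breathing, hr_valsalva: beat-to-beat heart rate (bpm) as arrays;
            breath_bounds: (start, stop) beat indices for each deep-breathing cycle.
            Returns (HRDB, Valsalva ratio) using common clinical definitions."""
            hr_breathing = np.asarray(hr_breathing, dtype=float)
            hr_valsalva = np.asarray(hr_valsalva, dtype=float)
            # HRDB: average of (max HR - min HR) over the forced deep-breathing cycles
            hrdb = float(np.mean([hr_breathing[a:b].max() - hr_breathing[a:b].min()
                                  for a, b in breath_bounds]))
            # Valsalva ratio: max HR during the maneuver over min HR after release,
            # approximated here over the whole supplied maneuver-plus-recovery window
            vr = float(hr_valsalva.max() / hr_valsalva.min())
            return hrdb, vr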

  20. A method for the evaluation of wide dynamic range cameras

    NASA Astrophysics Data System (ADS)

    Wong, Ping Wah; Lu, Yu Hua

    2012-01-01

    We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.
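
    The abstract does not state how the nine component scores are combined, so the snippet below simply assumes a weighted average of components already normalized to a common scale; the actual composite rule used by the authors may differ.

        import numpy as np

        def composite_wdr_score(component_scores, weights=None):
            """Roll the 9 component scores (assumed already normalized to a common
            0-100 scale) into a single figure of merit via a weighted average."""
            s = np.asarray(component_scores, dtype=float)
            w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
            return float(np.dot(w, s) / w.sum())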

  1. Study Methods to Characterize and Implement Thermography Nondestructive Evaluation (NDE)

    NASA Technical Reports Server (NTRS)

    Walker, James L.

    1998-01-01

    The limits and conditions under which an infrared thermographic nondestructive evaluation can be utilized to assess the quality of aerospace hardware is demonstrated in this research effort. The primary focus of this work is on applying thermography to the inspection of advanced composite structures such as would be found in the International Space Station Instrumentation Racks, Space Shuttle Cargo Bay Doors, Bantam RP-1 tank or RSRM Nose Cone. Here, the detection of delamination, disbond, inclusion and porosity type defects are of primary interest. In addition to composites, an extensive research effort has been initiated to determine how well a thermographic evaluation can detect leaks and disbonds in pressurized metallic systems "i.e. the Space Shuttle Main Engine Nozzles". In either case, research into developing practical inspection procedures was conducted and thermographic inspections were performed on a myriad of test samples, subscale demonstration articles and "simulated" flight hardware. All test samples were fabricated as close to their respective structural counterparts as possible except with intentional defects for NDE qualification. As an added benefit of this effort to create simulated defects, methods were devised for defect fabrication that may be useful in future NDE qualification ventures.

  2. An effective method for incoherent scattering radar's detecting ability evaluation

    NASA Astrophysics Data System (ADS)

    Lu, Ziqing; Yao, Ming; Deng, Xiaohua

    2016-06-01

    Ionospheric incoherent scatter radar (ISR), which is used to detect ionospheric electrons and ions, generally has megawatt-class transmission power and an antenna aperture on the order of a hundred meters. The crucial purpose of this detection technology is to obtain ionospheric parameters by acquiring the autocorrelation function and power spectrum of the ionospheric plasma echoes. Because the ISR's echoes are very weak, owing to the small radar cross section of the target, estimating detection ability is instructive and meaningful for ISR system design. In this paper, we evaluate the detection ability through the signal-to-noise ratio (SNR). A soft-target radar equation applicable to ISR is derived; data from the International Reference Ionosphere model are used to simulate the SNR of the echoes, and the simulation is then compared with SNR measured by the European Incoherent Scatter Scientific Association and Advanced Modular Incoherent Scatter Radar systems. The simulation results show good consistency with the measured SNR. To the authors' knowledge, this is the first such comparison between calculated SNR and ISR measurements; detection ability can be improved by increasing SNR. This method for evaluating ISR detection ability provides a basis for radar system design.
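
    A rough, order-of-magnitude version of a soft-target SNR estimate is sketched below, assuming the echo comes from a beam-filling plasma volume of depth c*tau/2 and an effective per-electron scattering cross section near the Thomson value. The paper derives its own soft-target radar equation, which this simplified form only approximates; all numbers in the example are placeholders.

        import numpy as np

        def isr_snr(p_t, a_eff, n_e, r, pulse_len, t_sys, bandwidth, sigma_e=5.0e-29):
            """Order-of-magnitude incoherent-scatter SNR: p_t transmit power (W),
            a_eff effective aperture (m^2), n_e electron density (m^-3), r range (m),
            pulse_len pulse length (s), t_sys system temperature (K), bandwidth (Hz).
            sigma_e is a placeholder effective electron cross section (~Thomson value,
            reduced in practice by temperature-ratio and Debye-length factors)."""
            k_b = 1.380649e-23                                # Boltzmann constant, J/K
            c = 3.0e8                                         # speed of light, m/s
            delta_r = c * pulse_len / 2.0                     # range-gate depth
            p_rx = p_t * a_eff * sigma_e * n_e * delta_r / (4.0 * np.pi * r**2)
            return p_rx / (k_b * t_sys * bandwidth)           # signal / thermal noise

        # e.g. 2 MW, 1000 m^2 aperture, Ne = 1e11 m^-3 at 300 km, 500 us pulse, 150 K, 25 kHz
        snr = isr_snr(2e6, 1e3, 1e11, 300e3, 500e-6, 150.0, 25e3)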

  3. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
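
    One widely quoted form of the dimensionless parameter for Fourier-transform beam shapers of this kind is beta = 2*sqrt(2)*pi*r0*y0/(f*lambda), which combines exactly the quantities listed in the abstract (input beam radius, target spot half-width, focal length, wavelength). That specific formula is taken from the open beam-shaping literature and is assumed here for illustration; the patent may parameterize the quantity differently.

        import math

        def beam_shaping_beta(r0, y0, focal_length, wavelength):
            """beta = 2*sqrt(2)*pi*r0*y0 / (f*lambda): r0 is the 1/e^2 radius of the
            input Gaussian beam, y0 the half-width of the desired uniform spot, f the
            transform-lens focal length, lambda the wavelength. Larger beta generally
            indicates that a higher-quality uniform profile is achievable."""
            return 2.0 * math.sqrt(2.0) * math.pi * r0 * y0 / (focal_length * wavelength)

        # e.g. a 2 mm input beam radius, 0.5 mm target half-width, 200 mm lens, 1064 nm light
        beta = beam_shaping_beta(2.0e-3, 0.5e-3, 0.2, 1.064e-6)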

  4. Gaussian beam profile shaping apparatus, method therefore and evaluation thereof

    DOEpatents

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-01-26

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system. 27 figs.

  5. Direct micromechanics derivation and DEM confirmation of the elastic moduli of isotropic particulate materials: Part I No particle rotation

    NASA Astrophysics Data System (ADS)

    Fleischmann, J. A.; Drugan, W. J.; Plesha, M. E.

    2013-07-01

    We derive the macroscopic elastic moduli of a statistically isotropic particulate aggregate material via the homogenization methods of Voigt (1928) (kinematic hypothesis), Reuss (1929) (static hypothesis), and Hershey (1954) and Kröner (1958) (self-consistent hypothesis), originally developed to treat crystalline materials, from the directionally averaged elastic moduli of three regular cubic packings of uniform spheres. We determine analytical expressions for these macroscopic elastic moduli in terms of the (linearized) elastic inter-particle contact stiffnesses on the microscale under the three homogenization assumptions for the three cubic packings (simple, body-centered, and face-centered), assuming no particle rotation. To test these results and those in the literature, we perform numerical simulations using the discrete element method (DEM) to measure the overall elastic moduli of large samples of randomly packed uniform spheres with constant normal and tangential contact stiffnesses (linear spring model). The beauty of DEM is that simulations can be run with particle rotation either prohibited or unrestrained. In this first part of our two-part series of papers, we perform DEM simulations with particle rotation prohibited, and we compare these results with our theoretical results that assumed no particle rotation. We show that the self-consistent homogenization assumption applied to the locally body-centered cubic (BCC) packing most accurately predicts the measured values of the overall elastic moduli obtained from the DEM simulations, in particular Poisson's ratio. Our new analytical self-consistent results lead to significantly better predictions of Poisson's ratio than all prior published theoretical results. Moreover, our results are based on a direct micromechanics analysis of specific geometrical packings of uniform spheres, in contrast to all prior theoretical analyses, which were based on difficult-to-verify hypotheses involving overall inter
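
    The linear spring contact model referred to above (constant normal and tangential stiffnesses) can be sketched as follows; damping terms and the specific DEM code used in the study are omitted, and the Coulomb friction limit mu is a placeholder value.

        import numpy as np

        def linear_spring_contact(overlap, rel_vel_tangent, tangential_disp,
                                  k_n, k_t, dt, mu=0.5):
            """One contact update of a linear-spring DEM model: normal force from the
            overlap, tangential force from an incrementally tracked spring stretch,
            capped by Coulomb friction. Returns (f_n, f_t, updated tangential_disp)."""
            f_n = k_n * overlap                              # repulsive normal force
            tangential_disp += rel_vel_tangent * dt          # accumulate tangential stretch
            f_t = -k_t * tangential_disp                     # elastic tangential force
            f_t_max = mu * abs(f_n)                          # Coulomb friction limit
            if abs(f_t) > f_t_max:                           # sliding contact:
                f_t = np.sign(f_t) * f_t_max                 # cap the force and
                tangential_disp = -f_t / k_t                 # rescale the stored stretch
            return f_n, f_t, tangential_disp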

  6. Precise Determination of the Baseline Between the TerraSAR-X and TanDEM-X Satellites

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Rothacher, Markus; Michalak, Grzegorz; Moon, Yongjin

    TerraSAR-X, launched on June 15, 2007, and TanDEM-X, to be launched in September 2009, both carry the Tracking, Occultation and Ranging (TOR) category A payload instrument package. The TOR consists of a high-precision dual-frequency GPS receiver, called the Integrated GPS Occultation Receiver (IGOR), for precise orbit determination and atmospheric sounding, and a laser retro-reflector (LRR) serving as a target for the global Satellite Laser Ranging (SLR) ground station network. The TOR is supplied by the GeoForschungsZentrum Potsdam (GFZ), Germany, and the Center for Space Research (CSR), Austin, Texas. The objective of the German/US collaboration is twofold: provision of atmospheric profiles for use in numerical weather prediction and climate studies from the occultation data, and precision SAR data processing based on precise orbits and atmospheric products. For the scientific objectives of the TanDEM-X mission, i.e., bi-static SAR together with TerraSAR-X, the dual-frequency GPS receiver is of vital importance for the millimeter-level determination of the baseline, or distance, between the two spacecraft. The paper discusses the feasibility of generating millimeter baselines using the example of GRACE, where for validation the distance between the two GRACE satellites is directly available from the micrometer-level intersatellite link measurements. The distance between the GRACE satellites is some 200 km, while the distance of the TerraSAR-X/TanDEM-X formation will be some 200 meters. The proposed approach is therefore also applied in a simulation of the foreseen TerraSAR-X/TanDEM-X formation. The effects of varying space environmental conditions, possible phase center variations, multipath, and varying center of mass of the spacecraft are evaluated and discussed.

  7. An Evaluation of Installation Methods for STS-1 Seismometers

    USGS Publications Warehouse

    Holcomb, L. Gary; Hutt, Charles R.

    1992-01-01

    This report documents the results of a series of experiments conducted by the authors at the Albuquerque Seismological Laboratory (ASL) during the spring and summer of 1991; the object of these experiments was to obtain and document quantitative performance comparisons of three methods of installing STS-1 seismometers. Historically, ASL has installed STS-1 sensors by cementing their thick glass base plates to the concrete floor of the vault (see Peterson and Tilgner, 1985, p 44 and Figure 31, p 51 for the details of this installation technique). This installation technique proved to be fairly satisfactory for the China Digital Seismic Network and for several sets of STS-1 sensors installed in other locations since that time. However, the cementing operation is rather labor intensive and the concrete requires a lengthy (about 1 week) curing time during which the sensor installed on it is noisy. In addition, it is difficult to assure that all air bubbles have been removed from the interface between the cement and the glass base plate. If air bubbles are present beneath the plate, horizontal sensors can be unacceptably noisy. Moving a sensor installed in this manner requires the purchase of a new glass base plate because the old plate normally cannot be removed without breakage. Therefore, this study was undertaken with the aim of developing an improved method of installing STS-1's. The goals were to develop a method which requires less field site labor during the installation and assures a higher quality installation when finished. In addition, the improved installation technique should promote portability. Two alternate installation techniques were evaluated in this study. One method replaces the cement between the base plate and the vault floor with sand. This method has been used in the French Geoscope program and in several IRIS/IDA installations made by the University of California at San Diego (UCSD) and possibly others. It is easily implemented in

  8. DEM-based Approaches for the Identification of Flood Prone Areas

    NASA Astrophysics Data System (ADS)

    Samela, Caterina; Manfreda, Salvatore; Nardi, Fernando; Grimaldi, Salvatore; Roth, Giorgio; Sole, Aurelia

    2013-04-01

    The remarkable number of inundations that caused, in the last decades, thousands of deaths and huge economic losses testifies to the extreme vulnerability of many countries to the flood hazard. As a matter of fact, human activities are often developed in the floodplains, creating conditions of extremely high risk. Terrain morphology plays an important role in understanding, modelling and analyzing the hydraulic behaviour of flood waves. Research during the last 10 years has shown that the delineation of flood prone areas can be carried out using fast methods that rely on basin geomorphologic features. In fact, the availability of new technologies to measure surface elevation (e.g., GPS, SAR, SAR interferometry, RADAR and LASER altimetry) has given a strong impulse to the development of Digital Elevation Model (DEM) based approaches. The identification of the dominant topographic controls on the flood inundation process is a critical research question that we try to tackle with a comparative analysis of several techniques. We reviewed four different approaches for the morphological characterization of a river basin with the aim of describing their performance and identifying their range of applicability. In particular, we explored the potential of the following tools. 1) The hydrogeomorphic method proposed by Nardi et al. (2006), which defines the flood prone areas according to the water level in the river network through the hydrogeomorphic theory. 2) The linear binary classifier proposed by Degiorgis et al. (2012), which allows distinguishing flood-prone areas using two features related to the location of the site under exam with respect to the nearest hazard source. The two features, proposed in the study, are the length of the path that hydrologically connects the location under exam to the nearest element of the drainage network and the difference in elevation between the cell under exam and the final point of the same path. 3) The method by
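
    The two features of the linear binary classifier in item 2) can be approximated from a DEM and a drainage-network mask as below; for brevity, the hydrologically connected flow-path length is replaced by a plain Euclidean distance to the nearest drainage cell, so this is only an illustrative simplification of the original features.

        import numpy as np
        from scipy import ndimage

        def flood_classifier_features(dem, stream_mask, cellsize):
            """For every DEM cell, return (distance to the nearest drainage cell,
            elevation difference with respect to that cell). dem is a 2-D array,
            stream_mask a boolean array marking drainage-network cells, cellsize the
            grid spacing. Euclidean distance stands in for the hydrologic path length."""
            dem = np.asarray(dem, dtype=float)
            stream_mask = np.asarray(stream_mask, dtype=bool)
            dist, (ii, jj) = ndimage.distance_transform_edt(
                ~stream_mask, sampling=cellsize, return_indices=True)
            elev_diff = dem - dem[ii, jj]   # height of each cell above its nearest drainage cell
            return dist, elev_diff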

  9. A seamless, high-resolution digital elevation model (DEM) of the north-central California coast

    USGS Publications Warehouse

    Foxgrover, Amy C.; Barnard, Patrick L.

    2012-01-01

    A seamless, 2-meter resolution digital elevation model (DEM) of the north-central California coast has been created from the most recent high-resolution bathymetric and topographic datasets available. The DEM extends approximately 150 kilometers along the California coastline, from Half Moon Bay north to Bodega Head. Coverage extends inland to an elevation of +20 meters and offshore to at least the 3 nautical mile limit of state waters. This report describes the procedures of DEM construction, details the input data sources, and provides the DEM for download in both ESRI Arc ASCII and GeoTIFF file formats with accompanying metadata.

  10. Comparison of methods for evaluation of experimentally induced emphysema

    SciTech Connect

    Busch, R.H.; Buschbom, R.L.; Smith, L.G.

    1984-04-01

    Four methods to quantify induced emphysema, in a manner economically applicable to large numbers of animals, are compared by correlation analyses. Lung tissue used was from rats pretreated intratracheally with elastase or saline prior to exposure to air or (NH4)2SO4 or NH4NO3 aerosols. The most sensitive quantitative evaluation was from mean chord length (MCL) measurements on scanning electron micrographs (SEM). Four-corner and parallel-line grids provided similar results, and reducing sample size to one selected field per lobe yielded a high degree of reliability for MCL measurements. Alveolar-pore perimeter and area (also measured on SEM photographs) were increased by induced emphysema, but were not reliable indicators for degree of pulmonary involvement. Both subjective score (grading the degree of emphysema) and percentage-area-affected determinations indicated the presence of emphysema, but with less sensitivity than MCL measurements. However, these two subgross methods (performed with a dissecting microscope) provided valuable information on the distribution of pulmonary lesions; emphysema was induced in a nonuniform but consistent and progressive pattern in the two lobes of the lung studied.

  11. Nondestructive Evaluation Methods for the Ares I Common Bulkhead

    NASA Technical Reports Server (NTRS)

    Walker, James

    2010-01-01

    A large scale bonding demonstration test article was fabricated to prove out manufacturing techniques for the current design of the NASA Ares I Upper Stage common bulkhead. The common bulkhead serves as the single interface between the liquid hydrogen and liquid oxygen portions of the Upper Stage propellant tank. The bulkhead consists of spin-formed aluminum domes friction stir welded to Y-rings and bonded to a perforated phenolic honeycomb core. Nondestructive evaluation methods are being developed for assessing core integrity and the core-to-dome bond line of the common bulkhead. Detection of manufacturing defects such as delaminations between the core and face sheets as well as service life defects such as crushed or sheared core resulting from impact loading are all of interest. The focus of this work will be on the application of thermographic, shearographic, and phased array ultrasonic methods to the bonding demonstration article as well as various smaller test panels featuring design specific defect types and geometric features.

  12. Study on measurement and evaluation methods for superconductive characteristics

    NASA Astrophysics Data System (ADS)

    Wada, Hitoshi; Ito, Kikuo; Kuroda, Tsuneo; Yuyama, Michiya; Goodrich, L. F.; Bray, S.; Ekin, J. W.; Goldfarb, R.

    1993-01-01

    Research was conducted jointly with the National Institute of Standards and Technology (USA) regarding test and evaluation methods for superconductors. The main themes were as follows: (1) measurement of the critical current of oxide superconductors; (2) measurement of alternating current loss in advanced metal superconductors; and (3) development of software programs for measurement automation. For the measurement of the critical current of oxide superconductors, the pulse current method was improved in order to attain the accuracy necessary to deal with the problem of heat generation at the electrodes. As a result of the technique developed, highly accurate measurement of critical current was attained even with silver-pasted electrodes. In subsequent experiments, the transverse stress dependence of Nb3Al wires was measured; this material has attracted attention because it is expected to succeed Nb3Sn as the next practical superconductor. It was found that the critical current degradation due to axial and transverse stress in Nb3Al wires is much smaller than that in Nb3Sn wires. The values obtained in the experiment also suggested that there is little difference between the axial and transverse stress dependence of the critical current of Nb3Al superconductive filaments.

  13. Physical methods for evaluating the nutrition status of hemodialysis patients.

    PubMed

    Marcelli, Daniele; Wabel, Peter; Wieskotten, Sebastian; Ciotola, Annalisa; Grassmann, Aileen; Di Benedetto, Attilio; Canaud, Bernard

    2015-10-01

    This article aims to provide an overview of the different nutritional markers and the available methodologies for the physical assessment of nutrition status in hemodialysis patients, with special emphasis on early detection of protein energy wasting (PEW). Nutrition status assessment is made on the basis of anamnesis, physical examination, evaluation of nutrient intake, and a selection of various screening/diagnostic methodologies. These methodologies can be subjective, e.g. the Subjective Global Assessment score (SGA), or objective in nature (e.g. bioimpedance analysis). In addition, certain biochemical tests may be employed (e.g. albumin, pre-albumin). The various subjective and objective methodologies provide different insights for the assessment of PEW, particularly regarding their propensity to differentiate between the important body composition compartments: fluid overload, fat mass, and muscle mass. This review of currently available methods showed that no single approach and no single marker is able to detect alterations in nutrition status in a timely fashion and to follow such changes over time. The most clinically relevant approach presently appears to be the combination of the SGA method with the bioimpedance spectroscopy technique and its physiological model, supplemented by laboratory tests for the detection of micro-nutrient deficiency.

  14. Evaluation of field methods for vertical high resolution aquifer characterization

    NASA Astrophysics Data System (ADS)

    Vienken, T.; Tinter, M.; Rogiers, B.; Leven, C.; Dietrich, P.

    2012-12-01

    The delineation and characterization of subsurface (hydro)-stratigraphic structures is one of the challenging tasks of hydrogeological site investigations. Knowledge of the spatial distribution of soil-specific properties and hydraulic conductivity (K) is a prerequisite for understanding flow and fluid transport processes. This is especially true for heterogeneous unconsolidated sedimentary deposits with a complex sedimentary architecture. One commonly used approach to investigate and characterize sediment heterogeneity is soil sampling and lab analyses, e.g. grain size distribution. Tests conducted on 108 samples show that calculation of K based on grain size distribution is not suitable for high resolution aquifer characterization of highly heterogeneous sediments due to sampling effects and large differences of calculated K values between applied formulas (Vienken & Dietrich 2011). Therefore, extensive tests were conducted at two test sites under different geological conditions to evaluate the performance of innovative Direct Push (DP) based approaches for the vertical high resolution determination of K. Different DP based sensor probes for in-situ subsurface characterization based on electrical, hydraulic, and textural soil properties were used to obtain high resolution vertical profiles. The applied DP based tools proved to be a suitable and efficient alternative to traditional approaches. Despite resolution differences, all of the applied methods captured the main aquifer structure. Correlation of the DP based K estimates and proxies with DP based slug tests shows that it is possible to describe the aquifer hydraulic structure on less than a meter scale by combining DP slug test data and continuous DP measurements. Even though correlations are site specific and appropriate DP tools must be chosen, DP is a reliable and efficient alternative for characterizing even strongly heterogeneous sites with complex structured sedimentary aquifers (Vienken et

  15. Evaluation of different field methods for measuring soil water infiltration

    NASA Astrophysics Data System (ADS)

    Pla-Sentís, Ildefonso; Fonseca, Francisco

    2010-05-01

    Soil infiltrability, together with rainfall characteristics, is the most important hydrological parameter for the evaluation and diagnosis of the soil water balance and soil moisture regime. Those balances and regimes are the main regulating factors of the on-site water supply to plants and other soil organisms and of other important processes like runoff, surface and mass erosion, drainage, etc., affecting sedimentation, flooding, soil and water pollution, and water supply for different purposes (population, agriculture, industries, hydroelectricity). Therefore the direct measurement of water infiltration rates, or its indirect deduction from other soil characteristics or properties, has become indispensable for the evaluation and modelling of the previously mentioned processes. Indirect deductions from other soil characteristics measured under laboratory conditions in the same soils, or in other soils, through the so-called "pedo-transfer" functions, have proven to be of limited value in most cases. Direct "in situ" field evaluations are to be preferred in any case. In this contribution we present the results of past experience in the measurement of soil water infiltration rates in many different soils and land conditions, and their use for deducing soil water balances under variable climates. We also present and discuss recent results obtained by comparing different methods, using double- and single-ring infiltrometers, rainfall simulators, and disc permeameters of different sizes, in soils with very contrasting surface and profile characteristics and conditions, including stony soils and very sloping lands. It is concluded that no method is universally applicable to every soil and land condition, and that in many cases the results are significantly influenced by the way we use a particular method or instrument, by the alterations in the soil conditions caused by land management, but also by the manipulation of the surface

  16. Mechanistic Based DEM Simulation of Particle Attrition in a Jet Cup

    SciTech Connect

    Xu, Wei; DeCroix, David; Sun, Xin

    2014-02-01

    The attrition of particles is a major industrial concern in many fluidization systems as it can have undesired effects on the product quality and on the reliable operation of process equipment. Therefore, to accommodate the screening and selection of catalysts for a specific process in fluidized beds, risers, or cyclone applications, their attrition propensity is usually estimated through jet cup attrition testing, where the test material is subjected to high gas velocities in a jet cup. However, this method is far from perfect despite its popularity, largely due to its inconsistency in different testing set-ups. In order to better understand the jet cup testing results as well as their sensitivity to different operating conditions, a coupled computational fluid dynamics (CFD)-discrete element method (DEM) model has been developed in the current study to investigate the particle attrition in a jet cup and its dependence on various factors, e.g. jet velocity, initial particle size, particle density, and apparatus geometry.

  17. Evaluation of internal noise methods for Hotelling observer models

    SciTech Connect

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-08-15

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality.
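
    A sketch of the second strategy (internal noise added to the decision variable, with standard deviation proportional to the spread caused by the external noise) is given below for a channelized Hotelling observer; the channel set, the template estimation, and the exact proportionality rules compared in the paper are not reproduced.

        import numpy as np

        def cho_decision_variable_with_internal_noise(images, channels, delta_mean,
                                                      cov_inv, alpha=1.0, rng=None):
            """images: (n_trials, n_pixels); channels: (n_pixels, n_channels);
            delta_mean: mean signal-present minus signal-absent channel response;
            cov_inv: inverse channel covariance (both assumed pre-estimated from
            training data). Adds zero-mean Gaussian internal noise whose standard
            deviation is alpha times the decision variable's external-noise spread."""
            rng = np.random.default_rng() if rng is None else rng
            v = images @ channels                 # channel responses per trial
            w = cov_inv @ delta_mean              # Hotelling template in channel space
            t = v @ w                             # decision variable per trial
            sigma_ext = t.std(ddof=1)             # spread caused by the external (image) noise
            return t + rng.normal(0.0, alpha * sigma_ext, size=t.shape)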

  18. New method for evaluating high-quality fog protective coatings

    NASA Astrophysics Data System (ADS)

    Czeremuszkin, Grzegorz; Latreche, Mohamed; Mendoza-Suarez, Guillermo

    2011-05-01

    Fogging is commonly observed when humid-warm air contacts the cold surface of a transparent substrate, i.e. eyewear lenses, making the observed image blurred and hazy. To protect from fogging, the lens inner surfaces are protected with Anti-Fog coatings, which render them hydrophilic and induce water vapor condensation as a smooth, thin and invisible film, which uniformly flows down on the lens as the condensation progresses. Coatings differ in protection level, aging kinetics, and susceptibility to contamination. Some perform acceptably in limited conditions, beyond which the condensing water film becomes unstable, nonuniform, and scatters light or shows refractory distortions, both affecting the observed image. Quantifying the performance of Anti-Fog coated lenses is difficult: they may not show classical fogging and the existing testing methods, based on fog detection, are therefore inapplicable. The presented method for evaluating and quantifying AF properties is based on characterizing light scattering on lenses exposed to controlled humidity and temperature. Changes in intensity of laser light scattered at low angles (1, 2 4 and 8 degrees), observed during condensation of water on lenses, provide information on the swelling of Anti-Fog coatings, formation of uniform water film, going from an unstable to a steady state, and on the coalescence of discontinuous films. Real time observations/measurements allow for better understanding of factors controlling fogging and fog preventing phenomena. The method is especially useful in the development of new coatings for military-, sport-, and industrial protective eyewear as well as for medical and automotive applications. It allows for differentiating between coatings showing acceptable, good, and excellent performance.

  19. An evaluation of the whole effluent toxicity test method

    SciTech Connect

    Osteen, D.V.

    1999-12-17

    Whole effluent toxicity (WET) testing has become increasingly more important to the Environmental Protection Agency (EPA) and the States in the permitting of wastewater discharges from industry and municipalities. The primary purpose of the WET test is to protect aquatic life by predicting the effect of an effluent on the receiving stream. However, there are both scientific and regulatory concerns that using WET tests to regulate industrial effluents may result in either false positives and/or false negatives. In order to realistically predict the effect of an effluent on the receiving stream, the test should be as representative as possible of the conditions in the receiving stream. Studies (Rand and Petrocelli 1985) suggested several criteria for an ideal aquatic toxicity test organism, one of which is that the organism be indigenous to, or representative of, the ecosystem receiving the effluent. The other component needed in the development of a predictive test is the use of the receiving stream water or similar synthetic water as the control and dilution water in the test method. Use of an indigenous species and receiving water in the test should help reduce the variability in the method and allow the test to predict the effect of the effluent on the receiving stream. The experience with toxicity testing at the Savannah River Site (SRS) has yielded inconclusive data because of the inconsistency and unreliability of the results. The SRS contention is that the WET method in its present form does not adequately mimic actual biological/chemical conditions of the receiving streams and is neither reasonable nor accurate. This paper discusses the rationale for such a position by SRS on toxicity testing in terms of historical permitting requirements, outfall effluent test results, standard test method evaluation, scientific review of alternate test species, and concerns over the test method expressed by other organizations. This paper presents the Savannah River Site

  20. DEM generation from digital photographs using computer vision: Accuracy and application

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.

    2012-12-01

    Data for detailed digital elevation models (DEMs) are usually collected by expensive laser-based techniques, or by photogrammetric methods that require expertise and specialist software. However, recent advances in computer vision research now permit 3D models to be automatically derived from unordered collections of photographs, and offer the potential for significantly cheaper and quicker DEM production. Here, we review the advantages and limitations of this approach and, using imagery of the summit craters of Piton de la Fournaise, compare the precisions obtained with those from formal close range photogrammetry. The surface reconstruction process is based on a combination of structure-from-motion and multi-view stereo algorithms (SfM-MVS). Using multiple photographs of a scene taken from different positions with a consumer-grade camera, dense point clouds (millions of points) can be derived. Processing is carried out by automated 'reconstruction pipeline' software downloadable from the internet. Unlike traditional photogrammetric approaches, the initial reconstruction process does not require the identification of any control points or initial camera calibration and is carried out with little or no operator intervention. However, such reconstructions are initially un-scaled and un-oriented so additional software has been developed to permit georeferencing. Although this step requires the presence of some control points or features within the scene, it does not have the relatively strict image acquisition and control requirements of traditional photogrammetry. For accuracy, and to allow error analysis, georeferencing observations are made within the image set, rather than requiring feature matching within the point cloud. Application of SfM-MVS is demonstrated using images taken from a microlight aircraft over the summit of Piton de la Fournaise volcano (courtesy of B. van Wyk de Vries). 133 images, collected with a Canon EOS D60 and 20 mm fixed focus lens, were
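
    The georeferencing step described above (recovering scale, rotation, and translation from a few control points) can be sketched with a standard least-squares similarity transform; the authors' own software and observation weighting are not reproduced here.

        import numpy as np

        def similarity_transform(model_pts, world_pts):
            """Least-squares similarity (scale, rotation, translation) mapping model-frame
            points onto world-frame control points; model_pts and world_pts are (n, 3)
            arrays of the same n >= 3 corresponding points. Returns (s, R, t) such that
            world ~= s * R @ model + t (Umeyama solution, reflection-safe)."""
            model_pts = np.asarray(model_pts, dtype=float)
            world_pts = np.asarray(world_pts, dtype=float)
            mu_m, mu_w = model_pts.mean(axis=0), world_pts.mean(axis=0)
            A, B = model_pts - mu_m, world_pts - mu_w
            U, S, Vt = np.linalg.svd(B.T @ A)
            D = np.eye(3)
            D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against a reflection
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
            t = mu_w - s * R @ mu_m
            return s, R, t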

  1. Fish Passage through Hydropower Turbines: Simulating Blade Strike using the Discrete Element Method

    SciTech Connect

    Richmond, Marshall C.; Romero Gomez, Pedro DJ

    2014-12-08

    Among the hazardous hydraulic conditions affecting anadromous and resident fish during their passage through turbine flows, two are believed to cause considerable injury and mortality: collision with moving blades and decompression. Several methods are currently available to evaluate these stressors in installed turbines, i.e. using live fish or autonomous sensor devices, and in reduced-scale physical models, i.e. registering collisions from plastic beads. However, a priori estimates with computational modeling approaches applied early in the process of turbine design can facilitate the development of fish-friendly turbines. In the present study, we evaluated the frequency of blade strike and the nadir pressure environment by modeling potential fish trajectories with the Discrete Element Method (DEM) applied to fish-like composite particles. In the DEM approach, particles are subjected to realistic hydraulic conditions simulated with computational fluid dynamics (CFD), and particle-structure interactions—representing fish collisions with turbine blades—are explicitly recorded and accounted for in the calculation of particle trajectories. We conducted transient CFD simulations by setting the runner in motion and allowing for better turbulence resolution, a modeling improvement over the conventional practice of simulating the system in steady state, which was also done here. While both schemes yielded comparable bulk hydraulic performance, transient conditions exhibited a visual improvement in describing flow variability. We released streamtraces (steady flow solution) and DEM particles (transient solution) at the same location from which sensor fish (SF) have been released in field studies of the modeled turbine unit. The streamtrace-based results showed a better agreement with SF data than the DEM-based nadir pressures did because the former accounted for the turbulent dispersion at the intake but the latter did not. However, the DEM-based strike frequency is more

  2. Open-source MFIX-DEM software for gas-solids flows: Part 1 - Verification studies

    SciTech Connect

    Garg, Rahul; Galvin, Janine; Li, Tingwen; Pannala, Sreekanth

    2012-04-01

    With rapid advancements in computer hardware, it is now possible to perform large simulations of granular flows using the Discrete Element Method (DEM). As a result, solids are increasingly treated in a discrete Lagrangian fashion in the gas–solids flow community. In this paper, the open-source MFIX-DEM software is described that can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles. This method is referred to as the continuum discrete method (CDM) to clearly make a distinction between the ambiguity of using a Lagrangian or Eulerian reference for either continuum or discrete formulations. This freely available CDM code for gas–solids flows can accelerate the research in computational gas–solids flows and establish a baseline that can lead to better closures for the continuum modeling (or traditionally referred to as two fluid model) of gas–solids flows. In this paper, a series of verification cases is employed which tests the different aspects of the code in a systematic fashion by exploring specific physics in gas–solids flows before exercising the fully coupled solution on simple canonical problems. It is critical to have an extensively verified code as the physics is complex with highly-nonlinear coupling, and it is difficult to ascertain the accuracy of the results without rigorous verification. These series of verification tests set the stage not only for rigorous validation studies (performed in part II of this paper) but also serve as a procedure for testing any new developments that couple continuum and discrete formulations for gas–solids flows.
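
    In the spirit of the single-physics verification cases described above, the toy check below integrates one particle settling under Stokes drag and compares the result against the analytical terminal velocity; it is illustrative only and is not one of the MFIX-DEM verification cases.

        import numpy as np

        def settling_check(d_p=1.0e-4, rho_p=2500.0, rho_g=1.2, mu_g=1.8e-5,
                           dt=1.0e-4, t_end=0.5):
            """Explicit-Euler integration of a single sphere settling from rest in
            quiescent gas under Stokes drag; the final velocity should approach the
            analytical terminal value v_t = (rho_p - rho_g) * g * d_p**2 / (18 * mu_g)."""
            g = 9.81
            m = rho_p * np.pi * d_p**3 / 6.0
            weight = (rho_p - rho_g) * np.pi * d_p**3 / 6.0 * g   # buoyant weight
            v = 0.0
            for _ in range(int(t_end / dt)):
                drag = 3.0 * np.pi * mu_g * d_p * v               # Stokes drag on a sphere
                v += dt * (weight - drag) / m
            v_terminal = (rho_p - rho_g) * g * d_p**2 / (18.0 * mu_g)
            return v, v_terminal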

  5. Development of methods for evaluating toxicity to freshwater ecosystems.

    PubMed

    Girling, A E; Pascoe, D; Janssen, C R; Peither, A; Wenzel, A; Schäfer, H; Neumeier, B; Mitchell, G C; Taylor, E J; Maund, S J; Lay, J P; Jüttner, I; Crossland, N O; Stephenson, R R; Persoone, G

    2000-02-01

    This article presents a summary of a collaborative research program involving five European research groups that was partly funded by the European Commission under its Environmental Research Program. The objective of the program was to develop aquatic toxicity tests that could be used to obtain data for inclusion at Level 2 of the Risk Evaluation Scheme for the Notification of Substances, as required by the 7th Amendment to EC Directive 79/831/EEC. Currently, only a very limited number of test methods have been described that can be used for this purpose, and these are based on an even smaller number of test species. Tests based upon algae (Chlamydomonas reinhardi, Scenedesmus subspicatus, and Euglena gracilis), protozoa (Tetrahymena pyriformis), rotifera (Brachionus calyciflorus), crustacea (Gammarus pulex), and diptera (Chironomus riparius) were developed. The tests encompassed a range of end points and were evaluated against four reference chemicals: lindane, 3,4-dichloroaniline (DCA), atrazine, and copper. The capacity of the tests to identify concentrations that are chronically toxic in the field was addressed by comparing the effects threshold concentrations determined in the laboratory tests with those determined for similar and/or related species and end points in stream and pond mesocosm studies. The lowest no-observed-effect concentrations (NOEC), EC(x), or LC(x) values obtained for lindane, atrazine, and copper were comparable with the lowest values obtained in the mesocosms. The lowest chronic NOEC determined for DCA using the laboratory tests was approximately 200 times higher than the lowest NOEC in the mesocosms. PMID:10648133

  6. A consensus rating method for small virus-retentive filters. II. Method evaluation.

    PubMed

    Brorson, Kurt; Lute, Scott; Haque, Mohammed; Martin, Jerold; Sato, Terry; Moroe, Ichiro; Morgan, Michael; Krishnan, Mani; Campbell, Jennifer; Genest, Paul; Parrella, Joseph; Dolan, Sherri; Martin, Susan; Tarrach, Klaus; Levy, Richard; Aranha, Hazel; Bailey, Mark; Bender, Jean; Carter, Jeff; Chen, Qi; Dowd, Chris; Jani, Raj; Jen, David; Kidd, Stanley; Meltzer, Ted; Remington, Kathryn; Rice, Iris; Romero, Cynthia; Sato, Terry; Jornitz, Maik; Sekura, Carol Marcus; Sofer, Gail; Specht, Rachel; Wojciechowski, Peter

    2008-01-01

    Virus filters are membrane-based devices that remove large viruses (e.g., retroviruses) and/or small viruses (e.g., parvoviruses) from products by a size exclusion mechanism. In 2002, the Parenteral Drug Association (PDA) organized the PDA Virus Filter Task Force to develop a common nomenclature and a standardized test method for classifying and identifying viral-retentive filters. A test method based on bacteriophage PP7 retention was chosen based on developmental studies. The detailed final consensus filter method is published in the 2008 update of PDA Technical Report 41: Virus Filtration. Here, we evaluate the method and find it to be acceptable for testing scaled-down models of small virus-retentive filters from four manufacturers. Three consecutive lots of five filter types were tested (Pegasus SV4, Viresolve NFP, Planova 20N and 15N, Virosart CPV). Each passed the criteria specified in the test method (i.e., >4 log10 PP7 retention, >90% intravenous immunoglobulin passage, and passing integrity/installation testing) and was classified as PP7-LRV4.
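
    The LRV classification mentioned above reduces to simple arithmetic; the sketch below computes a log10 reduction value from assumed, illustrative challenge and filtrate titers, not from the study's data.

```python
# Arithmetic sketch of the log10 reduction value (LRV) used in the PP7-LRV4
# classification: LRV = log10(challenge titer / filtrate titer). The titers
# below are illustrative assumptions, not data from the study.
import math

challenge_titer = 1.0e7   # PFU/mL upstream of the filter (assumed)
filtrate_titer = 5.0e2    # PFU/mL downstream of the filter (assumed)

lrv = math.log10(challenge_titer / filtrate_titer)
verdict = ">=4 log10 retention (would classify as PP7-LRV4)" if lrv >= 4 else "<4 log10 retention"
print(f"LRV = {lrv:.2f}: {verdict}")
```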

  7. Evaluation of aerial survey methods for Dall's sheep

    USGS Publications Warehouse

    Udevitz, M.S.; Shults, B.S.; Adams, L.G.; Kleckner, C.

    2006-01-01

    Most Dall's sheep (Ovis dalli dalli) population-monitoring efforts use intensive aerial surveys with no attempt to estimate variance or adjust for potential sightability bias. We used radiocollared sheep to assess factors that could affect sightability of Dall's sheep in standard fixed-wing and helicopter surveys and to evaluate feasibility of methods that might account for sightability bias. Work was conducted in conjunction with annual aerial surveys of Dall's sheep in the western Baird Mountains, Alaska, USA, in 2000-2003. Overall sightability was relatively high compared with other aerial wildlife surveys, with 88% of the available, marked sheep detected in our fixed-wing surveys. Total counts from helicopter surveys were not consistently larger than counts from fixed-wing surveys of the same units, and detection probabilities did not differ for the 2 aircraft types. Our results suggest that total counts from helicopter surveys cannot be used to obtain reliable estimates of detection probabilities for fixed-wing surveys. Groups containing radiocollared sheep often changed in size and composition before they could be observed by a second crew in units that were double-surveyed. Double-observer methods that require determination of which groups were detected by each observer will be infeasible unless survey procedures can be modified so that groups remain more stable between observations. Mean group sizes increased during our study period, and our logistic regression sightability model indicated that detection probabilities increased with group size. Mark-resight estimates of annual population sizes were similar to sightability-model estimates, and confidence intervals overlapped broadly. We recommend the sightability-model approach as the most effective and feasible of the alternatives we considered for monitoring Dall's sheep populations.
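
    A minimal sketch of how a sightability-model correction of the kind recommended above can work: detection probability is fit against group size with logistic regression on marked-group data, and each observed group is then inflated by the inverse of its estimated detection probability. The simulated data and coefficients are assumptions, not the authors' fitted model; scikit-learn is an assumed dependency.

```python
# Illustrative sketch of a sightability-model correction (not the authors'
# fitted model): fit detection probability vs. group size by logistic
# regression on marked (radiocollared) groups, then weight each observed
# group by 1/p. Uses scikit-learn (assumed available); data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "calibration" data from marked groups: group size and whether the group was seen
group_size = rng.integers(1, 25, size=200)
true_p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.25 * group_size)))   # assumed relationship
detected = rng.random(200) < true_p

model = LogisticRegression().fit(group_size.reshape(-1, 1), detected)

# survey data: sizes of groups actually counted during a flight (assumed)
observed_groups = rng.integers(1, 25, size=60)
p_hat = model.predict_proba(observed_groups.reshape(-1, 1))[:, 1]

raw_count = observed_groups.sum()
corrected = (observed_groups / p_hat).sum()   # Horvitz-Thompson-style estimate
print(f"raw count: {raw_count}, sightability-corrected estimate: {corrected:.0f}")
```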

  8. Further Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2012-01-01

    The paper will present experimental results from two recent icing tests in the NASA Glenn Icing Research Tunnel (IRT). The first test, conducted in February 2009, was to evaluate the currently recommended fixed-wing scaling methods on representative rotor airfoils at fixed angle of attack. For this test, scaling was based on the modified Ruff method with scale velocity determined by constant Weber number and water film Weber number. Models were un-swept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model a chord of 35.6 cm. Reference tests were conducted at a velocity of 100 kt (52 m/s), a droplet median volume diameter (MVD) of 195 µm, and stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 5° and 7°. It was shown that good ice shape scaling was achieved with constant Weber number for NACA 0012 airfoils at angles of attack up to 7°. The second test, completed in May 2010, was primarily focused on obtaining transient and steady-state iced aerodynamics, ice accretion and shedding, and thermal icing validation data from an oscillating airfoil section over selected ranges of icing conditions and blade assembly operational configurations. The model used was a 38.1-cm chord Sikorsky SC2110 airfoil section installed on an airfoil test apparatus with oscillating capability in the IRT. For two test conditions, size and condition scaling were performed. It was shown that good ice shape scaling was achieved for the SC2110 airfoil in dynamic pitching motion. The data obtained will be applicable to future main rotor blade and tail rotor blade applications.
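
    For orientation, matching a Weber number between reference and scale models fixes the scale velocity, as in the sketch below. The modified Ruff method constrains additional similarity parameters, so this is only an illustration; the choice of model chord as the characteristic length and the fluid properties are assumptions.

```python
# Illustrative sketch only: choosing a scale-test velocity so that a Weber
# number We = rho * V^2 * L / sigma matches the reference test. The modified
# Ruff method constrains more similarity parameters than this, and the choice
# of model chord as the characteristic length is an assumption made here.
RHO_WATER = 1000.0   # kg/m^3
SIGMA = 0.072        # N/m, water surface tension

def weber(v, length, rho=RHO_WATER, sigma=SIGMA):
    return rho * v ** 2 * length / sigma

V_REF = 51.4                   # m/s (~100 kt), reference-test velocity
C_REF, C_SCALE = 0.914, 0.356  # m, reference and scale chords

# matching We_ref = We_scale with the same fluid gives V_scale = V_ref * sqrt(C_ref / C_scale)
V_SCALE = V_REF * (C_REF / C_SCALE) ** 0.5

print(f"We_ref   = {weber(V_REF, C_REF):.3e}")
print(f"We_scale = {weber(V_SCALE, C_SCALE):.3e}")
print(f"scale velocity ~ {V_SCALE:.1f} m/s")
```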

  9. Near-automatic generation of lava dome DEMs from photos

    NASA Astrophysics Data System (ADS)

    James, M. R.; Varley, N.

    2012-04-01

    Acquiring accurate digital elevation models (DEMs) of growing lava domes is critical for hazard assessment. However, most techniques require expertise and time (e.g. photogrammetry) or expensive equipment (e.g. laser scanning and radar-based techniques). Here, we use a photo-based approach developed within the computer vision community that offers the potential for near-automatic DEM construction using a consumer-grade digital camera and freely available software. The technique is based on a combination of structure-from-motion and multi-view stereo algorithms (SfM-MVS) and can generate dense 3D point clouds (millions of points) from multiple photographs of a scene taken from different positions. Processing is carried out by automated 'reconstruction pipeline' software downloadable from the internet, e.g. http://blog.neonascent.net/archives/bundler-photogrammetry-package/. Such reconstructions are initially un-scaled and un-oriented, so additional software (http://www.lancs.ac.uk/staff/jamesm/software/sfm_georef.htm) has been developed to permit scaling or full georeferencing. Although this step requires the presence of some control points or knowledge of scale within the scene, it does not have the relatively strict image acquisition and control requirements of traditional photogrammetry. For accuracy and to allow error analysis, georeferencing observations are made within the image set, rather than requiring feature matching within the point cloud. Here we demonstrate the results of using the technique to derive 3D models of the Volcán de Colima lava dome. Five image sets have been collected by different people over a period of 12 months during overflights in a light aircraft. Although the resulting imagery is of variable quality for 3D reconstruction, useful data can be extracted from each set. Scaling and georeferencing are carried out using a combination of ortho-imagery (downloaded from Bing) and a few GPS points. Overall precisions are ~1 m and DEM qualities
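
    A minimal sketch of the scaling/georeferencing step: a similarity transform (scale, rotation, translation) is estimated from a few control points and applied to the whole point cloud. This is a generic Umeyama/Procrustes fit, not the sfm_georef software referenced above, and the control-point coordinates are made-up assumptions.

```python
# Illustrative sketch of the scaling/georeferencing step (a generic
# Umeyama/Procrustes fit, not the sfm_georef tool): estimate a similarity
# transform (scale s, rotation R, translation t) from a few control points,
# then apply it to the whole point cloud. Control points are made-up assumptions.
import numpy as np

def fit_similarity(src, dst):
    """Least-squares s, R, t such that dst ~ s * R @ src + t (src, dst: N x 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# assumed control points: model-space coordinates vs. surveyed (GPS/ortho) coordinates
src = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0], [0.3, 1.5, 0.2], [1.0, 1.1, 0.6]])
angle, s_true = 0.3, 25.0
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = s_true * src @ R_true.T + np.array([645000.0, 2156000.0, 3850.0])

s, R, t = fit_similarity(src, dst)
cloud = np.random.default_rng(2).random((1000, 3))   # stand-in for the SfM-MVS point cloud
georef_cloud = s * cloud @ R.T + t                    # scaled and georeferenced cloud
print(f"recovered scale: {s:.2f} (true value {s_true})")
```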

  10. Ganymede crater dimensions from Galileo-based DEMs

    NASA Astrophysics Data System (ADS)

    Bray, V. J.; Schenk, P.; Melosh, H. J.; McEwen, A. S.; Morgan, J. V.; Collins, G. S.

    2010-12-01

    Images returned from the Voyager mission have allowed the analysis of crater morphology on the icy satellites and the construction of both diameter- and depth-related scaling laws. Higher-resolution Galileo data have since been used to update the diameter-related scaling trends, and also crater depths on the basis of shadow measurements. Our work adds to this wealth of data with new depth and slope information extracted from digital elevation models (DEMs) created from Galileo Solid State Imager (SSI) images, using the stereo scene-recognition algorithm developed by Schenk et al. (2004), and from photoclinometry incorporating the combined lunar-Lambert photometric function as defined by McEwen et al. (1991). We profiled ~80 craters, ranging from 4 km to 100 km in diameter. Once each DEM of a crater was obtained, spurious patterns or shape distortions created by radiation noise or data-compression artifacts were removed through standard image noise filters and manually by visual inspection of the DEM and original image(s). Terrain type was noted during profile collection so that any differences in crater trends on bright and dark terrains could be documented. Up to 16 cross-sectional profiles were taken across each crater so that the natural variation of crater dimensions with azimuth could be included in the measurement error. This approach carries a systematic error on depth measurements of ~5%, an improvement over Voyager depth uncertainties of 10-30%. The crater diameter, depth, wall slope, rim height, central uplift height, diameter and slope, and central pit depth, diameter and slope were measured from each profile. Our measurements of feature diameters and crater depths are consistent with previously published results based on measurements from images and shadow lengths. We will present example topographic profiles and scaling trends, specifically highlighting the new depth and slope information for different crater types on Ganymede.
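
    A minimal sketch of profile extraction and depth measurement from a gridded DEM: several cross-sections are sampled through the crater centre at different azimuths and rim-to-floor depth is measured on each, so the azimuthal scatter can feed the measurement error. The synthetic bowl-shaped surface and the SciPy-based sampling are assumptions, not Galileo SSI data or the authors' pipeline.

```python
# Illustrative sketch (synthetic surface, not Galileo SSI data): sample several
# cross-sectional profiles through a crater centre in a gridded DEM and measure
# rim-to-floor depth on each, so the azimuthal scatter can feed the measurement
# error. Assumes SciPy is available for bilinear profile sampling.
import numpy as np
from scipy.ndimage import map_coordinates

GRID = 1.0   # km per pixel (assumed)
n = 201
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(xx - n // 2, yy - n // 2) * GRID
dem = np.where(r < 20, -2.0 + 0.005 * r ** 2,        # parabolic crater floor (km)
               0.4 * np.exp(-(r - 20) / 5))          # decaying rim flank
dem = dem + 0.02 * np.random.default_rng(3).standard_normal(dem.shape)   # noise

centre, radius_px, n_profiles = (n // 2, n // 2), 40, 16
depths = []
for az in np.linspace(0, np.pi, n_profiles, endpoint=False):
    s = np.linspace(-radius_px, radius_px, 2 * radius_px + 1)
    rows = centre[0] + s * np.sin(az)
    cols = centre[1] + s * np.cos(az)
    profile = map_coordinates(dem, [rows, cols], order=1)   # bilinear sampling
    depths.append(profile.max() - profile.min())            # rim minus floor (km)

depths = np.array(depths)
print(f"depth = {depths.mean():.2f} +/- {depths.std():.2f} km over {n_profiles} azimuths")
```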

  11. Evaluating Diversity Metrics: A Critique of the Equity Index Method

    ERIC Educational Resources Information Center

    Royal, Kenneth D.; Flammer, Keven

    2015-01-01

    Background: Evaluating diversity, inclusivity, and equity remains both a prevalent topic in education and a difficult challenge for most evaluators. Traditional metrics used to evaluate these constructs include questionnaires, focus groups, and anonymous comment solicitations. While each of these approaches offers value, they also possess a number…

  12. Quality of DEMs derived from Kite Aerial Photogrammetry System: a case study of Dutch coastal environments.

    NASA Astrophysics Data System (ADS)

    Paron, Paolo; Smith, Mike J.; Anders, Niels; Meesuk, Vorawit

    2014-05-01

    Coastal protection is one of the main challenges for the Netherlands, where a large proportion of anthropogenic activity, both residential and economic, is located below sea level. The Dutch government is implementing an innovative method of coastal replenishment that uses natural waves and winds to relocate sand from one part of the coast to another. This requires close monitoring of the spatio-temporal evolution of beaches in order to correctly model the future direction and amount of sand movement. To do so, on the onshore beach, we tested a Kite Aerial Photography System for monitoring beach dynamics at the Zandmotor (http://www.dezandmotor.nl/en-GB/). The equipment used for data collection was a commercial DSLR camera (Nikon D7000 with a 20 mm lens), a gyro-levelled rig, a Sutton Flowform 16 kite, and a Leica GNSS Viva GS10 with GSM connection to the Dutch geodetic network. We flew using a 115 m line with an average inclination of 40 to 45°; this gave a camera vertical distance of ~80 m and a pixel size of ~20 mm. The methodology follows that of Smith et al. (2009) and of Paron & Smith (2013), applied to a highly dynamic environment with low texture and small relief. Here we present a comparison of the quality of the digital elevation models (DEMs) generated from the same dataset using two different systems: Structure from Motion (SfM) using Agisoft Photoscan Pro and traditional photogrammetry using Leica Photogrammetry Suite. In addition, the outputs from the two data processing methods are presented, including both an image mosaic and a DEM, highlighting the pros and cons of both methods. References: Smith, M. J. et al. 2009. High spatial resolution data acquisition for the geosciences: kite aerial photography. ESPL, 34(1), 155-161. Paron, P., Smith, M.J. 2013. Kite aerial photogrammetry system for monitoring coastal change in the Netherlands. 8th IAG International Conference on Geomorphology, Paris, August.
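
    A minimal sketch of comparing two co-registered DEM rasters of the same beach, reporting the mean bias, RMSE, and a robust NMAD spread of the elevation differences. The two synthetic grids are assumptions standing in for the SfM and photogrammetric outputs.

```python
# Illustrative sketch of comparing two co-registered DEM rasters of the same
# beach (standing in for the SfM and photogrammetric outputs): DEM of
# difference, mean bias, RMSE, and a robust NMAD spread. Both grids below are
# synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
truth = np.cumsum(rng.standard_normal((300, 300)) * 0.002, axis=1)   # gentle beach relief (m)
dem_sfm = truth + rng.normal(0.01, 0.03, truth.shape)     # assumed SfM noise and small bias
dem_photo = truth + rng.normal(0.00, 0.02, truth.shape)   # assumed photogrammetric noise

diff = dem_sfm - dem_photo                                 # DEM of difference (m)
bias = diff.mean()
rmse = np.sqrt((diff ** 2).mean())
nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust spread estimate

print(f"bias {bias * 100:.1f} cm, RMSE {rmse * 100:.1f} cm, NMAD {nmad * 100:.1f} cm")
```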

  13. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D in order to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of a cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees a flow direction in each cell, D8 search for flow directions, computation of discharges, and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be applied not only to synthetic topographies but also to real digital elevation models (DEMs). In the present work we report our experience with applying the model to the 30-m resolution ASTER GDEM of the Pamir orogen, in particular to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/yr. Once the model results are consistent with the measured data, the calibration of the hillslope processes follows. For a given critical slope, the diffusivity could be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems that do not appear when working with synthetic landscapes. One of them is noise in the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
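
    A minimal sketch of one explicit time step of a grid-based scheme of this kind: D8 flow routing, drainage-area accumulation, detachment-limited stream-power incision, and linear hillslope diffusion. The parameter values and the synthetic initial surface are assumptions; the model described above additionally handles pit filling, discharge computation, and non-linear diffusion.

```python
# Illustrative sketch of one explicit time step of a grid-based scheme of this
# kind: D8 flow routing, drainage-area accumulation, detachment-limited
# stream-power incision E = K * A**m * S**n, and linear hillslope diffusion.
# Parameters and the synthetic initial surface are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n, dx = 64, 30.0                                                # grid size and cell size (m)
z = rng.random((n, n)) + np.linspace(200.0, 0.0, n)[:, None]    # tilted, noisy surface (m)
K, M, N_EXP, KAPPA, DT = 1e-5, 0.5, 1.0, 0.01, 100.0            # assumed parameters (yr units)

offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

# D8: steepest-descent receiver and slope for every interior cell
receiver = np.full((n, n, 2), -1)
slope = np.zeros((n, n))
for i in range(1, n - 1):
    for j in range(1, n - 1):
        best = 0.0
        for di, dj in offsets:
            s = (z[i, j] - z[i + di, j + dj]) / (dx * np.hypot(di, dj))
            if s > best:
                best = s
                receiver[i, j] = (i + di, j + dj)
        slope[i, j] = best

# drainage area: pass each cell's area to its D8 receiver, highest cells first
area = np.full((n, n), dx * dx)
for flat in np.argsort(-z, axis=None):
    i, j = np.unravel_index(flat, z.shape)
    ri, rj = receiver[i, j]
    if ri >= 0:
        area[ri, rj] += area[i, j]

# stream-power incision and linear hillslope diffusion (explicit update)
incision = K * area ** M * slope ** N_EXP
lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) + np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z) / dx ** 2
z = z - incision * DT + KAPPA * lap * DT

print(f"max drainage area: {area.max() / 1e6:.2f} km^2")
print(f"max incision this step: {(incision * DT).max() * 1000:.0f} mm")
```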

  14. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing the initial accuracies with those achieved in two further tests: first, use of a SPOT image from the same year but for a different area within the Baichi catchment; second, use of SPOT images from multiple years for the same region. The integration of common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation, this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.
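
    A minimal sketch of an object-based agreement check between extracted and expert-mapped landslide polygons: per-object intersection-over-union plus area-based user's and producer's accuracy. Shapely is an assumed dependency, and the toy polygons are illustrative assumptions, not the study's mapped objects.

```python
# Illustrative sketch of an object-based agreement check between extracted and
# expert-mapped landslide polygons: per-object intersection-over-union plus
# area-based user's and producer's accuracy. Uses Shapely (assumed available);
# the toy polygons are assumptions, not the study's mapped objects.
from shapely.geometry import Polygon
from shapely.ops import unary_union

extracted = [Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
             Polygon([(10, 10), (13, 10), (13, 12), (10, 12)])]   # classified objects
reference = [Polygon([(1, 0), (5, 0), (5, 3), (1, 3)]),
             Polygon([(20, 20), (22, 20), (22, 22), (20, 22)])]   # expert polygons

for k, ext in enumerate(extracted):
    ious = [ext.intersection(ref).area / ext.union(ref).area for ref in reference]
    print(f"extracted object {k}: best IoU against reference = {max(ious):.2f}")

ext_all, ref_all = unary_union(extracted), unary_union(reference)
overlap = ext_all.intersection(ref_all).area
print(f"user's accuracy (area):     {overlap / ext_all.area:.2f}")
print(f"producer's accuracy (area): {overlap / ref_all.area:.2f}")
```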

  15. Mass spectrometry-based carboxyl footprinting of proteins: Method evaluation

    SciTech Connect

    Zhang, Hao; Wen, Jianzhong; Huang, Richard Y-C.; Blankenship, Robert E.; Gross, Michael L.

    2012-02-01

    Protein structure determines function in biology, and a variety of approaches have been employed to obtain structural information about proteins. Mass spectrometry-based protein footprinting is one fast-growing approach. One labeling-based footprinting approach is the use of a water-soluble carbodiimide, 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) and glycine ethyl ester (GEE) to modify solvent-accessible carboxyl groups on glutamate (E) and aspartate (D). This paper describes method development of carboxyl-group modification in protein footprinting. The modification protocol was evaluated by using the protein calmodulin as a model. Because carboxyl-group modification is a slow reaction relative to protein folding and unfolding, there is an issue that modifications at certain sites may induce protein unfolding and lead to additional modification at sites that are not solvent-accessible in the wild-type protein. We investigated this possibility by using hydrogen-deuterium amide exchange (H/DX). The study demonstrated that application of carboxyl group modification in probing conformational changes in calmodulin induced by Ca2+ binding provides useful information that is not compromised by modification-induced protein unfolding.

  16. Consideration of the production methods and safety evaluation of cytokines.

    PubMed

    Liu, D T

    1988-01-01

    Cytokines are natural endogenous substances whose biological effects in humans are little known when given in therapeutic rather than physiologic doses. Yet, there is intense interest in seeking their possible clinical use. While E. coli are effective in making "simple proteins" with few disulfide bonds, mammalian cells are becoming more generally used for the production of "complex proteins" with multiple disulfide bonds and glycoproteins. There appears to be much less concern about the safety of possibly oncogenic residual DNA from transformed cell lines, but viral contamination of products continues to be an active concern. Both physicochemical and biological methods are necessary to establish the identity, purity and potency of biological drugs. For proteins to manifest their proper biological and therapeutic effects in humans, their correct conformation must be maintained throughout production, purification and formulation. Regulating novel biological drugs such as the cytokines might raise new scientific issues that are not currently apparent, but the basic principles involved will be consistent with those used to evaluate other biologics, e.g., sound scientific principles, flexibility, case-by-case approach, good common sense and risk vs benefit assessment.

  17. Evaluation of DNA extraction methods for freshwater eukaryotic microalgae.

    PubMed

    Eland, Lucy E; Davenport, Russell; Mota, Cesar R

    2012-10-15

    The use of molecular methods to investigate microalgal communities of natural and engineered freshwater resources is in its infancy, with the majority of previous studies carried out by microscopy. Inefficient or differential DNA extraction of microalgal community members can lead to bias in downstream community analysis. Three commercially available DNA extraction kits have been tested on a range of pure-culture freshwater algal species with diverse cell walls and on mixed algal cultures taken from eutrophic waste stabilization ponds (WSP). DNA yield and quality were evaluated, along with DNA suitability for amplification of 18S rRNA gene fragments by polymerase chain reaction (PCR). The Qiagen DNeasy® Blood and Tissue kit (QBT) was found to give the highest DNA yields and quality. Denaturing gradient gel electrophoresis (DGGE) was used to assess the diversity of the communities from which DNA was extracted. No significant differences were found among kits when assessing diversity. QBT is recommended for use with WSP samples, a conclusion confirmed by further testing on communities from two tropical WSP systems. The fixation of microalgal samples with ethanol prior to DNA extraction was found to reduce yields as well as diversity and is not recommended.

  18. Efficacy methods to evaluate health communication and marketing campaigns.

    PubMed

    Evans, W Douglas; Uhrig, Jennifer; Davis, Kevin; McCormack, Lauren

    2009-06-01

    Communication and marketing are growing areas of health research, but relatively few rigorous e