Sample records for depth estimation methods

  1. Potential-scour assessments and estimates of scour depth using different techniques at selected bridge sites in Missouri

    USGS Publications Warehouse

    Huizinga, Richard J.; Rydlund, Jr., Paul H.

    2004-01-01

    The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and the Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad database to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
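
    The per-pixel update described above is, at its core, an inverse-variance (Kalman-style) fusion of depth hypotheses. A minimal Python sketch, assuming Gaussian estimates; the names and numbers are illustrative, not from the paper:

        def fuse_depth(d1, var1, d2, var2):
            """Fuse two depth hypotheses by inverse-variance weighting,
            mirroring the Kalman-like update described in the abstract."""
            k = var1 / (var1 + var2)             # gain: trust the lower-variance estimate more
            d = d1 + k * (d2 - d1)               # updated virtual depth
            var = (var1 * var2) / (var1 + var2)  # variance shrinks with every fusion
            return d, var

        # A second micro-image refines an earlier, noisier estimate.
        d, v = fuse_depth(2.10, 0.20, 1.95, 0.05)
        print(d, v)   # the fused depth lies nearer the more certain estimate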

  3. Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2013-01-01

    A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until the change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C.sub.D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
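
    The iteration in steps (e)-(f) can be sketched as a simple fixed-point loop. In the Python sketch below, the estimate_* relations are hypothetical placeholders standing in for the patent's calibration functions; only the control flow (C.sub.D = 1 on the first pass, iterate until the depth update falls below a set value) follows the abstract:

        import math

        # Hypothetical stand-in relations (NOT the patented calibration functions):
        # deeper anomalies peak later in the contrast evolution and appear smaller.
        def estimate_depth(peak_contrast, t_peak, C_D, alpha=1e-7):
            return C_D * math.sqrt(alpha * t_peak)            # diffusion-length-style scaling

        def estimate_diameter(peak_contrast, depth):
            return depth * (1.0 + peak_contrast)              # toy monotone relation

        def update_diameter_constant(diameter, depth):
            return 2.0 / (1.0 + math.exp(-diameter / depth))  # bounded, smooth refinement

        def characterize_anomaly(peak_contrast, t_peak, tol=1e-6, max_iter=50):
            """Steps (e)-(f) of the abstract: C_D = 1 on the first pass, then
            re-estimate depth and diameter until the depth change is below tol."""
            C_D, depth_prev = 1.0, None
            for _ in range(max_iter):
                depth = estimate_depth(peak_contrast, t_peak, C_D)
                diameter = estimate_diameter(peak_contrast, depth)
                if depth_prev is not None and abs(depth - depth_prev) < tol:
                    break
                C_D = update_diameter_constant(diameter, depth)
                depth_prev = depth
            return diameter, depth

        print(characterize_anomaly(peak_contrast=0.35, t_peak=2.4))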

  4. Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.

    2014-01-01

    Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting future changes. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists, and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results, with some exceptions. One method based on horizontal TKE produces deeper PBL depths in winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and over several regions of the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
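
    As an illustration of one of the definitions compared above, here is a minimal bulk-Richardson-number PBL depth diagnostic in Python. The critical value of 0.25 and the linear interpolation of the crossing are common choices, not necessarily the ones used in GEOS-5; the example profile is idealized:

        import numpy as np

        def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
            """Height where the bulk Richardson number first exceeds ri_crit.
            z: height above ground (m); theta_v: virtual potential temperature (K);
            u, v: wind components (m/s); index 0 is the lowest level."""
            g = 9.81
            shear2 = np.maximum(u**2 + v**2, 1e-6)           # avoid divide-by-zero
            ri_b = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * shear2)
            above = np.nonzero(ri_b > ri_crit)[0]
            if above.size == 0:
                return z[-1]                                 # PBL top not found in profile
            k = above[0]
            if k == 0:
                return z[0]
            f = (ri_crit - ri_b[k-1]) / (ri_b[k] - ri_b[k-1])
            return z[k-1] + f * (z[k] - z[k-1])              # interpolate the crossing

        # Idealized daytime profile: a mixed layer capped by a stable layer.
        z = np.array([10., 100., 300., 600., 1000., 1500.])
        theta_v = np.array([300.0, 300.1, 300.2, 300.4, 301.5, 303.0])
        u = np.array([2., 4., 6., 7., 8., 9.]); v = np.zeros(6)
        print(pbl_depth_bulk_richardson(z, theta_v, u, v))   # ~660 m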

  5. Sedimentary basins reconnaissance using the magnetic Tilt-Depth method

    USGS Publications Warehouse

    Salem, A.; Williams, S.; Samson, E.; Fairhead, D.; Ravat, D.; Blakely, R.J.

    2010-01-01

    We compute the depth to the top of magnetic basement using the Tilt-Depth method from the best available magnetic anomaly grids covering the continental USA and Australia. For the USA, the Tilt-Depth estimates were compared with sediment thicknesses based on drilling data and show a correlation of 0.86 between the datasets; if random data are used instead, the correlation drops to virtually zero. There is little to no lateral offset of the depth of basinal features, although there is a tendency for the Tilt-Depth results to be slightly shallower than the drill depths. We also applied the Tilt-Depth method to a local-scale, relatively high-resolution aeromagnetic survey over the Olympic Peninsula of Washington State. The Tilt-Depth method successfully identified a variety of important tectonic elements known from geological mapping. Of particular interest, the Tilt-Depth method illuminated deep (3 km) contacts within the non-magnetic sedimentary core of the Olympic Mountains, where magnetic anomalies are subdued and low in amplitude. For Australia, the Tilt-Depth estimates also give a good correlation with known areas of shallow basement and sedimentary basins. Our estimates of basement depth are not restricted to regional analysis but work equally well at the micro (basin) scale, with depth estimates agreeing well with drill hole and seismic data. We focus on the eastern Officer Basin as an example of basin-scale studies and find a good level of agreement with previously derived basin models. However, our study potentially reveals depocentres not previously mapped, owing to the sparse distribution of well data. This example thus shows the additional value of the method in geological interpretation. The success of this study suggests that the Tilt-Depth method is useful in estimating the depth to crystalline basement when aeromagnetic anomaly data of appropriate quality are used (i.e., line spacing on the order of, or less than, the expected depth to basement). The method is especially valuable as a reconnaissance tool in regions where drillhole or seismic information is scarce, lacking, or ambiguous.
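
    A minimal Python sketch of the tilt angle that underlies the Tilt-Depth method, assuming a regularly gridded total-field anomaly T; the vertical derivative is obtained via the standard |k| multiplication in the wavenumber domain for potential fields, and the depth rule for a vertical contact is noted at the end:

        import numpy as np

        def tilt_angle(T, dx, dy):
            """Tilt angle (radians) of a gridded magnetic anomaly T.
            Horizontal derivatives by finite differences; vertical derivative
            from the FFT relation dT/dz <-> |k| * F[T] for potential fields."""
            ny, nx = T.shape
            kx = 2*np.pi*np.fft.fftfreq(nx, d=dx)
            ky = 2*np.pi*np.fft.fftfreq(ny, d=dy)
            KX, KY = np.meshgrid(kx, ky)
            k = np.sqrt(KX**2 + KY**2)
            dT_dz = np.real(np.fft.ifft2(k * np.fft.fft2(T)))
            dT_dy, dT_dx = np.gradient(T, dy, dx)
            thdr = np.hypot(dT_dx, dT_dy)                # total horizontal derivative
            return np.arctan2(dT_dz, thdr)

        # Depth rule for a vertical contact: tan(tilt) = h/z, so the depth to the
        # contact equals half the horizontal distance between the -45 deg and
        # +45 deg tilt contours; the 0 deg contour marks the contact itself.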

  6. The Effect of Finite Thickness Extent on Estimating Depth to Basement from Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Blakely, R. J.; Salem, A.; Green, C. M.; Fairhead, D.; Ravat, D.

    2014-12-01

    Depth-to-basement estimation methods using various components of the spectral content of magnetic anomalies are in common use by geophysicists; examples are the Tilt-Depth and SPI methods. These methods use simple models having the base of the magnetic body at infinity. Recent publications have shown that this 'infinite depth' assumption causes underestimation of the depth to the top of sources, especially in areas where the bottom of the magnetic layer is shallow, as would occur in high heat-flow regions. This error has been demonstrated both in model studies and with real data having seismic or well control. To overcome the limitation of infinite depth, this contribution presents the mathematics for a finite-depth contact body in the Tilt-Depth and SPI methods and applies it to the central Red Sea, where the Curie isotherm and Moho are shallow. The difference in the depth estimation between the infinite and finite contacts in such a case is significant and can exceed 200%.

  7. Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin

    USGS Publications Warehouse

    Casto, Daniel W.

    2001-01-01

    Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.

  8. Estimating snow depth of alpine snowpack via airborne multifrequency passive microwave radiance observations: Colorado, USA

    NASA Astrophysics Data System (ADS)

    Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.

    2017-12-01

    This paper presents a newly proposed snow depth retrieval approach for deep mountain snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the proposed method utilizes single-flight observations together with snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth owing to their coarse resolution and computational cost. The approach consists of a particle filter that uses combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying the snow depth retrieval algorithm with a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model), and estimation methods (particle filter and Kalman filter). The prior RMSE values at forest-covered areas were reduced by 37-42% even in the presence of forest cover.
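
    A toy Python version of the particle-filter update: a stand-in brightness-temperature model replaces the Crocus-coupled radiative transfer actually used in the study, the four frequencies follow the abstract, and every other number is illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_tb_model(depth_m, freq_ghz):
            """Stand-in forward operator mapping snow depth to brightness
            temperature (K); the real method couples a radiative transfer
            model to the Crocus snowpack model."""
            return 273.0 - 0.8 * freq_ghz * np.log1p(depth_m)

        def particle_filter_depth(prior_mean, prior_sd, tb_obs, freqs, obs_sd=2.0, n=2000):
            particles = rng.normal(prior_mean, prior_sd, n).clip(min=0.0)
            log_w = np.zeros(n)
            for f, tb in zip(freqs, tb_obs):
                resid = tb - toy_tb_model(particles, f)
                log_w += -0.5 * (resid / obs_sd) ** 2    # Gaussian log-likelihood
            w = np.exp(log_w - log_w.max()); w /= w.sum()
            post = particles[rng.choice(n, size=n, p=w)]  # resample by weight
            return post.mean(), post.std()

        freqs = [10.7, 18.7, 37.0, 89.0]                  # GHz, as in the abstract
        truth = 1.8                                       # m, synthetic "true" depth
        tb_obs = [toy_tb_model(truth, f) + rng.normal(0, 2.0) for f in freqs]
        print(particle_filter_depth(1.0, 0.5, tb_obs, freqs))  # posterior pulls toward 1.8 m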

  9. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-05-18

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
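
    In its simplest one-dimensional form, an attenuation model of this kind reduces to inverting an exponential decay of count rate with burial depth. A Python sketch with an assumed effective attenuation coefficient; the values are illustrative (chosen to echo the abstract's count rates) and the paper's calibration is more elaborate:

        import math

        def depth_from_count_rate(c_obs, c_surface, mu_eff):
            """Invert the exponential attenuation law C(d) = C0 * exp(-mu*d).
            mu_eff: effective linear attenuation coefficient of the medium
            for the gamma line of interest, assumed known from calibration."""
            return math.log(c_surface / c_obs) / mu_eff

        # Illustrative numbers: Cs-137 662 keV gammas in dry sand with an
        # assumed mu_eff ~ 0.12 per cm; 100 cps at the surface, 14 cps observed.
        print(depth_from_count_rate(14.0, 100.0, 0.12))   # ~16.4 cm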

  10. Technique for estimating depth of floods in Tennessee

    USGS Publications Warehouse

    Gamble, C.R.

    1983-01-01

    Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
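
    Regional depth equations of this kind are typically power laws in drainage-basin size fitted in log space. A Python sketch with hypothetical data; the report's actual equations and coefficients differ by hydrologic area:

        import numpy as np

        def fit_power_law(area_sq_mi, depth_ft):
            """Fit depth = a * area**b by least squares in log space, the usual
            form of regional flood-depth equations based on drainage area."""
            b, log_a = np.polyfit(np.log(area_sq_mi), np.log(depth_ft), 1)
            return np.exp(log_a), b

        # Hypothetical gaging-station data for one hydrologic area (not the
        # report's values): drainage areas (sq mi) and 100-year flood depths (ft).
        area = np.array([12., 45., 88., 150., 320., 610.])
        depth = np.array([6.1, 8.9, 10.8, 12.7, 16.0, 19.4])
        a, b = fit_power_law(area, depth)
        print(f"depth ~= {a:.2f} * A^{b:.2f}")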

  11. 3D depth-to-basement and density contrast estimates using gravity and borehole data

    NASA Astrophysics Data System (ADS)

    Barbosa, V. C.; Martins, C. M.; Silva, J. B.

    2009-05-01

    We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack, assuming prior knowledge of the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we map a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D basement relief of the Almada Basin shows geologic structures that cannot easily be inferred just from inspection of the gravity anomaly. The estimated relief presents steep borders evidencing the presence of gravity faults, and we note the existence of three terraces separating two local subbasins. These geologic features are consistent with the basin's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and are important in understanding the basin evolution and in detecting structural oil traps.

  12. Combining binary decision tree and geostatistical methods to estimate snow distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, Benjamin; Elder, Kelly

    2000-01-01

    We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9-km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large-scale variations in snow depth, while the small-scale variations were modeled through kriging interpolation methods. The binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54-65% of the observed variance in the depth measurements. The tree-based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree-based modeled depths to produce a combined depth model. The combined depth estimates explained 60-85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow-covered area was determined from high-resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
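
    A compact Python sketch of the two-stage idea: a regression tree captures the terrain-driven trend and its residuals are interpolated spatially. Inverse-distance weighting stands in here for the kriging step, and the survey data are synthetic:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(1)

        # Hypothetical survey: positions, terrain attributes, measured snow depth.
        n = 200
        xy = rng.uniform(0, 1000, (n, 2))                    # position (m)
        X = np.column_stack([rng.uniform(2500, 3500, n),     # elevation (m)
                             rng.uniform(0, 40, n),          # slope (deg)
                             rng.uniform(50, 300, n)])       # net radiation (W/m2)
        depth = 0.002*X[:, 0] - 0.03*X[:, 1] - 0.004*X[:, 2] + rng.normal(0, 0.3, n)

        # Stage 1: the decision tree models the large-scale, physically driven part.
        tree = DecisionTreeRegressor(max_depth=4).fit(X, depth)
        resid = depth - tree.predict(X)

        # Stage 2: interpolate the residuals spatially. Inverse-distance weighting
        # is a simple stand-in for the kriging used in the paper.
        def idw(xy_obs, v_obs, xy_new, power=2.0):
            d = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-6) ** power
            return (w * v_obs).sum(axis=1) / w.sum(axis=1)

        xy_new = rng.uniform(0, 1000, (5, 2))
        X_new = np.column_stack([rng.uniform(2500, 3500, 5),
                                 rng.uniform(0, 40, 5),
                                 rng.uniform(50, 300, 5)])
        combined = tree.predict(X_new) + idw(xy, resid, xy_new)  # tree + residual field
        print(combined)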

  13. Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.

    PubMed

    Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves

    2016-09-01

    This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. Target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position (range and depth) are derived for a bilinear sound-speed profile, and the influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on experimental tank data.

  14. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote depth estimation methods for buried radioactive contamination are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple measurements obtained with a radiation detector at the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, experimental results show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.

  15. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote depth estimation methods for buried radioactive contamination are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple measurements obtained with a radiation detector at the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, experimental results show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  16. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. First, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance is calculated between the 83 feature points of the image to be estimated and the feature-point information of each cluster center before dimension reduction, and the category of the image is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimates at greatly reduced computational cost. Compared with the traditional traversal search estimation method, the proposed method's error rate is reduced by 0.49, and the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
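
    A Python sketch of the subset-selection pipeline using scikit-learn; the face descriptors are random stand-ins and the final per-subset depth estimation (the method of Kong D) is omitted. Because t-SNE has no out-of-sample transform, the probe is matched to each cluster's mean descriptor in the original feature space, consistent with the abstract's description:

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # Hypothetical stand-in for the training set: 300 faces, each described
        # by a 1x249 vector derived from its 3D feature points.
        faces = rng.normal(size=(300, 249))

        # Step 1: t-SNE reduces each descriptor from 1x249 to 1x2.
        emb = TSNE(n_components=2, random_state=0).fit_transform(faces)

        # Step 2: K-means partitions the training database into subsets.
        km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(emb)

        # Step 3: assign a probe face to the nearest cluster (distances computed
        # against each cluster's mean descriptor before dimension reduction) and
        # search only that subset, which is what cuts the lookup cost.
        probe = rng.normal(size=(1, 249))
        centres = np.vstack([faces[km.labels_ == c].mean(axis=0) for c in range(6)])
        subset = int(np.argmin(np.linalg.norm(centres - probe, axis=1)))
        candidates = np.nonzero(km.labels_ == subset)[0]
        print(f"search {candidates.size} of {faces.shape[0]} faces in subset {subset}")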

  17. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector

    PubMed Central

    2018-01-01

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644

  18. Impact of Planetary Boundary Layer Depth on Climatological Tracer Transport in the GEOS-5 AGCM

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2013-12-01

    Planetary boundary layer (PBL) processes have large implications for tropospheric tracer transport, since surface fluxes are diluted through vertical mixing over the depth of the PBL. However, no consensus on the definition of PBL depth currently exists, and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to diagnose PBL depth and produce climatologies that are evaluated here. All seven methods evaluate a single atmosphere, so differences are related solely to the definition chosen. PBL depths estimated using a Richardson number are shallower than those given by methods based on the scalar diffusivity during warm, moist conditions at midday, and collapse to lower values at night. In GEOS-5, the PBL depth is used in the estimation of the turbulent length scale and so impacts vertical mixing. Changing the method used to determine the PBL depth for this length scale thus changes the tracer transport. Using a bulk Richardson number method instead of a scalar diffusivity method produces changes in the quantity of Saharan dust lofted into the free troposphere and advected to North America, with more surface dust in North America during boreal summer and less in boreal winter. Greenhouse gases are also considerably affected: during boreal winter, changing the PBL depth definition produces carbon dioxide differences of nearly 5 ppm over Siberia and gradients of about 5 ppm over 1000 km in Europe, and PBL depth changes are responsible for surface carbon monoxide changes of 20 ppb or more over the biomass-burning regions of Africa.

  19. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges, using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply it in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that the rapid-estimation method generally yielded conservatively greater scour depths than the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine which bridge sites may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  20. Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances

    NASA Astrophysics Data System (ADS)

    Stroujkova, A.; Reiter, D. T.; Shumway, R. H.

    2006-12-01

    The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion-monitoring research problem. Depth phases (free-surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes, and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods yielded large standard errors.

  1. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.

  2. Stereoscopic perception of real depths at large distances.

    PubMed

    Palmisano, Stephen; Gillam, Barbara; Govan, Donovan G; Allison, Robert S; Harris, Julie M

    2010-06-01

    There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where stereopsis is frequently claimed, without evidence, to be non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.

  3. Spectral analysis of aeromagnetic profiles for depth estimation principles, software, and practical application

    USGS Publications Warehouse

    Sadek, H.S.; Rashad, S.M.; Blank, H.R.

    1984-01-01

    If proper account is taken of the constraints of the method, it is capable of providing depth estimates to within an accuracy of about 10 percent under suitable circumstances. The estimates are unaffected by source magnetization and are relatively insensitive to assumptions as to source shape or distribution. The validity of the method is demonstrated by analyses of synthetic profiles and profiles recorded over Harrat Rahat, Saudi Arabia, and Diyur, Egypt, where source depths have been proved by drilling.

  4. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light field cameras, such as the Lytro Illum; this Lytro depth estimate contains much correct depth information and can be used for higher quality estimation. In this paper, we present a simple and principled algorithm that computes dense depth estimates by combining the defocus, correspondence, and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field display.
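
    A minimal Python sketch of the two EPI cues on a single epipolar-plane image: shear the angular samples for each disparity hypothesis, then score defocus as the spatial gradient after angular integration and correspondence as the angular variance. Integer shears only; the paper's combination step and the Lytro input are omitted:

        import numpy as np

        def epi_depth_cues(epi, disparities):
            """epi: 2D array (n_views, n_pixels), one angular sample per row.
            For each disparity hypothesis the rows are sheared into alignment;
            defocus favours strong spatial gradients after angular integration,
            correspondence favours small variance across the views."""
            n_views, n_pix = epi.shape
            u0 = n_views // 2
            defocus = np.empty((len(disparities), n_pix))
            corresp = np.empty((len(disparities), n_pix))
            for i, d in enumerate(disparities):
                sheared = np.stack([np.roll(epi[u], -int(round(d * (u - u0))))
                                    for u in range(n_views)])
                refocused = sheared.mean(axis=0)             # angular integration
                defocus[i] = np.abs(np.gradient(refocused))  # spatial gradient
                corresp[i] = sheared.var(axis=0)             # angular variance
            # per-pixel picks: maximize defocus response, minimize variance
            return disparities[defocus.argmax(axis=0)], disparities[corresp.argmin(axis=0)]

        # Toy EPI: a step edge moving +1 pixel per view (disparity 1).
        views = 7; x = np.arange(64)
        epi = np.stack([(x > 32 + (u - views // 2)).astype(float) for u in range(views)])
        d_def, d_cor = epi_depth_cues(epi, np.arange(-3, 4))
        print(d_cor[30:36])   # near the edge the estimate is ~1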

  5. A new, improved and fully automatic method for teleseismic depth estimation of moderate earthquakes (4.5 < M < 5.5): application to the Guerrero subduction zone (Mexico)

    NASA Astrophysics Data System (ADS)

    Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien

    2015-06-01

    The depth of an earthquake is difficult to estimate because of the trade-off between the depth and origin-time estimates, and because it can be biased by lateral Earth heterogeneities. To face this challenge, we have developed a new, blind, and fully automatic teleseismic depth analysis whose results do not depend on epistemic uncertainties due to depth-phase picking and identification. The method is a modification of the cepstral analysis of Letort et al. and Bonner et al., which aims to detect surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the study of the spectral holes in the shape of the signal spectrum. The ability of our automatic method to improve depth estimates is shown by relocation of the recent moderate seismicity of the Guerrero subduction area (Mexico): we estimated the depth of 152 events using teleseismic data from the IRIS stations and arrays. One advantage of this method is that it can be applied to single stations (from IRIS) as well as to classical arrays. In the Guerrero area, our new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we have also validated our method by relocating the same events using the new International Seismological Centre (ISC)-locator algorithm, and by comparing our cepstral depths with the available Harvard Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (where the lateral location is assumed to be well constrained, with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths. Using information from the CMT catalogue on the predominant focal mechanism for this area, this bias can be explained as a misidentification of sP phases as pP phases, which underscores the value of this new automatic cepstral analysis, since it is less sensitive to errors in phase identification.
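
    A toy Python illustration of the cepstral idea: an echo delayed by t seconds cuts periodic holes in the spectrum, which map to a cepstral peak at quefrency t, and a rough depth follows from the delay and an assumed crustal P velocity. This omits everything that makes the paper's method robust (no F-statistics, no phase screening, fixed near-vertical take-off geometry):

        import numpy as np

        def echo_delay_cepstrum(signal, dt):
            """Dominant echo delay from the real cepstrum: a surface reflection
            arriving t seconds after P leaves periodic spectral holes, hence a
            cepstral peak at quefrency t."""
            spec = np.abs(np.fft.rfft(signal)) + 1e-12
            ceps = np.fft.irfft(np.log(spec))
            lo, hi = int(2.0 / dt), int(20.0 / dt)   # search delays of 2-20 s
            k = lo + np.argmax(np.abs(ceps[lo:hi]))
            return k * dt

        # Synthetic P + pP pair: 4 s delay, opposite polarity, light noise.
        dt, n = 0.05, 4096
        t = np.arange(n) * dt
        trace = (np.exp(-((t - 10) / 0.3) ** 2)
                 - 0.7 * np.exp(-((t - 14) / 0.3) ** 2)
                 + np.random.default_rng(5).normal(0, 0.002, n))
        delay = echo_delay_cepstrum(trace, dt)
        print(delay, 6.5 * delay / 2)   # ~4 s; ~13 km for an assumed v_p ~ 6.5 km/s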

  6. Estimation of subsurface thermal structure using sea surface height and sea surface temperature

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2012-01-01

    A method of determining a subsurface temperature in a body of water is disclosed. The method includes obtaining surface temperature anomaly data and surface height anomaly data of the body of water for a region of interest, and also obtaining subsurface temperature anomaly data for the region of interest at a plurality of depths. The method further includes regressing the obtained surface temperature anomaly data and surface height anomaly data for the region of interest with the obtained subsurface temperature anomaly data for the plurality of depths to generate regression coefficients, estimating a subsurface temperature at one or more other depths for the region of interest based on the generated regression coefficients and outputting the estimated subsurface temperature at the one or more other depths. Using the estimated subsurface temperature, signal propagation times and trajectories of marine life in the body of water are determined.
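
    A minimal Python sketch of the regression step disclosed above: one set of coefficients per depth links the surface anomalies to the subsurface temperature anomalies. Synthetic series stand in for the satellite and in-situ inputs:

        import numpy as np

        def fit_depth_regression(ssta, ssha, sub_ta):
            """Regress subsurface temperature anomalies on surface temperature
            and height anomalies, one coefficient set per depth.
            ssta, ssha: (n_times,); sub_ta: (n_times, n_depths)."""
            A = np.column_stack([ssta, ssha, np.ones_like(ssta)])
            coef, *_ = np.linalg.lstsq(A, sub_ta, rcond=None)   # shape (3, n_depths)
            return coef

        def estimate_subsurface(ssta, ssha, coef):
            return np.column_stack([ssta, ssha, np.ones_like(ssta)]) @ coef

        # Hypothetical anomalies: 120 months, 5 depth levels.
        rng = np.random.default_rng(3)
        ssta = rng.normal(size=120); ssha = rng.normal(size=120)
        true = np.array([[0.9, 0.6, 0.3, 0.1, 0.0],     # SSTA weight vs depth
                         [0.1, 0.3, 0.5, 0.4, 0.2]])    # SSHA weight vs depth
        sub_ta = ssta[:, None]*true[0] + ssha[:, None]*true[1] + rng.normal(0, .05, (120, 5))
        coef = fit_depth_regression(ssta, ssha, sub_ta)
        print(np.round(coef[:2], 2))   # recovers the depth-dependent coefficients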

  7. Technique for estimating depth of 100-year floods in Tennessee

    USGS Publications Warehouse

    Gamble, Charles R.; Lewis, James G.

    1977-01-01

    Preface: A method is presented for estimating the depth of the loo-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the loo-year flood if the size of the drainage basin is known.

  8. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    NASA Astrophysics Data System (ADS)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles, and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth to the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on either side of the central negative Slingram anomaly. These two maxima also locate the maximum anomaly-gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.

  9. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
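
    The disparity-to-distance step reduces to triangulation between two overlapping receptive fields. A toy Python sketch (two ommatidia on a common baseline; the COMPU-EYE reconstruction itself is not modeled):

        import math

        def object_distance(baseline_mm, angle_left_deg, angle_right_deg):
            """Triangulate distance from the directions under which two
            ommatidia, separated by baseline_mm, see the same feature;
            angles are measured from the common forward direction."""
            tL = math.tan(math.radians(angle_left_deg))
            tR = math.tan(math.radians(angle_right_deg))
            return baseline_mm / (tL - tR)   # larger angular disparity => closer

        # Feature straight ahead of a 10 mm baseline at 100 mm distance:
        print(object_distance(10.0, 2.862, -2.862))   # ~100 mm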

  10. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
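
    The depth estimate in this family of spectral methods comes from the decay rate of the power spectrum, ln P(k) ≈ const − 2hk for sources at mean depth h, so h is minus half the fitted slope; the maximum-entropy spectrum replaces the periodogram so that short windows still give stable slopes. A Python sketch of the slope fit only (the MEPS computation itself is not shown):

        import numpy as np

        def depth_from_spectrum(k, power):
            """Average source depth from the spectral decay ln P(k) ~ const - 2*h*k.
            k in rad/km gives h in km; the usable wavenumber window must be
            chosen with care in practice."""
            slope, _ = np.polyfit(k, np.log(power), 1)
            return -slope / 2.0

        # Synthetic check: spectrum from ~3 km deep sources plus weak noise.
        k = np.linspace(0.05, 1.0, 40)    # rad/km
        p = np.exp(-2 * 3.0 * k) * (1 + 0.05 * np.random.default_rng(4).normal(size=40)) ** 2
        print(depth_from_spectrum(k, p))  # ~3 km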

  11. Spectrally based bathymetric mapping of a dynamic, sand‐bedded channel: Niobrara River, Nebraska, USA

    USGS Publications Warehouse

    Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon

    2018-01-01

    Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
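
    A compact Python sketch of the OBRA search: regress depth on the log band ratio X = ln(R_i/R_j) for every band pair and keep the pair with the highest R². Inputs are assumed to be positive reflectances paired with field-measured depths; no atmospheric or glint correction is modeled:

        import numpy as np

        def obra(reflectance, depths):
            """Optimal Band Ratio Analysis: find the band pair whose log ratio
            best predicts depth with a linear model d = b0 + b1 * ln(R_i/R_j).
            reflectance: (n_points, n_bands) spectra at the depth measurements."""
            n_bands = reflectance.shape[1]
            best = (-np.inf, None, None)           # (R2, (i, j), (b0, b1))
            ss_tot = ((depths - depths.mean()) ** 2).sum()
            for i in range(n_bands):
                for j in range(n_bands):
                    if i == j:
                        continue
                    x = np.log(reflectance[:, i] / reflectance[:, j])
                    b1, b0 = np.polyfit(x, depths, 1)
                    r2 = 1 - ((depths - (b0 + b1 * x)) ** 2).sum() / ss_tot
                    if r2 > best[0]:
                        best = (r2, (i, j), (b0, b1))
            return best   # apply d = b0 + b1*ln(R_i/R_j) to map the whole image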

  12. Processing and interpretation of aeromagnetic data for the Santa Cruz Basin - Patagonia Mountains area, south-central Arizona

    USGS Publications Warehouse

    Phillips, Jeffrey D.

    2002-01-01

    In 1997, the U.S. Geological Survey (USGS) contracted with Sial Geosciences Inc. for a detailed aeromagnetic survey of the Santa Cruz basin and Patagonia Mountains area of south-central Arizona. The contractor's Operational Report is included as an Appendix in this report. This section describes the data processing performed by the USGS on the digital aeromagnetic data received from the contractor. This processing was required in order to remove flight line noise, estimate the depths to the magnetic sources, and estimate the locations of the magnetic contacts. Three methods were used for estimating source depths and contact locations: the horizontal gradient method, the analytic signal method, and the local wavenumber method. The depth estimates resulting from each method are compared, and the contact locations are combined into an interpretative map showing the dip direction for some contacts.

  13. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  14. Method for rapid estimation of scour at highway bridges based on limited site data

    USGS Publications Warehouse

    Holnbeck, S.R.; Parrett, Charles

    1997-01-01

    Limited site data were used to develop a method for rapid estimation of scour at highway bridges. The estimates can be obtained in a matter of hours, rather than the several days required by more detailed methods. Such a method is important because scour assessments are needed to identify scour-critical bridges throughout the United States. Using detailed scour-analysis methods and scour-prediction equations recommended by the Federal Highway Administration, the U.S. Geological Survey, in cooperation with the Montana Department of Transportation, obtained contraction, pier, and abutment scour-depth data for sites from 10 States. The data were used to develop relations between scour depth and hydraulic variables that can be rapidly measured in the field. These relations, in the form of envelope curves, were based on simpler forms of the detailed scour-prediction equations. To apply the rapid-estimation method, a 100-year recurrence-interval peak discharge is determined, and bridge-length data are used in the field with graphs relating unit discharge to velocity and velocity to bridge backwater as a basis for estimating flow depths and other hydraulic variables that can then be applied using the envelope curves. The method was tested in the field, and results showed good agreement among the individuals involved and with results from more detailed methods. Although useful for identifying potentially scour-critical bridges, the method does not replace the more detailed methods used for design purposes. Use of the rapid-estimation method should be limited to individuals having experience in bridge scour, hydraulics, and flood hydrology, and some training in use of the method.

  15. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision and typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective, while stereo matching can obtain more accurate results in rich-texture regions and at object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments; it is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of the previous state-of-the-art methods, our method achieves an average error rate of 2.61% on the Middlebury datasets, performing almost 20% better than other "fused" algorithms in terms of precision. PMID:26308003

  16. Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)

    DTIC Science & Technology

    2014-09-05

    [Extraction-garbled PDF excerpt. Recoverable details: "Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries", S. Hussain Raza et al., arXiv:1510.07317v1 [cs.CV], 25 Oct 2015; the fragment mentions temporal segmentation using the method proposed by Grundmann et al. and depth-map estimation via triangulation.]

  17. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body pose can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating the pose estimates obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  18. Event-Based Stereo Depth Estimation Using Belief Propagation.

    PubMed

    Xie, Zhen; Chen, Shengyong; Orchard, Garrick

    2017-01-01

    Compared to standard frame-based cameras, biologically inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are typically not suitable for event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to impose constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.

  19. Determining the depth of certain gravity sources without a priori specification of their structural index

    NASA Astrophysics Data System (ADS)

    Zhou, Shuai; Huang, Danian

    2015-11-01

    We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation for different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor data, observed or calculated at different height planes, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data from different upward-continuation heights, which can effectively reduce the influence of noise. Theoretical simulations of the gravity source model with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to the gravity data acquired over the Humble Salt Dome in the USA as an example. The results show a good correspondence to previous drilling and seismic interpretation results.
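
    For orientation, the generalized tilt angle that such gradient-ratio methods build on can be computed from the three vertical tensor components mentioned above. The sketch assumes gridded gxz, gyz, and gzz arrays and is not the authors' full depth-estimation procedure:

```python
import numpy as np

def tilt_angle(gxz, gyz, gzz):
    """Generalized tilt angle (radians) from the three vertical
    components of the gravity gradient tensor, supplied as grids of
    equal shape: tilt = arctan(gzz / sqrt(gxz^2 + gyz^2))."""
    return np.arctan2(gzz, np.hypot(gxz, gyz))
```

    In the classic Tilt-depth reading, for a vertical-contact model the source depth equals the horizontal distance between the 0° and ±45° tilt contours; the generalization described in the abstract removes the need to fix the structural index in advance.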

  20. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane-segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane-segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and is able to improve the subjective rendering quality.

  1. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

    This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former relies on a ray-tracing technique to backpropagate the rays and produces an ambiguity surface for source range. The latter utilizes Lloyd’s mirror principle to obtain an ambiguity surface for source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical simulations and experimental verification show that the proposed method is a good candidate for initial coarse estimation of the source position. PMID:28590442
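
    The depth part of the method rests on Lloyd's mirror: the direct and surface-reflected arrivals interfere with a relative delay of roughly 2·z_s·sin(θ)/c, so the spacing of the resulting spectral fringes encodes the source depth z_s. A minimal sketch under those assumptions (the inputs and names are illustrative, not the paper's ambiguity-surface formulation):

```python
import numpy as np

def source_depth_from_fringes(delta_f, grazing_angle_rad, c=1500.0):
    """Estimate source depth (m) from the spectral fringe spacing of
    the D/SR interference pattern (Lloyd's mirror).

    delta_f            spacing between interference nulls (Hz)
    grazing_angle_rad  D-path departure angle at the source, assumed
                       known from the ray-tracing step
    c                  nominal sound speed (m/s)
    """
    tau = 1.0 / delta_f  # relative delay of the SR path (s)
    return c * tau / (2.0 * np.sin(grazing_angle_rad))
```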

  2. Crack orientation and depth estimation in a low-pressure turbine disc using a phased array ultrasonic transducer and an artificial neural network.

    PubMed

    Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang

    2013-09-13

    Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in power plants, and knowing the orientation and depth of the initial cracks is essential for evaluating the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on a phased array ultrasonic transducer and an artificial neural network (ANN) is proposed to estimate both the depth and orientation of initial cracks in turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the presented method was efficient in crack estimation tasks.
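
    A minimal radial-basis-function regression of the kind the abstract describes can be sketched with a Gaussian design matrix and a least-squares solve; the feature extraction (wavelet packet, fractal, peak amplitude) and the actual network training details are outside this sketch:

```python
import numpy as np

def rbf_fit(X, y, centers, sigma):
    """Fit RBF weights by least squares.
    X: (n, d) feature vectors (e.g. wavelet-packet/fractal/peak features),
    y: (n, 2) targets (crack depth, orientation), centers: (m, d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * sigma**2))           # (n, m) design matrix
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # (m, 2) weights
    return W

def rbf_predict(X, centers, sigma, W):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ W
```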

  4. BERG2 Micro-computer Estimation of Freeze and Thaw Depths and Thaw Consolidation (PDF file)

    DOT National Transportation Integrated Search

    1989-06-01

    The BERG2 microcomputer program uses a methodology similar to the Modified Berggren method (Aldrich and Paynter, 1953) to estimate the freeze and thaw depths in layered soil systems. The program also provides an estimate of the thaw consolidation in ic...

  5. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  6. Estimation of bedrock depth using the horizontal‐to‐vertical (H/V) ambient‐noise seismic method

    USGS Publications Warehouse

    Lane, John W.; White, Eric A.; Steele, Gregory V.; Cannia, James C.

    2008-01-01

    Estimating sediment thickness and the geometry of the bedrock surface is a key component of many hydrogeologic studies. The horizontal‐to‐vertical (H/V) ambient‐noise seismic method is a novel, non‐invasive technique that can be used to rapidly estimate the depth to bedrock. The H/V method uses a single, broad‐band three‐component seismometer to record ambient seismic noise. The ratio of the averaged horizontal‐to‐vertical frequency spectrum is used to determine the fundamental site resonance frequency, which can be interpreted using regression equations to estimate sediment thickness and depth to bedrock. The U.S. Geological Survey used the H/V seismic method during fall 2007 at 11 sites in Cape Cod, Massachusetts, and 13 sites in eastern Nebraska. In Cape Cod, H/V measurements were acquired along a 60‐kilometer (km) transect between Chatham and Provincetown, where glacial sediments overlie metamorphic rock. In Nebraska, H/V measurements were acquired along approximately 11‐ and 14‐km transects near Firth and Oakland, respectively, where glacial sediments overlie weathered sedimentary rock. The ambient‐noise seismic data from Cape Cod produced clear, easily identified resonance frequency peaks. The interpreted depth and geometry of the bedrock surface correlate well with boring data and previously published seismic refraction surveys. Conversely, the ambient‐noise seismic data from eastern Nebraska produced subtle resonance frequency peaks, and correlation of the interpreted bedrock surface with bedrock depths from borings is poor, which may indicate a low acoustic impedance contrast between the weathered sedimentary rock and overlying sediments and/or the effect of wind noise on the seismic records. Our results indicate the H/V ambient‐noise seismic method can be used effectively to estimate the depth to rock where there is a significant acoustic impedance contrast between the sediments and underlying rock. However, effective use of the method is challenging in the presence of gradational contacts such as gradational weathering or cementation. Further work is needed to optimize interpretation of resonance frequencies in the presence of extreme wind noise. In addition, local estimates of bedrock depth likely could be improved through development of regional or study‐area‐specific regression equations relating resonance frequency to bedrock depth.
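
    Both interpretation routes mentioned above reduce to one-line formulas: the quarter-wavelength rule h = Vs/(4·f0) given an average shear-wave velocity, or a fitted power law h = a·f0^b. A small helper under those assumptions (the regression coefficients are site-specific and user-supplied, not values from this study):

```python
def depth_from_resonance(f0, vs=None, a=None, b=None):
    """Convert an H/V fundamental resonance frequency f0 (Hz) into a
    sediment-thickness estimate (m).

    Quarter-wavelength rule: h = vs / (4 * f0), given an average
    shear-wave velocity vs (m/s).
    Power-law regression:    h = a * f0 ** b, with a and b fitted at
    sites of known bedrock depth.
    """
    if vs is not None:
        return vs / (4.0 * f0)
    return a * f0 ** b
```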

  7. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-07-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
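
    As a concrete reference for the recommended diagnostic, a bulk-Richardson-number PBL height can be sketched as the first model level where Ri_b exceeds a critical value. The profile inputs and the critical value 0.25 below are generic textbook choices, not the exact GEOS-5 configuration:

```python
import numpy as np

def pbl_height_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """First level where the bulk Richardson number exceeds ri_crit.

    z        heights above ground (m), ascending, z[0] = lowest level
    theta_v  virtual potential temperature profile (K)
    u, v     wind component profiles (m/s)
    """
    g = 9.81
    with np.errstate(divide="ignore", invalid="ignore"):
        ri = (g / theta_v[0]) * (theta_v - theta_v[0]) * (z - z[0]) \
             / (u**2 + v**2)
    above = np.nonzero(ri > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]
```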

  8. Estimating soil water content from ground penetrating radar coarse root reflections

    NASA Astrophysics Data System (ADS)

    Liu, X.; Cui, X.; Chen, J.; Li, W.; Cao, X.

    2016-12-01

    Soil water content (SWC) is an indispensable variable for understanding the organization of natural ecosystems and biodiversity. Especially in semiarid and arid regions, soil moisture is the plants' primary source of water and largely determines their strategies for growth and survival, such as root depth, distribution and competition between them. Ground penetrating radar (GPR), a noninvasive geophysical technique, has been regarded over the past decades as an accurate tool for measuring soil water content at intermediate scale. For soil water content estimation with surface GPR, the fixed-antenna-offset reflection method has been considered to have the potential to obtain the average soil water content between the land surface and reflectors, with high resolution and short measurement times. In this study, a 900 MHz surface GPR antenna was used to estimate SWC with the fixed-offset reflection method; plant coarse roots (with diameters greater than 5 mm) were regarded as reflectors; an advanced GPR data interpretation method, HADA (hyperbola automatic detection algorithm), was introduced to automatically obtain the average velocity by recognizing coarse-root hyperbolic reflection signals on GPR radargrams when estimating SWC. In addition, a formula was deduced to determine the interval-average SWC between two roots at different depths. We examined the performance of the proposed method on a dataset simulated under different scenarios. Results showed that HADA could provide a reasonable average velocity to estimate SWC without knowledge of root depth, and that the interval-average SWC could also be determined. When the proposed method was applied to a real-field measurement dataset, a very small vertical gradient in soil water content, about 0.006 with depth, was captured as well. Therefore, the proposed method can be used to estimate average soil water content from GPR coarse root reflections and to obtain the interval-average SWC between two roots at different depths. It is very promising for measuring root-zone soil moisture and mapping the soil moisture distribution around a shrub or even at field plot scale.
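
    The velocity-to-moisture step implied above is typically two lines: convert the fitted hyperbola velocity to an apparent permittivity, then map permittivity to volumetric water content with a petrophysical relation. The sketch uses Topp's equation, which is an assumption here; the study's own formula may differ:

```python
def swc_from_gpr_velocity(v):
    """Average volumetric soil water content from an average GPR wave
    velocity v (m/ns), e.g. the velocity returned by fitting the root
    reflection hyperbolas.

    Permittivity from velocity: eps = (c / v)**2, with c = 0.3 m/ns.
    Water content via Topp's relation (assumed, not study-specific).
    """
    c = 0.3                  # speed of light in m/ns
    eps = (c / v) ** 2       # apparent relative permittivity
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3
```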

  9. Estimated and measured bridge scour at selected sites in North Dakota, 1990-97

    USGS Publications Warehouse

    Williams-Sether, Tara

    1999-01-01

    A Level 2 bridge scour method was used to estimate scour depths at 36 selected bridge sites located on the primary road system throughout North Dakota. Of the 36 bridge sites analyzed, the North Dakota Department of Transportation rated 15 as scour critical. Flood and scour data were collected at 19 of the 36 selected bridge sites during 1990-97. Data collected were sufficient to estimate pier scour but not contraction or abutment scour. Estimated pier scour depths ranged from -10.6 to -1.2 feet, and measured bed-elevation changes at piers ranged from -2.31 to +2.37 feet. Comparisons between the estimated pier scour depths and the measured bed-elevation changes indicate that the pier scour equations overestimate scour at bridges in North Dakota. A Level 1.5 bridge scour method also was used to estimate scour depths at 495 bridge sites located on the secondary road system throughout North Dakota. The North Dakota Department of Transportation determined that 26 of the 495 bridge sites analyzed were potentially scour critical.

  10. Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno

    2014-03-01

    Generating depth maps along with video streams is valuable for cinema and television production. Thanks to improvements in depth acquisition systems, the challenge of fusing depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining, on the fly, depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.

  13. Estimating nocturnal opaque ice cloud optical depth from MODIS multispectral infrared radiances using a neural network method

    NASA Astrophysics Data System (ADS)

    Minnis, Patrick; Hong, Gang; Sun-Mack, Szedung; Smith, William L.; Chen, Yan; Miller, Steven D.

    2016-05-01

    Retrieval of ice cloud properties using IR measurements has a distinct advantage over visible and near-IR techniques by providing consistent monitoring regardless of solar illumination conditions. Historically, the IR bands at 3.7, 6.7, 11.0, and 12.0 µm have been used to infer ice cloud parameters by various methods, but the reliable retrieval of ice cloud optical depth τ is limited to nonopaque cirrus with τ < 8. The Ice Cloud Optical Depth from Infrared using a Neural network (ICODIN) method is developed in this paper by training Moderate Resolution Imaging Spectroradiometer (MODIS) radiances at 3.7, 6.7, 11.0, and 12.0 µm against CloudSat-estimated τ during the nighttime using 2 months of matched global data from 2007. An independent data set comprising observations from the same 2 months of 2008 was used to validate the ICODIN. One 4-channel and three 3-channel versions of the ICODIN were tested. The training and validation results show that IR channels can be used to estimate ice cloud τ up to 150, with correlations above 78% and 69% for all clouds and for opaque ice clouds only, respectively. However, τ for the deepest clouds is still underestimated in many instances. The corresponding RMS differences relative to CloudSat are ~100% and ~72%. If the opaque clouds are properly identified with the IR methods, the RMS differences in the retrieved optical depths are ~62%. The 3.7 µm channel appears to be the most sensitive to optical depth changes but is constrained by poor precision at low temperatures. A method for estimating total optical depth is explored with a view to future estimation of cloud water path. Factors affecting the uncertainties and potential improvements are discussed. With improved techniques for discriminating between opaque and semitransparent ice clouds, the method can ultimately improve cloud property monitoring over the entire diurnal cycle.
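
    In spirit, the ICODIN training reduces to a supervised regression from four IR brightness temperatures to a CloudSat-derived optical depth. The sketch below shows that pattern with scikit-learn; the file names, network size, and log-target transform are illustrative assumptions, not the published architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical arrays: rows are matched MODIS/CloudSat night pixels.
# X holds the 3.7, 6.7, 11.0 and 12.0 um channel values; y holds the
# CloudSat-estimated optical depth tau.
X = np.load("modis_ir_channels.npy")   # shape (n, 4), placeholder file
y = np.load("cloudsat_tau.npy")        # shape (n,),   placeholder file

# Regress on log(tau) so the large dynamic range (tau up to ~150) is
# weighted more evenly; the network size here is merely illustrative.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
model.fit(X, np.log(y))

tau_pred = np.exp(model.predict(X))    # back-transform to optical depth
```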

  14. Evaluating analytical approaches for estimating pelagic fish biomass using simulated fish communities

    USGS Publications Warehouse

    Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.

    2013-01-01

    Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.

  15. The importance of atmospheric correction for airborne hyperspectral remote sensing of shallow waters: application to depth estimation

    NASA Astrophysics Data System (ADS)

    Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe

    2017-10-01

    Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). The type of atmospheric correction most appropriate for depth estimation depends on the accuracy required. Accuracy in bathymetric information is highly dependent on the atmospheric correction made to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for estimating depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the bottom would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information under conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
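
    Of the algorithms listed, the Polcyn-Lyzenga family is the easiest to sketch: after atmospheric correction (and, in the variant assumed here, deep-water subtraction), depth is regressed linearly against the log of the band radiances. A minimal version under those assumptions:

```python
import numpy as np

def fit_lyzenga(bands, depths):
    """Fit a Lyzenga-style log-linear depth model
    z = h0 + sum_i h_i * ln(L_i), by least squares.

    bands:  (n, k) atmospherically corrected reflectances with the
            deep-water signal already subtracted (an assumption here)
    depths: (n,) surveyed calibration depths at the same pixels
    """
    X = np.column_stack([np.ones(len(depths)), np.log(bands)])
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef                          # h0, h1, ..., hk

def predict_depth(bands, coef):
    X = np.column_stack([np.ones(len(bands)), np.log(bands)])
    return X @ coef
```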

  16. Improving the Curie depth estimation through optimizing the spectral block dimensions of the aeromagnetic data in the Sabalan geothermal field

    NASA Astrophysics Data System (ADS)

    Akbar, Somaieh; Fathianpour, Nader

    2016-12-01

    The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital for improving the resolution and accuracy of Curie point depth estimates. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The results showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas, in northwestern Iran, was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km2 and at least 50% overlap between adjacent blocks. The Curie point depth was estimated to be in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan and encompass more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
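
    The per-block depth estimates referred to above usually follow the centroid variant of spectral analysis: the top depth z_t comes from the slope of ln√P versus wavenumber at higher wavenumbers, the centroid depth z_0 from ln(√P/k) at low wavenumbers, and the Curie (bottom) depth from z_b = 2·z_0 − z_t. A sketch under those standard assumptions, with the fit bands left as user choices (their selection is exactly the block-size-dependent step discussed in the abstract):

```python
import numpy as np

def curie_depth(k, power, top_band, centroid_band):
    """Centroid method on a radially averaged power spectrum.

    k: wavenumbers (rad/km); power: radially averaged spectral power.
    top_band / centroid_band: boolean masks selecting the high- and
    low-wavenumber fit ranges.
    Returns (z_top, z_centroid, z_bottom); z_bottom is the Curie depth.
    """
    # ln sqrt(P) ~ const - k * z_top      (high-wavenumber segment)
    zt = -np.polyfit(k[top_band], 0.5 * np.log(power[top_band]), 1)[0]
    # ln (sqrt(P)/k) ~ const - k * z_0    (low-wavenumber segment)
    z0 = -np.polyfit(
        k[centroid_band],
        0.5 * np.log(power[centroid_band]) - np.log(k[centroid_band]),
        1,
    )[0]
    return zt, z0, 2.0 * z0 - zt
```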

  17. Deep learning-based depth estimation from a synthetic endoscopy image training set

    NASA Astrophysics Data System (ADS)

    Mahmood, Faisal; Durr, Nicholas J.

    2018-03-01

    Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.

  18. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative estimation of the depth of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield only a finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
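
    The spectral zooming step can be reproduced with SciPy's chirp z-transform, which evaluates the spectrum on a dense grid confined to a chosen band instead of the uniform FFT grid. This is a generic zoom-FFT sketch, not the authors' full processing chain:

```python
import numpy as np
from scipy.signal import czt   # available in SciPy >= 1.8

def zoom_spectrum(x, fs, f1, f2, m):
    """Evaluate the spectrum of x on m points spanning [f1, f2] Hz via
    the chirp z-transform, giving a much finer grid over the band of
    interest than an FFT of the same record length."""
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # step around the band
    a = np.exp(2j * np.pi * f1 / fs)                # start of the band
    freqs = f1 + np.arange(m) * (f2 - f1) / m
    return freqs, czt(x, m=m, w=w, a=a)
```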

  19. Inversely Estimating the Vertical Profile of the Soil CO2 Production Rate in a Deciduous Broadleaf Forest Using a Particle Filtering Method

    PubMed Central

    Sakurai, Gen; Yonemura, Seiichiro; Kishimoto-Mo, Ayaka W.; Murayama, Shohei; Ohtsuka, Toshiyuki; Yokozawa, Masayuki

    2015-01-01

    Carbon dioxide (CO2) efflux from the soil surface, which is a major source of CO2 from terrestrial ecosystems, represents the total CO2 production at all soil depths. Although many studies have estimated the vertical profile of the CO2 production rate, one of the difficulties in estimating the vertical profile is measuring diffusion coefficients of CO2 at all soil depths in a nondestructive manner. In this study, we estimated the temporal variation in the vertical profile of the CO2 production rate using a data assimilation method, the particle filtering method, in which the diffusion coefficients of CO2 were simultaneously estimated. The CO2 concentrations at several soil depths and CO2 efflux from the soil surface (only during the snow-free period) were measured at two points in a broadleaf forest in Japan, and the data were assimilated into a simple model including a diffusion equation. We found that there were large variations in the pattern of the vertical profile of the CO2 production rate between experiment sites: the peak CO2 production rate was at soil depths around 10 cm during the snow-free period at one site, but the peak was at the soil surface at the other site. Using this method to estimate the CO2 production rate during snow-cover periods allowed us to estimate CO2 efflux during that period as well. We estimated that the CO2 efflux during the snow-cover period (about half the year) accounted for around 13% of the annual CO2 efflux at this site. Although the method proposed in this study does not ensure the validity of the estimated diffusion coefficients and CO2 production rates, the method enables us to more closely approach the “actual” values by decreasing the variance of the posterior distribution of the values. PMID:25793387
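
    The assimilation machinery described above is a standard bootstrap particle filter: propagate particles through a process model, weight them by the likelihood of the observed CO2 concentrations and efflux, and resample when the weights degenerate. A generic single-step sketch; the state layout (layered production rates plus diffusion coefficients) and the two model callbacks are placeholders, not the study's implementation:

```python
import numpy as np

def particle_filter_step(particles, weights, propagate, likelihood,
                         obs, rng):
    """One bootstrap-filter update.

    particles   (n, d) array; each row stacks the unknowns (here, the
                layered CO2 production rates and diffusion coefficients)
    propagate(particles, rng) -> perturbed particles (process model)
    likelihood(particles, obs) -> p(obs | particle) for each particle
    """
    particles = propagate(particles, rng)
    weights = weights * likelihood(particles, obs)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```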

  20. Size matters: Perceived depth magnitude varies with stimulus height.

    PubMed

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2016-06-01

    Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox, and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Modeling intracavitary heating of the uterus by means of a balloon catheter

    NASA Astrophysics Data System (ADS)

    Olsrud, Johan; Friberg, Britt; Rioseco, Juan; Ahlgren, Mats; Persson, Bertil R. R.

    1999-01-01

    Balloon thermal endometrial destruction (TED) is a recently developed method to treat heavy menstrual bleeding (menorrhagia). Numerical simulations of this treatment were performed using the finite element method. The mechanical deformation and the resulting stress distribution when a balloon catheter is expanded within the uterine cavity were estimated from structural analysis. Thermal analysis was then performed to estimate the depth of tissue coagulation (temperature > 55 °C) in the uterus during TED. The estimated depth of coagulation, after 30 min of heating with an intracavity temperature of 75 °C, was approximately 9 mm when blood flow was disregarded. With uniform normal blood flow, the depth of coagulation decreased to 3-4 mm. Simulations with varying intracavity temperatures and blood flow rates showed that both parameters should be of major importance to the depth of coagulation. The influence of blood flow was smaller when the pressure exerted by the balloon was also considered (5-6 mm coagulation depth with normal blood flow).
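
    The thermal side of such a simulation can be caricatured in one dimension with an explicit finite-difference Pennes-style bioheat model: a fixed balloon temperature at the cavity wall, conduction into the tissue, and a perfusion heat sink for the blood-flow case. All parameter values below are typical literature numbers, not those of the study:

```python
import numpy as np

# Illustrative tissue parameters: k (W/m/K), rho (kg/m^3), c (J/kg/K),
# and a simplified perfusion heat-sink rate w_blood (1/s).
k, rho, c = 0.5, 1050.0, 3600.0
w_blood, T_art = 0.002, 37.0      # set w_blood = 0 to disregard blood flow

nx, dx = 150, 2e-4                # 30 mm of tissue on a 0.2 mm grid
dt, t_end = 0.05, 30 * 60.0       # explicit scheme, 30 min of heating
alpha = k / (rho * c)             # dt * alpha / dx**2 ~ 0.17 < 0.5: stable

T = np.full(nx, 37.0)
for _ in range(int(t_end / dt)):
    T[0] = 75.0                   # balloon/tissue interface temperature
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap - w_blood * (T[1:-1] - T_art))
    T[-1] = 37.0                  # far boundary held at body temperature

depth_mm = np.argmax(T < 55.0) * dx * 1e3  # first node cooler than 55 C
print(f"coagulation depth ~ {depth_mm:.1f} mm")
```

    Switching the perfusion term off (w_blood = 0) lets the 55 °C front reach several millimetres deeper, qualitatively mirroring the blood-flow sensitivity reported above.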

  2. Using computational modeling of river flow with remotely sensed data to infer channel bathymetry

    USGS Publications Warehouse

    Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.

    2012-01-01

    As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates, and these errors cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.

  3. Estimation of Sea Ice Thickness Distributions through the Combination of Snow Depth and Satellite Laser Altimetry Data

    NASA Technical Reports Server (NTRS)

    Kurtz, Nathan T.; Markus, Thorsten; Cavalieri, Donald J.; Sparling, Lynn C.; Krabill, William B.; Gasiewski, Albin J.; Sonntag, John G.

    2009-01-01

    Combinations of sea ice freeboard and snow depth measurements from satellite data have the potential to provide a means to derive global sea ice thickness values. However, large differences in spatial coverage and resolution between the measurements lead to uncertainties when combining the data. High resolution airborne laser altimeter retrievals of snow-ice freeboard and passive microwave retrievals of snow depth taken in March 2006 provide insight into the spatial variability of these quantities as well as optimal methods for combining high resolution satellite altimeter measurements with low resolution snow depth data. The aircraft measurements show a relationship between freeboard and snow depth for thin ice allowing the development of a method for estimating sea ice thickness from satellite laser altimetry data at their full spatial resolution. This method is used to estimate snow and ice thicknesses for the Arctic basin through the combination of freeboard data from ICESat, snow depth data over first-year ice from AMSR-E, and snow depth over multiyear ice from climatological data. Due to the non-linear dependence of heat flux on ice thickness, the impact on heat flux calculations when maintaining the full resolution of the ICESat data for ice thickness estimates is explored for typical winter conditions. Calculations of the basin-wide mean heat flux and ice growth rate using snow and ice thickness values at the 70 m spatial resolution of ICESat are found to be approximately one-third higher than those calculated from 25 km mean ice thickness values.
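
    The freeboard-to-thickness conversion at the heart of this combination follows from hydrostatic equilibrium: with a laser altimeter, the measured (total) freeboard includes the snow layer, giving the relation sketched below. The densities are typical constants and an assumption here, not necessarily the values adopted in the study:

```python
def ice_thickness(freeboard, snow_depth,
                  rho_w=1024.0, rho_i=915.0, rho_s=320.0):
    """Sea ice thickness (m) from the total (snow) freeboard measured
    by a laser altimeter and an independent snow depth, assuming
    hydrostatic equilibrium:

        h_i = (rho_w * F - (rho_w - rho_s) * h_s) / (rho_w - rho_i)

    Densities are in kg/m^3 for seawater, ice, and snow respectively.
    """
    return (rho_w * freeboard - (rho_w - rho_s) * snow_depth) \
           / (rho_w - rho_i)
```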

  4. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.

  5. Statistical comparison of methods for estimating sediment thickness from Horizontal-to-Vertical Spectral Ratio (HVSR) seismic methods: An example from Tylerville, Connecticut, USA

    USGS Publications Warehouse

    Johnson, Carole D.; Lane, John W.

    2016-01-01

    Determining sediment thickness and delineating bedrock topography are important for assessing groundwater availability and characterizing contamination sites. In recent years, the horizontal-to-vertical spectral ratio (HVSR) seismic method has emerged as a non-invasive, cost-effective approach for estimating the thickness of unconsolidated sediments above bedrock. Using a three-component seismometer, this method uses the ratio of the average horizontal- and vertical-component amplitude spectrums to produce a spectral ratio curve with a peak at the fundamental resonance frequency. The HVSR method produces clear and repeatable resonance frequency peaks when there is a sharp contrast (>2:1) in acoustic impedance at the sediment/bedrock boundary. Given the resonant frequency, sediment thickness can be determined either by (1) using an estimate of average local sediment shear-wave velocity or by (2) application of a power-law regression equation developed from resonance frequency observations at sites with a range of known depths to bedrock. Two frequently asked questions about the HVSR method are (1) how accurate are the sediment thickness estimates? and (2) how much do sediment thickness/bedrock depth estimates change when using different published regression equations? This paper compares and contrasts different approaches for generating HVSR depth estimates, through analysis of HVSR data acquired in the vicinity of Tylerville, Connecticut, USA.

  6. Passive optical remote sensing of Congo River bathymetry using Landsat

    NASA Astrophysics Data System (ADS)

    Ache Rocha Lopes, V.; Trigg, M. A.; O'Loughlin, F.; Laraque, A.

    2014-12-01

    While there have been notable advances in deriving river characteristics such as width from satellite remote sensing datasets, deriving river bathymetry remains a significant challenge. Bathymetry is fundamental to hydrodynamic modelling of river systems, and being able to estimate this parameter remotely would be of great benefit, especially when attempting to model hard-to-access areas where the collection of field data is difficult. One such region is the Congo Basin, where, due to past political instability and its large scale, there are few studies that characterise river bathymetry. In this study we test whether it is possible to use passive optical remote sensing to estimate the depth of the Congo River using Landsat 8 imagery in the region around Malebo Pool, located just upstream of the Kinshasa gauging station. Methods of estimating bathymetry using remotely sensed datasets have been used extensively for coastal regions and have more recently been demonstrated as feasible for optically shallow rivers. Previous river bathymetry studies have focused on shallow rivers and have generally used aerial imagery with a finer spatial resolution than Landsat. While the Congo River has relatively low suspended sediment concentration values, the application of passive bathymetry estimation to a river of this scale has not been attempted before. Three different analysis methods are tested in this study: 1) a single band algorithm; 2) a log ratio method; and 3) a linear transform method. All three methods require depth data for calibration, and in this study area bathymetry measurements are available for three cross-sections, resulting in approximately 300 in-situ measurements of depth, which are used in calibration and validation. The performance of each method is assessed, allowing the feasibility of passive depth measurement in the Congo River to be determined. Considering the scarcity of in-situ bathymetry measurements on the Congo River, even an approximate estimate of depths from these methods will be of considerable value in its hydraulic characterisation.

  7. Wrinkle Ridge Detachment Depth and Undetected Shortening at Solis Planum, Mars

    NASA Astrophysics Data System (ADS)

    Colton, S. L.; Smart, K. J.; Ferrill, D. A.

    2006-03-01

    Martian wrinkle ridges have estimated detachment depths of 0.25 to 60 km. Our alternative method for determining detachment depth reveals differences and has implications for the predominant scale of deformation at Solis Planum.

  8. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. The waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement waveforms from the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
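
    The calculation described above can be condensed to a least-squares scaling: because the magnetic waveform is proportional to displacement, its second time derivative should match the measured acceleration up to the weight factor, and that same factor then converts the magnetic waveform into a depth waveform. A sketch under those assumptions:

```python
import numpy as np

def compression_depth(magnetic, accel, dt):
    """Scale a magnetic-coil waveform into a displacement waveform.

    The magnetic signal m(t) is proportional to displacement, so its
    second derivative is proportional to the measured acceleration
    a(t). A least-squares fit of w * m''(t) to a(t) yields the weight
    factor w, and w * m(t) is the estimated compression depth.
    """
    m_dd = np.gradient(np.gradient(magnetic, dt), dt)
    w = np.dot(accel, m_dd) / np.dot(m_dd, m_dd)
    return w * magnetic
```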

  9. Rules of Thumb for Depth of Investigation, Pseudo-Position and Resolution of the Electrical Resistivity Method from Analysis of the Moments of the Sensitivity Function for a Homogeneous Half-Space

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-12-01

    The electrical resistivity method is now highly developed, with 2D and even 3D surveys routinely performed and fast inversion software available. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available; they would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function of the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments, which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement, with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function, which must be calculated numerically for a general four-electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.

  10. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these instruments are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method that uses only a single underwater image, with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is used as an aided input to help our model make better decisions.

  11. Materials characterization efforts for ablative materials

    NASA Technical Reports Server (NTRS)

    Tytula, Thomas P.; Schad, Kristin C.; Swann, Myles H.

    1992-01-01

    Experimental efforts to develop a new procedure to measure char depth in carbon phenolic nozzle material are described. Using a Shore Type D durometer, hardness profiles were mapped across post-fired sample blocks and specimens from a fired rocket nozzle. Linear regression was used to estimate the char depth. Results are compared to those obtained from computed tomography in a comparative experiment. There was no significant difference in the depth estimates obtained by the two methods.

  12. Long-term erythemal UV doses at Sodankylä estimated using total ozone, sunshine duration and snow depth

    NASA Astrophysics Data System (ADS)

    Lindfors, A. V.; Arola, A.; Kaurola, J.; Taalas, P.; Svenøe, T.

    2003-04-01

    A method for estimating daily erythemal UV doses using total ozone, sunshine duration and snow depth has been developed. The method consists of three steps: (1) daily clear-sky UV doses were simulated using the UVSPEC radiative transfer program, with daily values of total ozone as input data; (2) an empirical relationship was sought between the simulated clear-sky UV doses, the measured UV doses and the duration of bright sunshine; (3) daily erythemal UV doses were estimated using this relationship. The method accounts for the varying surface albedo by dividing the period of interest into winter and summer days, depending on the snow depth. Using this method, the daily erythemal UV doses at Sodankylä were estimated for the period 1950-99. This was done using Tromsø's total ozone together with Sodankylä's own sunshine duration and snow depth as input data. Although the method is fairly simple, the results are in good agreement, even on the daily scale, with the UV radiation measured with the Brewer spectrophotometer at Sodankylä. Statistically significant increasing trends in erythemal UV doses of a few percent per decade over the period 1950-99 were found for March and April, suggesting a connection to stratospheric ozone depletion. For July, on the other hand, a significant decreasing trend of about 3% per decade, supported by the changes in both total ozone and sunshine duration, was found. The resulting data set of erythemal UV doses is the longest time series of estimated UV known to the authors.
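
    Step (2) amounts to an Angstrom-style scaling of the modeled clear-sky dose by the relative sunshine duration. The coefficients in the sketch below are illustrative placeholders; in the study they were fitted empirically against Brewer measurements, with the snow-depth criterion switching between winter and summer albedo regimes:

```python
def erythemal_dose(clear_sky_dose, sunshine_hours, day_length_hours,
                   a=0.25, b=0.75):
    """Scale a modeled clear-sky erythemal dose by the relative
    sunshine duration. The coefficients a and b are hypothetical
    placeholders, to be fitted against measured doses (separately for
    snow-covered and snow-free days)."""
    s = sunshine_hours / day_length_hours  # relative sunshine duration
    return clear_sky_dose * (a + b * s)
```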

  13. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

    Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood for areas with only little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in the Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach most of the time underestimated them. Overall, Hosking's approach produced more accurate results than the cokriging approach.
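
    Hosking's approach is built on regional L-moment statistics. As a reference, the first four sample L-moments can be computed from probability-weighted moments as below; this is only the basic statistic, not the full index-flood procedure:

```python
import numpy as np

def l_moments(x):
    """First four sample L-moments via unbiased probability-weighted
    moments (requires at least 4 observations)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                                # location (mean)
    l2 = 2 * b1 - b0                       # scale
    l3 = 6 * b2 - 6 * b1 + b0              # -> L-skewness = l3 / l2
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0  # -> L-kurtosis = l4 / l2
    return l1, l2, l3, l4
```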

  14. Optical Estimation of Depth and Current in an Ebb Tidal Delta Environment

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Stanley, J.

    2012-12-01

    A key limitation to our ability to make nearshore environmental predictions is the difficulty of obtaining up-to-date bathymetry measurements at a reasonable cost and frequency. Due to the high cost and complex logistics of in-situ methods, research into remote sensing approaches has been steady and has finally yielded fairly robust methods, like the cBathy algorithm for optical Argus data, that show good performance on simple barred beach profiles and near immunity to noise and signal problems. In May 2012, data were collected in a more complex ebb tidal delta environment during the RIVET field experiment at New River Inlet, NC. The presence of strong reversing tidal currents led to significant errors in cBathy depths that were phase-locked to the tide. In this paper we will test methods for the robust estimation of both depths and vector currents in a tidal delta domain. In contrast to previous Fourier methods, wavenumber estimation in cBathy can be done on small enough scales to resolve interesting nearshore features.
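
    Depth retrieval of this kind rests on inverting the linear dispersion relation for surface gravity waves, sigma^2 = g k tanh(k h); an unmodelled current Doppler-shifts the observed frequency, which is one way strong tidal flows bias the retrieved depths. A minimal sketch (not the cBathy implementation), assuming a single observed frequency-wavenumber pair and an optional current component along the wave direction:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    G = 9.81  # gravitational acceleration, m/s^2

    def depth_from_dispersion(freq_hz, k, current=0.0):
        """Invert sigma^2 = g*k*tanh(k*h) for depth h (m). A current U
        along the wave direction shifts the intrinsic frequency:
        sigma = 2*pi*f_obs - k*U."""
        sigma = 2.0 * np.pi * freq_hz - k * current
        f = lambda h: G * k * np.tanh(k * h) - sigma ** 2
        return brentq(f, 1e-3, 200.0)   # bracket the root: 1 mm to 200 m

    # Example: a 10 s wave with an 80 m wavelength.
    k = 2.0 * np.pi / 80.0
    print(f"depth = {depth_from_dispersion(0.1, k):.1f} m")            # no current
    print(f"depth = {depth_from_dispersion(0.1, k, current=1.0):.1f} m")
    ```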

  15. The determination of total burn surface area: How much difference?

    PubMed

    Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P

    2013-09-01

    Burn depth and burn size are crucial determinants for assessing patients suffering from burns. A correct evaluation of these factors is therefore essential for selecting the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable variation among clinicians. This work investigated the accuracy of experts using conventional surface estimation methods (e.g., the "Rule of Palm", "Rule of Nines" or "Lund-Browder Chart"). The estimation results were compared to a computer-based evaluation method. Survey data were collected during one national and one international burn conference. The poll revealed deviations in burn depth/size estimates of up to 62% relative to the mean value of all participants. In comparison to the computer-based method, overestimation of up to 161% was found. We suggest introducing improved methods for burn depth/size assessment into clinical routine in order to efficiently allocate and distribute the available resources for burn care. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
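
    For reference, the conventional Rule of Nines is just a weighted sum over body regions. A minimal sketch with the standard adult percentages; the burned fractions in the example are made up for illustration:

    ```python
    # Standard adult Rule of Nines region percentages.
    RULE_OF_NINES = {
        "head_neck": 9.0, "left_arm": 9.0, "right_arm": 9.0,
        "anterior_trunk": 18.0, "posterior_trunk": 18.0,
        "left_leg": 18.0, "right_leg": 18.0, "perineum": 1.0,
    }

    def tbsa_percent(burn_fractions):
        """Total body surface area burned (%), given the fraction (0-1)
        of each region judged to be burned."""
        return sum(RULE_OF_NINES[region] * frac
                   for region, frac in burn_fractions.items())

    # A fully burned left arm plus half the anterior trunk -> 18% TBSA.
    print(tbsa_percent({"left_arm": 1.0, "anterior_trunk": 0.5}))
    ```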

  16. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface pressure of any fluid that exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone were determined using the Eaton and Equivalent Depth methods, based on well data and normal compaction trend analysis. The Equivalent Depth method uses the effective vertical stress principle, whereas the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was applicable only to the Penobscot L-30 well. An abnormal pressure was detected at a depth of 11,804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations from the Eaton method (7,241.3 psi) and the Equivalent Depth method (6,619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be a possible overpressure zone generated by a hydrocarbon generation mechanism.
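
    Both detection methods have compact textbook forms. A minimal sketch with illustrative numbers (not the L-30 well values), using Eaton's velocity-ratio relation and the effective-stress equivalence that underlies the Equivalent Depth method:

    ```python
    def eaton_pore_pressure(sv, p_hydro, v_obs, v_norm, exponent=3.0):
        """Eaton's method from velocity: pore pressure rises above
        hydrostatic as observed velocity falls below the normal trend."""
        return sv - (sv - p_hydro) * (v_obs / v_norm) ** exponent

    def equivalent_depth_pore_pressure(sv_z, sv_ze, p_hydro_ze):
        """Equivalent Depth method: a point at depth z with the same
        measured property as a normally pressured point at z_e is
        assumed to carry the same effective vertical stress."""
        sigma_eff = sv_ze - p_hydro_ze   # effective stress at z_e
        return sv_z - sigma_eff          # pore pressure at z

    # Illustrative values only (psi and ft/s scales assumed).
    print(eaton_pore_pressure(sv=10500.0, p_hydro=5300.0,
                              v_obs=2800.0, v_norm=3400.0))
    print(equivalent_depth_pore_pressure(sv_z=10500.0, sv_ze=7200.0,
                                         p_hydro_ze=3600.0))
    ```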

  17. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
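
    As a rough illustration of the pipeline (a Gaussian-mixture likelihood over joint image/depth features plus a simple Bayesian predictor), the sketch below fits a GMM to concatenated feature vectors and predicts depth features by Gaussian conditioning. The NSS features and depth-pattern dictionary of the paper are replaced by synthetic placeholder vectors:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n, dx, dd = 2000, 8, 4
    X = rng.normal(size=(n, dx))                  # stand-in image features
    W = rng.normal(size=(dx, dd))
    D = X @ W + 0.1 * rng.normal(size=(n, dd))    # stand-in depth features

    # Joint model over concatenated (image, depth) feature vectors.
    gmm = GaussianMixture(n_components=5, covariance_type="full",
                          random_state=0).fit(np.hstack([X, D]))

    def predict_depth(x):
        """E[d | x] under the joint GMM: a mixture of linear regressors,
        weighted by each component's responsibility for x."""
        logw, preds = [], []
        for k in range(gmm.n_components):
            mx, md = gmm.means_[k, :dx], gmm.means_[k, dx:]
            Sxx = gmm.covariances_[k][:dx, :dx]
            Sdx = gmm.covariances_[k][dx:, :dx]
            diff = x - mx
            sol = np.linalg.solve(Sxx, diff)
            logw.append(np.log(gmm.weights_[k])
                        - 0.5 * (diff @ sol + np.linalg.slogdet(Sxx)[1]))
            preds.append(md + Sdx @ sol)          # conditional mean of d
        w = np.exp(np.array(logw) - max(logw))
        w /= w.sum()
        return sum(wk * pk for wk, pk in zip(w, preds))

    print(predict_depth(X[0]))   # compare with the true D[0]
    ```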

  18. Profilometric characterization of DOEs with continuous microrelief

    NASA Astrophysics Data System (ADS)

    Korolkov, V. P.; Ostapenko, S. V.; Shimansky, R. V.

    2008-09-01

    The methodology of local characterization of continuous-relief diffractive optical elements is discussed. The local profile depth can be evaluated using an "approximated depth", defined without taking the profile near the diffractive zone boundaries into account. Several methods of estimating the approximated depth are offered.

  19. Magnetic Basement Depth Inversion in the Space Domain

    NASA Astrophysics Data System (ADS)

    Nunes, Tiago Mane; Barbosa, Valéria Cristina F.; Silva, João Batista C.

    2008-10-01

    We present a total-field anomaly inversion method to determine both the basement relief and the magnetization direction (inclination and declination) of a 2D sedimentary basin presuming negligible sediment magnetization. Our method assumes that the magnetic intensity contrast is constant and known. We use a nonspectral approach based on approximating the vertical cross section of the sedimentary basin by a polygon, whose uppermost vertices are forced to coincide with the basin outcrop, which are presumably known. For fixed values of the x coordinates our method estimates the z coordinates of the unknown polygon vertices. To obtain the magnetization direction we assume that besides the total-field anomaly, information about the basement’s outcrops at the basin borders and the basement depths at a few points is available. To obtain stable depth-to-basement estimates we impose overall smoothness and positivity constraints on the parameter estimates. Tests on synthetic data showed that the simultaneous estimation of the irregular basement relief and the magnetization direction yields good estimates for the relief despite the mild instability in the magnetization direction. The inversion of aeromagnetic data from the onshore Almada Basin, Brazil, revealed a shallow, eastward-dipping basement basin.

  20. Total generalized variation-regularized variational model for single image dehazing

    NASA Astrophysics Data System (ADS)

    Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen

    2018-04-01

    Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model is proposed to simultaneously estimate the haze-free image and the depth map. In particular, total variation (TV) and total generalized variation (TGV) regularizers are introduced to constrain the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare the proposed method with several state-of-the-art dehazing methods. The results illustrate the superior performance of the proposed method in terms of visual quality evaluation.

  1. The Depth of Ice Inside the Smallest Cold-Traps on Mercury: Implications for Age and Origin

    NASA Astrophysics Data System (ADS)

    Rubanenko, L.; Mazarico, E.; Neumann, G. A.; Paige, D. A.

    2018-05-01

    We use Mercury Laser Altimeter data and an illumination model to constrain the depth of the smallest ice deposits on Mercury. By comparing this depth to modeled gardening rates, we estimate the age and delivery method of this ice.

  2. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules possible. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
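
    A minimal sketch of the graph step under simplifying assumptions: vertices touching the convex hull act as zero-distance sources, and a multi-source Dijkstra propagates distances along surface edges. SES construction and hull detection, the substantive parts of the paper, are omitted, and real Travel Depth also allows paths through solvent-accessible space rather than only along the surface:

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import dijkstra

    # Toy placeholder mesh: 4 vertices, a ring of edges, two of the
    # vertices assumed to lie on the convex hull.
    vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0.5], [0, 1, 1.0]], float)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    on_hull = np.array([True, False, False, True])

    rows, cols, w = [], [], []
    for i, j in edges:
        d = np.linalg.norm(vertices[i] - vertices[j])  # 3-D edge length
        rows += [i, j]; cols += [j, i]; w += [d, d]
    graph = coo_matrix((w, (rows, cols)), shape=(len(vertices),) * 2)

    # Travel depth of each vertex = shortest surface-graph distance to
    # the nearest hull-contact vertex.
    dist = dijkstra(graph, directed=False, indices=np.flatnonzero(on_hull))
    print(dist.min(axis=0))
    ```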

  3. Evaluation of using a depth sensor to estimate the weight of finishing pigs

    USDA-ARS?s Scientific Manuscript database

    A method of continuously monitoring weight would aid producers by ensuring all pigs are healthy (gaining weight) and increasing precision of marketing. Therefore, the objective was to develop an electronic method of obtaining pig weights through depth images. Seven hundred and seventy-two images and...

  4. The Use of an Intra-Articular Depth Guide in the Measurement of Partial Thickness Rotator Cuff Tears

    PubMed Central

    Carroll, Michael J.; More, Kristie D.; Sohmer, Stephen; Nelson, Atiba A.; Sciore, Paul; Boorman, Richard; Hollinshead, Robert; Lo, Ian K. Y.

    2013-01-01

    Purpose. The purpose of this study was to compare the accuracy of the conventional method for determining the percentage of partial thickness rotator cuff tears to a method using an intra-articular depth guide. The clinical utility of the intra-articular depth guide was also examined. Methods. Partial rotator cuff tears were created in cadaveric shoulders. Exposed footprint, total tendon thickness, and percentage of tendon thickness torn were determined using both techniques. The results from the conventional and intra-articular depth guide methods were correlated with the true anatomic measurements. Thirty-two patients were evaluated in the clinical study. Results. Estimates of total tendon thickness (r = 0.41, P = 0.31) or percentage of thickness tears (r = 0.67, P = 0.07) using the conventional method did not correlate well with true tendon thickness. Using the intra-articular depth guide, estimates of exposed footprint (r = 0.92, P = 0.001), total tendon thickness (r = 0.96, P = 0.0001), and percentage of tendon thickness torn (r = 0.88, P = 0.004) correlated with true anatomic measurements. Seven of 32 patients had their treatment plan altered based on the measurements made by the intra-articular depth guide. Conclusions. The intra-articular depth guide appeared to better correlate with true anatomic measurements. It may be useful during the evaluation and development of treatment plans for partial thickness articular surface rotator cuff tears. PMID:23533789

  5. Shallow water bathymetry correction using sea bottom classification with multispectral satellite imagery

    NASA Astrophysics Data System (ADS)

    Kazama, Yoriko; Yamamoto, Tomonori

    2017-10-01

    Bathymetry in shallow water, especially shallower than 15 m, is important for environmental monitoring and national defense. Because the depth of shallow water is changed by sediment deposition and ocean waves, periodic monitoring of these areas is needed. Satellite images are well suited for wide-area, repeated monitoring of the sea. Sea bottom terrain models based on remote sensing data have been developed; these methods rest on the radiative transfer model of solar irradiance as affected by the atmosphere, the water column, and the sea bottom. We applied this general sea depth extraction method to WorldView-2 satellite imagery, which has very fine spatial resolution (50 cm/pixel) and eight bands at visible to near-infrared wavelengths. High-spatial-resolution satellite images can potentially reveal the detailed terrain of coral reefs and rock areas, which offers important information for amphibious landing. In addition, the WorldView-2 sensor has a band near the ultraviolet wavelengths that is transmitted through water. On the other hand, a previous study showed that the estimation error from satellite imagery was related to sea bottom materials such as sand, coral reef, sea algae, and rocks. In this study, we therefore focused on sea bottom materials and tried to improve the depth estimation accuracy. First, we classified the sea bottom materials with the SVM method, using depth data acquired by multi-beam sonar as supervised data. Then correction values in the depth estimation equation were calculated by applying the classification results. As a result, the classification accuracy of sea bottom materials was 93%, and the depth estimation error using the classification-based correction was within 1.2 m.
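
    The two-stage scheme (classify bottom type, then correct depth per class) can be sketched as follows, with synthetic stand-ins for the WorldView-2 reflectances and sonar depths; the SVM kernel and per-class linear corrections are generic choices, not the paper's exact configuration:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 500
    bands = rng.uniform(0.01, 0.2, size=(n, 8))   # stand-in reflectances
    bottom = rng.integers(0, 3, n)                # 0=sand 1=coral 2=rock
    bands[:, 0] += 0.05 * bottom                  # make classes separable
    raw_depth = rng.uniform(0.0, 15.0, n)         # uncorrected estimate
    true_depth = raw_depth + np.array([0.5, -0.8, 1.1])[bottom]  # class bias

    clf = SVC(kernel="rbf").fit(bands, bottom)    # stage 1: bottom type
    pred_class = clf.predict(bands)

    corrected = np.empty(n)
    for c in np.unique(pred_class):               # stage 2: per-class fit
        m = pred_class == c
        reg = LinearRegression().fit(raw_depth[m, None], true_depth[m])
        corrected[m] = reg.predict(raw_depth[m, None])

    print("RMSE:", np.sqrt(np.mean((corrected - true_depth) ** 2)))
    ```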

  6. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and estimation of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
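
    A compaction-trend sketch of the Magara-style idea: fit the shale transit-time trend in log space, extrapolate to an assumed uncompacted (mudline) value, and read the missing section from the offset. The values are synthetic and the uncompacted Δt is an assumed calibration constant:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt0_true, c = 200.0, 6e-4     # mudline transit time (us/ft), decay (1/ft)
    erosion_true = 3000.0         # ft of section removed

    # Present-day depths of a shale interval that was once buried deeper.
    z = np.linspace(2000.0, 8000.0, 60)
    dt = dt0_true * np.exp(-c * (z + erosion_true)) * rng.normal(1, 0.01, 60)

    # Exponential trend dt(z) = dt0 * exp(-c z) is linear in log(dt).
    slope, intercept = np.polyfit(z, np.log(dt), 1)
    dt0_assumed = 200.0           # assumed uncompacted shale value
    # Depth at which the fitted trend reaches dt0; a negative value means
    # the trend reaches dt0 *above* the present surface, i.e. erosion.
    z_at_dt0 = (np.log(dt0_assumed) - intercept) / slope
    print(f"estimated erosion: {-z_at_dt0:.0f} ft")
    ```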

  7. Regionalization of precipitation characteristics in Montana using L-moments

    USGS Publications Warehouse

    Parrett, C.

    1998-01-01

    Dimensionless precipitation-frequency curves for estimating precipitation depths having small exceedance probabilities were developed for 2-, 6-, and 24-hour storm durations for three homogeneous regions in Montana. L-moment statistics were used to help define the homogeneous regions. The generalized extreme value distribution was used to construct the frequency curves for each duration within each region. The effective record length for each duration in each region was estimated using a graphical method and was found to range from 500 years for 6-hour duration data in Region 2 to 5,100 years for 24-hour duration data in Region 3. The temporal characteristics of storms were analyzed, and methods for estimating synthetic storm hyetographs were developed. Dimensionless depth-duration data were grouped by independent duration (2, 6, and 24 hours) and by region, and the beta distribution was fit to dimensionless depth data for various incremental time intervals. Ordinary least-squares regression was used to develop relations between dimensionless depths for a key, short duration - termed the kernel duration - and dimensionless depths for other durations. The regression relations were used, together with the probabilistic dimensionless depth data for the kernel duration, to calculate dimensionless depth-duration curves for exceedance probabilities from 0.1 to 0.9. Dimensionless storm hyetographs for each independent duration in each region were constructed for median-value conditions based on an exceedance probability of 0.5.
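
    The GEV-from-L-moments step can be sketched compactly using Hosking's closed-form approximation. Synthetic annual-maximum depths stand in for the Montana data; the regionalization, beta-distribution fitting, and kernel-duration regressions of the paper are not reproduced:

    ```python
    import numpy as np
    from math import gamma, log

    def sample_lmoments(x):
        """First three sample L-moments via probability-weighted moments."""
        x = np.sort(np.asarray(x, float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l1, l2, l3 / l2          # mean, L-scale, L-skewness

    def gev_from_lmoments(l1, l2, t3):
        """Hosking's approximation for GEV parameters from L-moments."""
        c = 2.0 / (3.0 + t3) - log(2) / log(3)
        k = 7.8590 * c + 2.9554 * c ** 2
        alpha = l2 * k / ((1 - 2.0 ** -k) * gamma(1 + k))
        xi = l1 + alpha * (gamma(1 + k) - 1) / k
        return xi, alpha, k             # location, scale, shape

    def gev_quantile(p_exceed, xi, alpha, k):
        """Depth with annual exceedance probability p_exceed."""
        return xi + alpha / k * (1 - (-np.log(1 - p_exceed)) ** k)

    rng = np.random.default_rng(3)
    depths = rng.gumbel(40.0, 12.0, size=60)   # stand-in 24-h annual maxima
    xi, a, k = gev_from_lmoments(*sample_lmoments(depths))
    print(f"100-yr depth: {gev_quantile(0.01, xi, a, k):.1f} mm")
    ```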

  8. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
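
    The breathing-rate part of such a pipeline reduces to spectral analysis of a mean region-of-interest signal. A minimal sketch, assuming a synthetic thorax-depth series in place of Kinect data and simple peak-picking of the FFT within a plausible breathing band:

    ```python
    import numpy as np

    def rate_from_roi(series, fs, band):
        """Dominant frequency of a mean-ROI signal, in cycles/minute,
        restricted to a physiologically plausible band (Hz)."""
        x = series - series.mean()
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        m = (freqs >= band[0]) & (freqs <= band[1])
        return 60.0 * freqs[m][np.argmax(spec[m])]

    # Stand-in for the mean depth of a thorax ROI: 0.3 Hz breathing
    # (18 breaths/min) sampled at roughly the Kinect's 30 fps.
    rng = np.random.default_rng(5)
    fs = 30.0
    t = np.arange(0, 60, 1 / fs)
    thorax = 5.0 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.5, t.size)
    print(f"breathing rate: {rate_from_roi(thorax, fs, (0.1, 0.7)):.1f} /min")
    ```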

  9. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.

  10. Research on bathymetry estimation by Worldview-2 based on the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over disputed, sensitive areas of interest. Methods for accurately obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, using the relationship between spectral information and water depth. Aimed at the water quality of the south sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model based on the theoretical interpretation models was studied, using a genetic algorithm to optimize the model; OpenMP parallel computing was introduced to greatly increase the speed of the semi-analytical optimization model. One island in the south sea of China was selected as the study area, and measured water depths were used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The model thus solves the problem of bathymetry estimation without water depth measurements and provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  11. Long-term erythemal UV doses at Sodankylä estimated using total ozone, sunshine duration, and snow depth

    NASA Astrophysics Data System (ADS)

    Lindfors, A. V.; Arola, A.; Kaurola, J.; Taalas, P.; SvenøE, T.

    2003-08-01

    A method for estimating daily erythemal UV doses using total ozone, sunshine duration, and snow depth has been developed. The method consists of three steps: (1) daily clear-sky UV doses were simulated using the UVSPEC radiative transfer program, with daily values of total ozone as input data, (2) an empirical relationship was sought between the simulated clear-sky UV doses, the measured UV doses, and the duration of bright sunshine, and (3) daily erythemal UV doses were estimated using this relationship. The method accounts for the varying surface albedo by dividing the period of interest into winter and summer days, depending on the snow depth. Using this method, the daily erythemal UV doses at Sodankylä were estimated for the period 1950-1999. This was done using Tromsø's total ozone together with Sodankylä's own sunshine duration and snow depth as input data. Although the method is fairly simple, the results are in good agreement, even on the daily scale, with the UV radiation measured with the Brewer spectrophotometer at Sodankylä. Over the period 1950-1999 a statistically significant increasing trend of 3.9% per decade in erythemal UV doses was found for March. The fact that this trend is much more pronounced during the latter part of the period, which is also the case for April, suggests a connection to the stratospheric ozone depletion. For July, on the other hand, a significant decreasing trend of 3.3% per decade, supported by the changes in both total ozone and sunshine duration, was found.

  12. The volume and mean depth of Earth's lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Heathcote, A. J.; Seekell, D. A.

    2017-01-01

    Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.

  13. Teleseismic depth estimation of the 2015 Gorkha-Nepal aftershocks

    NASA Astrophysics Data System (ADS)

    Letort, Jean; Bollinger, Laurent; Lyon-Caen, Helene; Guilhem, Aurélie; Cano, Yoann; Baillard, Christian; Adhikari, Lok Bijaya

    2016-12-01

    The depth of 61 aftershocks of the 2015 April 25 Gorkha, Nepal earthquake, that occurred within the first 20 d following the main shock, is constrained using time delays between teleseismic P phases and depth phases (pP and sP). The detection and identification of these phases are automatically processed using the cepstral method developed by Letort et al., and are validated with computed radiation patterns from the most probable focal mechanisms. The events are found to be relatively shallow (13.1 ± 3.9 km). Because depth estimations could potentially be biased by the method, velocity model or selected data, we also evaluate the depth resolution of the events from local catalogues by extracting 138 events with assumed well-constrained depth estimations. Comparison between the teleseismic depths and the depths from local and regional catalogues helps decrease epistemic uncertainties, and shows that the seismicity is clustered in a narrow band between 10 and 15 km depth. Given the geometry and depth of the major tectonic structures, most aftershocks are probably located in the immediate vicinity of the Main Himalayan Thrust (MHT) shear zone. The mid-crustal ramp of the flat/ramp MHT system is not resolved indicating that its height is moderate (less than 5-10 km) in the trace of the sections that ruptured on April 25. However, the seismicity depth range widens and deepens through an adjacent section to the east, a region that failed on 2015 May 12 during an Mw 7.3 earthquake. This deeper seismicity could reflect a step-down of the basal detachment of the MHT, a lateral structural variation which probably acted as a barrier to the dynamic rupture propagation.
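
    For intuition, the depth-phase principle reduces to one relation: the pP surface reflection travels an extra 2h·cos(θ)/v relative to P, where θ is the takeoff angle at the source. A minimal sketch assuming a constant near-source velocity and a typical teleseismic ray parameter (illustrative values, not the cepstral method of the paper):

    ```python
    import numpy as np

    def depth_from_pP_delay(delay_s, v_p=6.0, ray_param_s_km=0.06):
        """Source depth (km) from the pP-P differential time, assuming
        a constant near-source P velocity v_p (km/s) and teleseismic ray
        parameter p (s/km); sin(theta) = p * v_p at the source."""
        cos_theta = np.sqrt(1.0 - (ray_param_s_km * v_p) ** 2)
        return delay_s * v_p / (2.0 * cos_theta)

    # A 4.1 s pP-P delay maps to roughly 13 km for crustal velocities,
    # consistent with the shallow aftershock depths reported here.
    print(f"{depth_from_pP_delay(4.1):.1f} km")
    ```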

  14. Estimating the Amount of Eroded Section in a Partially Exhumed Basin from Geophysical Well Logs: An Example from the North Slope

    USGS Publications Warehouse

    Burns, W. Matthew; Hayba, Daniel O.; Rowan, Elisabeth L.; Houseknecht, David W.

    2007-01-01

    The reconstruction of burial and thermal histories of partially exhumed basins requires an estimation of the amount of erosion that has occurred since the time of maximum burial. We have developed a method for estimating eroded thickness by using porosity-depth trends derived from borehole sonic logs of wells in the Colville Basin of northern Alaska. Porosity-depth functions defined from sonic-porosity logs in wells drilled in minimally eroded parts of the basin provide a baseline for comparison with the porosity-depth trends observed in other wells across the basin. Calculated porosities, based on porosity-depth functions, were fitted to the observed data in each well by varying the amount of section assumed to have been eroded from the top of the sedimentary column. The result is an estimate of denudation at the wellsite since the time of maximum sediment accumulation. Alternative methods of estimating exhumation include fission-track analysis and projection of trendlines through vitrinite-reflectance profiles. In the Colville Basin, the methodology described here provides results generally similar to those from fission-track analysis and vitrinite-reflectance profiles, but with greatly improved spatial resolution relative to the published fission-track data and with improved reliability relative to the vitrinite-reflectance data. In addition, the exhumation estimates derived from sonic-porosity logs are independent of the thermal evolution of the basin, allowing these estimates to be used as independent variables in thermal-history modeling.

  15. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, which record both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly texture surfaces, but the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit the light field imaging technology on 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. At last, the high quality 3D face model is exactly recovered via the fusing strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  16. A new method of Curie depth evaluation from magnetic data: Theory

    NASA Technical Reports Server (NTRS)

    Won, I. J. (Principal Investigator)

    1981-01-01

    An approach to estimating the Curie point isotherm uses the classical Gauss method inverting a system of nonlinear equations. The method, slightly modified by a differential correction technique, directly inverts filtered Magsat data to calculate the crustal structure above the Curie depth, which is modeled as a magnetized layer of varying thickness and susceptibility. Since the depth below the layer is assumed to be nonmagnetic, the bottom of the layer is interpreted as the Curie depth. The method, once fully developed, tested, and compared with previous work by others, is to be applied to a portion of the eastern U.S. when sufficient Magsat data are accumulated for the region.

  17. Prediction of soil frost penetration depth in northwest of Iran using air freezing indices

    NASA Astrophysics Data System (ADS)

    Mohammadi, H.; Moghbel, M.; Ranjbar, F.

    2016-11-01

    Information about soil frost penetration depth is useful for finding solutions that reduce damage to agricultural crops, transportation and building facilities. Statistical and empirical models capable of estimating soil frost penetration depth are among the appropriate methods for obtaining this information. The main objective of this research is therefore to calculate soil frost penetration depth in northwest Iran during 2007-2008 and to validate the accuracy of two different models. To do so, the relationship between air temperature and soil temperature at different depths (5, 10, 20, 30, 50 and 100 cm) at three times of day (3, 9 and 15 GMT) for 14 weather stations over 7 provinces was analyzed using linear regression. Then two different air freezing indices (AFIs), the Norwegian and Finn indices, were implemented. Finally, frost penetration depth was calculated by the McKeown method, and the accuracy of the models was determined against actual soil frost penetration depths. The results demonstrate a significant correlation between air temperature and soil temperature down to 30 cm below the surface at all studied stations. Also, according to the results, the Norwegian index can be used effectively to determine soil frost penetration depth, with a correlation coefficient between actual and estimated depths of r = 0.92, while the Finn index overestimates the frost depth at all stations, with a correlation coefficient of r = 0.70.
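
    For orientation, frost depth is often related to an air freezing index through a Stefan-type formula, depth = sqrt(2kF/L). The sketch below uses that generic textbook relation with assumed soil properties; it is not the McKeown method applied in the paper:

    ```python
    import numpy as np

    def stefan_frost_depth(afi_degC_days, k=1.5, lv=1.5e8):
        """Stefan-type frost depth (m) from an air freezing index.

        afi_degC_days : freezing index in degC*days
        k             : frozen-soil thermal conductivity, W/(m*K) (assumed)
        lv            : volumetric latent heat of the soil, J/m^3 (assumed)
        """
        F = afi_degC_days * 86400.0        # degC*days -> degC*seconds
        return np.sqrt(2.0 * k * F / lv)

    print(f"{stefan_frost_depth(400.0):.2f} m")   # a moderately cold winter
    ```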

  18. Methods of Estimating Initial Crater Depths on Icy Satellites using Stereo Topography

    NASA Astrophysics Data System (ADS)

    Persaud, D. M.; Phillips, C. B.

    2014-12-01

    Stereo topography, combined with models of viscous relaxation of impact craters, allows for the study of the rheology and thermal history of icy satellites. An important step in calculating relaxation of craters is determining the initial depths of craters before viscous relaxation. Two methods for estimating initial crater depths on the icy satellites of Saturn have been previously discussed. White and Schenk (2013) present the craters of Iapetus as relatively unrelaxed in modeling the relaxation of craters of Rhea. Phillips et al. (2013) assume that Herschel crater on Saturn's satellite Mimas is unrelaxed in relaxation calculations and models of Rhea and Dione. In the second method, the depth of Herschel crater is scaled based on the different crater diameters and the difference in surface gravity on the large moons to predict the initial crater depths for Rhea and Dione. In the first method, since Iapetus is of similar size to Dione and Rhea, no gravity scaling is necessary; craters of similar size on Iapetus were chosen and their depths measured to determine the appropriate initial crater depths for Rhea. We test these methods by first extracting topographic profiles of impact craters on Iapetus from digital elevation models (DEMs) constructed from stereo images from the Cassini ISS instrument. We determined depths from these profiles and used them to calculate initial crater depths and relaxation percentages for Rhea and Dione craters using the methods described above. We first assumed that craters on Iapetus were relaxed, and compared the results to previously calculated relaxation percentages for Rhea and Dione relative to Herschel crater (with appropriate scaling for gravity and crater diameter). We then tested the assumption that craters on Iapetus were unrelaxed and used our new measurements of crater depth to determine relaxation percentages for Dione and Rhea. We will present results and conclusions from both methods and discuss their efficacy for determining initial crater depth. References: Phillips, C.B., et al. (2013). Lunar Planet Sci. XLIV, abstract 2766. White, O.L., and P.L. Schenk. Icarus 23, 699-709, 2013. This work was supported by the NASA Outer Planets Research Program grant NNX10AQ09G and by the NSF REU Program.

  19. Determination of the maximum-depth to potential field sources by a maximum structural index method

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may be a significant help to data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using, for example, the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  20. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation by combining block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining boundary disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundary disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
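
    A baseline SAD block matcher of the kind the paper builds on (and outperforms) is short to write. A minimal brute-force sketch matching along scanlines of a rectified pair, not the paper's segment-based pipeline:

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=16, block=5):
        """Dense disparity by SAD block matching, left image reference."""
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        pad_l = np.pad(left.astype(np.float32), r, mode="edge")
        pad_r = np.pad(right.astype(np.float32), r, mode="edge")
        for y in range(h):
            for x in range(w):
                win_l = pad_l[y:y + block, x:x + block]
                best, best_d = np.inf, 0
                for d in range(min(max_disp, x) + 1):
                    win_r = pad_r[y:y + block, x - d:x - d + block]
                    cost = np.abs(win_l - win_r).sum()  # sum of abs diffs
                    if cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # Tiny synthetic pair: the right image is the left shifted 3 pixels.
    rng = np.random.default_rng(4)
    left = rng.integers(0, 255, (40, 60)).astype(np.uint8)
    right = np.roll(left, -3, axis=1)
    print(np.median(sad_disparity(left, right)))   # ~3 in the interior
    ```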

  1. Surface Flashover of Semiconductors: A Fundamental Study

    DTIC Science & Technology

    1993-06-16

    surface electric fields for a number of samples with aluminum and gold contacts. Effects of processing variations such as anneal method (rapid thermal... more uniform pre-breakdown surface fields. 3. Various contact materials and processing methods were used to determine effects on flashover... diffusion depths determined by this method were generally consistent with the estimated depths. 2-4 In order to characterize better the diffused layers

  2. Determining the source characteristics of explosions near the Earth's surface

    DOE PAGES

    Pasyanos, Michael E.; Ford, Sean R.

    2015-04-09

    We present a method to determine the source characteristics of explosions near the air-earth interface. The technique is an extension of the regional amplitude envelope method and now accounts for the reduction of seismic amplitudes as the depth of the explosion approaches the free surface and less energy is coupled into the ground. We first apply the method to the Humming Roadrunner series of shallow explosions in New Mexico, where the yields and depths are known. From these tests, we gain an appreciation of the importance of knowing the material properties for both source coupling/excitation and the free surface effect. Although there is the expected tradeoff between depth and yield due to coupling effects, the estimated yields are generally close to the known values when the depth is constrained to the free surface. We then apply the method to a regionally recorded explosion in Syria. We estimate an explosive yield less than the 60 tons claimed by sources in the open press. The modifications to the method allow us to apply the technique to new classes of events, but we will need a better understanding of explosion source models and properties of additional geologic materials.

  3. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-03-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near the borehole DSDP-258 located in the Mentelle Basin, south west of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images; these can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP), leg 369.

  4. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-06-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2-D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching, to estimate the uncertainties of the depths of key horizons near the Deep Sea Drilling Project (DSDP) borehole 258 (DSDP-258) located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from DSDP-258 core was in accordance with the ±2σ posterior credibility intervals and predictions for depths to key horizons were made for the two new drill sites, adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images, these can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program, leg 369.

  5. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields since the Microsoft Kinect was released, because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on both sides of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of matching blocks, but smaller matching blocks generate lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of quality in the range image, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system can support a resolution of 1280 × 960, at up to 60 frames per second, for depth image sequences. PMID:28397759

  6. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE PAGES

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...

    2017-04-03

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE = 6.0 cm) and a fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm), at 0.5 m resolution and over the lidar domain (750 m × 700 m).

  7. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE = 6.0 cm) and a fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm), at 0.5 m resolution and over the lidar domain (750 m × 700 m).

  8. Mapping Curie temperature depth in the western United States with a fractal model for crustal magnetization

    USGS Publications Warehouse

    Bouligand, C.; Glen, J.M.G.; Blakely, R.J.

    2009-01-01

    We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.

  9. Epidural Catheter Placement in Morbidly Obese Parturients with the Use of an Epidural Depth Equation prior to Ultrasound Visualization

    PubMed Central

    Singh, Sukhdip; Wirth, Keith M.; Phelps, Amy L.; Badve, Manasi H.; Shah, Tanmay H.; Vallejo, Manuel C.

    2013-01-01

    Background. Previously, Balki determined that the Pearson correlation coefficient with the use of ultrasound (US) was 0.85 in morbidly obese parturients. We aimed to determine if the use of the epidural depth equation (EDE) in conjunction with US can provide better clinical correlation in estimating the distance from the skin to the epidural space in morbidly obese parturients. Methods. One hundred sixty morbidly obese (≥40 kg/m2) parturients requesting labor epidural analgesia were enrolled. Before epidural catheter placement, EDE was used to estimate depth to the epidural space. This estimation was used to help visualize the epidural space with the transverse and midline longitudinal US views and to measure depth to epidural space. The measured epidural depth was made available to the resident trainee before needle insertion. Actual needle depth (ND) to the epidural space was recorded. Results. Pearson's correlation coefficients comparing actual (ND) versus US estimated depth to the epidural space in the longitudinal median and transverse planes were 0.905 (95% CI: 0.873 to 0.929) and 0.899 (95% CI: 0.865 to 0.925), respectively. Conclusion. Use of the epidural depth equation (EDE) in conjunction with the longitudinal and transverse US views results in better clinical correlation than with the use of US alone. PMID:23983645

  10. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

    Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
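
    Bott's method itself is a fixed-point iteration: the relief under each station is corrected in proportion to the residual anomaly, with the Bouguer-slab factor 2πGΔρ as the sensitivity. A minimal planar sketch with a slab forward model standing in for the tesseroid modelling, and without the paper's smoothness regularization:

    ```python
    import numpy as np

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    RHO = 400.0          # assumed Moho density contrast, kg/m^3

    def forward_slab(dh):
        """Toy forward model: each column's anomaly (mGal) as a Bouguer
        slab of thickness dh (m of anomalous Moho relief)."""
        return 2 * np.pi * G * RHO * dh * 1e5

    def bott_invert(g_obs, n_iter=10):
        """Bott's iteration. With this slab forward model it converges
        in one step; a realistic forward model needs several."""
        dh = np.zeros_like(g_obs)
        for _ in range(n_iter):
            residual = g_obs - forward_slab(dh)            # mGal
            dh += residual / (2 * np.pi * G * RHO * 1e5)   # metres
        return dh

    g_obs = np.array([-30.0, -12.0, 4.0, 18.0, 30.0])  # synthetic anomalies
    print(bott_invert(g_obs))   # anomalous relief in metres
    ```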

  11. Sparse estimation of model-based diffuse thermal dust emission

    NASA Astrophysics Data System (ADS)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  12. Spectroscopic determination of leaf biochemistry using band-depth analysis of absorption features and stepwise multiple linear regression

    USGS Publications Warehouse

    Kokaly, R.F.; Clark, R.N.

    1999-01-01

    We develop a new method for estimating the biochemistry of plant material using spectroscopy. Normalized band depths calculated from the continuum-removed reflectance spectra of dried and ground leaves were used to estimate their concentrations of nitrogen, lignin, and cellulose. Stepwise multiple linear regression was used to select wavelengths in the broad absorption features centered at 1.73 µm, 2.10 µm, and 2.30 µm that were highly correlated with the chemistry of samples from eastern U.S. forests. Band depths of absorption features at these wavelengths were found to also be highly correlated with the chemistry of four other sites. A subset of data from the eastern U.S. forest sites was used to derive linear equations that were applied to the remaining data to successfully estimate their nitrogen, lignin, and cellulose concentrations. Correlations were highest for nitrogen (R² from 0.75 to 0.94). The consistent results indicate the possibility of establishing a single equation capable of estimating the chemical concentrations in a wide variety of species from the reflectance spectra of dried leaves. The extension of this method to remote sensing was investigated. The effects of leaf water content, sensor signal-to-noise and bandpass, atmospheric effects, and background soil exposure were examined. Leaf water was found to be the greatest challenge to extending this empirical method to the analysis of fresh whole leaves and complete vegetation canopies. The influence of leaf water on reflectance spectra must be removed to within 10%. Other effects were reduced by continuum removal and normalization of band depths. If the effects of leaf water can be compensated for, it might be possible to extend this method to remote sensing data acquired by imaging spectrometers to give estimates of nitrogen, lignin, and cellulose concentrations over large areas for use in ecosystem studies.
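
    The band-depth calculation this abstract relies on is straightforward to reproduce. Below is a hedged sketch of continuum removal and band-depth normalization for a single absorption feature, assuming a straight-line continuum between the feature end points, ascending wavelengths, and positive reflectances; all names are illustrative:

        import numpy as np

        def band_depths(wavelengths, reflectance, left, right):
            """Return normalized band depths inside [left, right] (same
            units as `wavelengths`). Continuum = straight line between the
            feature end points; depth D = 1 - R/Rc, normalized by the
            maximum depth in the feature."""
            m = (wavelengths >= left) & (wavelengths <= right)
            w, r = wavelengths[m], reflectance[m]
            # straight-line continuum between the two feature end points
            rc = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
            depth = 1.0 - r / rc
            return depth / depth.max()     # normalized band depth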

  13. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are as shallow as 22 km in the south-west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
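
    A hedged sketch of the centroid estimate underlying this approach follows, with the fractal (scaling) correction reduced to a single rescaling of the power spectrum for illustration. The beta parameter, the wavenumber slices, and the rad/km slope convention are assumptions for this sketch, not the authors' exact procedure:

        import numpy as np

        def dbms_centroid(k, power, beta=0.0, low=slice(1, 6), high=slice(6, 20)):
            """k: radial wavenumbers (rad/km, so depths come out in km);
            power: radially averaged power spectrum of the anomaly grid.
            Returns (Z_top, Z_centroid, Z_bottom)."""
            p = power * k**beta                  # undo fractal decay ~ k**(-beta)
            # centroid depth Z0 from low wavenumbers: ln(sqrt(P)/k) ~ c - k*Z0
            z0 = -np.polyfit(k[low], np.log(np.sqrt(p[low]) / k[low]), 1)[0]
            # top depth Zt from higher wavenumbers: ln(sqrt(P)) ~ c - k*Zt
            zt = -np.polyfit(k[high], np.log(np.sqrt(p[high])), 1)[0]
            return zt, z0, 2.0 * z0 - zt         # DBMS: Zb = 2*Z0 - Zt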

  14. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
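
    A minimal sketch of the quantile-transformation step at the heart of IDQT: each pixel is mapped to the depth occupying the same quantile in the field-measured depth distribution. It assumes the image variable increases monotonically with depth (invert its sign otherwise); the function and argument names are illustrative, not from the paper:

        import numpy as np

        def idqt(pixel_values, depth_sample):
            """pixel_values: 1-D array of the image variable (e.g., an MNF
            component; flatten an image first); depth_sample: field-measured
            depths defining the depth distribution. Returns one depth
            estimate per pixel."""
            ranks = pixel_values.argsort().argsort()        # 0..n-1
            quantiles = (ranks + 0.5) / pixel_values.size   # empirical CDF
            return np.quantile(depth_sample, quantiles)     # depth at same quantile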

  15. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar color likely belong to similar depths. This assumption might induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

  16. Adaptive sound speed correction for abdominal ultrasonography: preliminary results

    NASA Astrophysics Data System (ADS)

    Jin, Sungmin; Kang, Jeeun; Song, Tai-Kyung; Yoo, Yangmo

    2013-03-01

    Ultrasonography plays a critical role in assessing abdominal disorders due to its noninvasive, real-time, low-cost, and deep-penetrating capabilities. However, for imaging obese patients with a thick fat layer, it is challenging to achieve appropriate image quality with a conventional beamforming (CON) method due to phase aberration caused by the difference between sound speeds (e.g., 1580 and 1450 m/s for liver and fat, respectively). For this, various sound speed correction (SSC) methods that estimate the accumulated sound speed for a region of interest (ROI) have been previously proposed. However, with the SSC methods, the improvement in image quality is limited to a specific ROI depth. In this paper, we present the adaptive sound speed correction (ASSC) method, which can enhance the image quality at all depths by using sound speeds estimated at two different depths in the lower layer. Since these accumulated sound speeds contain the respective contributions of the layers, an optimal sound speed for each depth can be estimated by solving contribution equations. To evaluate the proposed method, a phantom study was conducted with pre-beamformed radio-frequency (RF) data acquired with a SonixTouch research package (Ultrasonix Corp., Canada) with linear and convex probes from a gel-pad-stacked tissue-mimicking phantom (Parker Lab. Inc., USA and Model 539, ATS, USA) whose sound speeds are 1610 and 1450 m/s, respectively. From the study, compared to the CON and SSC methods, the ASSC method showed improved spatial resolution and information entropy contrast (IEC) for the convex and linear array transducers, respectively. These results indicate that the ASSC method can be applied to enhance image quality when imaging obese patients in abdominal ultrasonography.
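
    The "contribution equations" admit a compact closed form when the upper-layer thickness is known: the accumulated slowness at each depth is a thickness-weighted sum of the two layer slownesses, giving a 2x2 linear system. A hedged sketch, where the known-thickness assumption and all names are illustrative rather than the authors' formulation:

        import numpy as np

        def layer_speeds(d1, c1, d2, c2, h):
            """d1, d2: two distinct depths in the lower layer (m); c1, c2:
            accumulated sound speeds estimated at those depths (m/s);
            h: upper-layer thickness (m). Returns (v_upper, v_lower)."""
            # travel time to depth d:  d/c = h*s_up + (d - h)*s_low
            A = np.array([[h, d1 - h],
                          [h, d2 - h]], dtype=float)
            b = np.array([d1 / c1, d2 / c2])
            s_up, s_low = np.linalg.solve(A, b)   # layer slownesses (s/m)
            return 1.0 / s_up, 1.0 / s_low

    The optimal average speed for focusing at any depth d then follows as d / (h/v_upper + (d - h)/v_lower), which is one plausible reading of the per-depth correction the abstract describes.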

  17. Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization

    PubMed Central

    Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun

    2015-01-01

    The tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on the tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data based on the mechanical properties of the tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. The obtained results show that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306

  18. Subsurface damage in some single crystalline optical materials.

    PubMed

    Randi, Joseph A; Lambropoulos, John C; Jacobs, Stephen D

    2005-04-20

    We present a nondestructive method for estimating the depth of subsurface damage (SSD) in some single crystalline optical materials (silicon, lithium niobate, calcium fluoride, magnesium fluoride, and sapphire); the method is established by correlating surface microroughness measurements, specifically, the peak-to-valley (p-v) microroughness, to the depth of SSD found by a novel destructive method. Previous methods for directly determining the depth of SSD may be insufficient when applied to single crystals that are very soft or very hard. Our novel destructive technique uses magnetorheological finishing to polish spots onto a ground surface. We find that p-v surface microroughness, appropriately scaled, gives an upper bound to SSD. Our data suggest that SSD in the single crystalline optical materials included in our study (deterministically microground, lapped, and sawed) is always less than 1.4 times the p-v surface microroughness found by white-light interferometry. We also discuss another way of estimating SSD based on the abrasive size used.

  19. Joint optic disc and cup boundary extraction from monocular fundus images.

    PubMed

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by a drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map pairs (the depth maps derived from Optical Coherence Tomography). The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable.

  20. Nondestructive estimation of depth of surface opening cracks in concrete beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arne, Kevin; In, Chiwon; Kurtis, Kimberly

    Concrete is one of the most widely used construction materials, and thus assessment of damage in concrete structures is of the utmost importance from both a safety and a financial point of view. Of particular interest are surface opening cracks that extend through the concrete cover, as these can expose the steel reinforcement bars underneath and induce corrosion in them. This corrosion can lead to significant subsequent damage in concrete such as cracking and delamination of the cover concrete as well as rust staining on the surface. Concrete beams are designed and constructed in such a way as to produce crack depths of up to around 13 cm. Two different types of measurements are made in situ to estimate the depths of real surface cracks (as opposed to saw-cut notches) after unloading: one based on the impact-echo method and the other based on the diffuse ultrasonic method. These measurements are compared to the crack depth visually observed on the sides of the beams. The advantages and disadvantages of each method are discussed.

  1. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring. These aspects include: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake, and depth, while the seismic moment (or, equivalently, the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
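
    The time-shift allowance is the distinctive step: each synthetic window is scored at its best lag rather than at zero lag. A hedged sketch of that scoring follows, using a brute-force lag search for clarity (np.roll wraps around, which a production code would avoid with zero padding); the names are illustrative, not the CAP code itself:

        import numpy as np

        def shifted_misfit(data, synth, max_shift):
            """Best L2 misfit between one data window and its synthetic
            over integer lags in [-max_shift, max_shift] samples."""
            best = np.inf
            for lag in range(-max_shift, max_shift + 1):
                s = np.roll(synth, lag)   # wrap-around: a simplification
                best = min(best, float(np.sum((data - s) ** 2)))
            return best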

  2. Necessary Sequencing Depth and Clustering Method to Obtain Relatively Stable Diversity Patterns in Studying Fish Gut Microbiota.

    PubMed

    Xiao, Fanshu; Yu, Yuhe; Li, Jinjin; Juneau, Philippe; Yan, Qingyun

    2018-05-25

    The 16S rRNA gene has been one of the most commonly used molecular markers for estimating bacterial diversity during the past decades. However, there is no consistency in sequencing depth (from thousands to millions of sequences per sample), and the clustering methods used to generate OTUs may also differ among studies. These inconsistent premises make effective comparisons among studies difficult or unreliable. This study aims to examine the sequencing depth and clustering method needed to ensure stable diversity patterns when studying fish gut microbiota. A dataset of 42 samples of Siniperca chuatsi (a carnivorous fish) gut microbiota was used to test how sequencing depth and clustering may affect the alpha and beta diversity patterns of fish intestinal microbiota. Interestingly, we found that the sequencing depth (resampling 1000-11,000 per sample) and the clustering methods (UPARSE and UCLUST) did not bias the estimates of the diversity patterns during fish development from larva to adult. Although we acknowledge that a suitable sequencing depth may differ case by case, our finding indicates that shallow sequencing, such as 1000 sequences per sample, may be enough to reflect the general diversity patterns of fish gut microbiota. However, we have shown in the present study that strict pre-processing of the original sequences is required to ensure reliable results. This study provides evidence to help make a sound scientific choice of sequencing depth and clustering method for future studies of fish gut microbiota patterns, while reducing the costs of analysis as much as possible.

  3. Estimate of Boundary-Layer Depth Over Beijing, China, Using Doppler Lidar Data During SURF-2015

    NASA Astrophysics Data System (ADS)

    Huang, Meng; Gao, Zhiqiu; Miao, Shiguang; Chen, Fei; LeMone, Margaret A.; Li, Ju; Hu, Fei; Wang, Linlin

    2017-03-01

    Planetary boundary-layer (PBL) structure was investigated using observations from a Doppler lidar and the 325-m Institute of Atmospheric Physics (IAP) meteorological tower in the centre of Beijing during the summer 2015 Study of Urban-impacts on Rainfall and Fog/haze (SURF-2015) field campaign. Using six fair-weather days of lidar and tower data under clear to cloudy skies, we evaluate the ability of the Doppler lidar to probe the urban boundary-layer structure, and then propose a composite method for estimating the diurnal cycle of the PBL depth using the Doppler lidar. For the convective boundary layer (CBL), a threshold method using the vertical velocity variance (σ_w² > 0.1 m² s⁻²) is used, since it provides more reliable CBL depths than a conventional maximum wind-shear method. The nocturnal boundary-layer (NBL) depth is defined as the height at which σ_w² decreases to 10% of its near-surface maximum minus a background variance. The PBL depths determined by combining these methods have average values ranging from ≈270 to ≈1500 m for the six days, with the greatest maximum depths associated with clear skies. Release of stored and anthropogenic heat contributes to the maintenance of turbulence until late evening, keeping the NBL near-neutral and deeper at night than would be expected over a natural surface. The NBL typically becomes shallower with time, but grows in the presence of low-level nocturnal jets. While current results are promising, data over a broader range of conditions are needed to fully develop our PBL-depth algorithms.
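
    A minimal sketch of the variance-threshold idea for the CBL: take the boundary-layer top as the last range gate of the surface-based layer where the vertical-velocity variance stays above 0.1 m² s⁻². This is a simplified reading of the composite method (no NBL branch, no background-variance correction); names are illustrative:

        import numpy as np

        def cbl_depth(heights, w_var, threshold=0.1):
            """heights: ascending lidar range gates (m); w_var: vertical
            velocity variance per gate (m2 s-2). Returns the CBL top (m)
            or NaN if no turbulent layer is attached to the surface."""
            above = w_var > threshold
            if not above[0]:
                return float('nan')           # no surface-based turbulence
            if above.all():
                return heights[-1]            # turbulent through whole profile
            first_below = np.argmin(above)    # first gate below the threshold
            return heights[first_below - 1]   # last turbulent gate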

  4. On Correlated-noise Analyses Applied to Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison

    2017-01-01

    Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound for estimating parameter uncertainties and thus recommend refraining from it altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, where they may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ∼45-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions of the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
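
    A hedged sketch of the time-averaging diagnostic discussed here: compute the rms of binned residuals as a function of bin size and compare it with the white-noise expectation rms(1)/sqrt(N); an excess at large bin sizes signals time-correlated noise. This is a simplified version (no small-sample correction); names are illustrative:

        import numpy as np

        def rms_vs_binsize(residuals, max_bin=None):
            """Return (bin_sizes, rms, white_noise_expectation) for the
            rms-versus-bin-size curve of a residual time series."""
            n = residuals.size
            max_bin = max_bin or n // 10      # keep several bins per size
            sizes = np.arange(1, max_bin + 1)
            rms = np.array([
                np.sqrt(np.mean(np.array(
                    [residuals[i*s:(i+1)*s].mean() for i in range(n // s)]
                ) ** 2))
                for s in sizes
            ])
            expected = rms[0] / np.sqrt(sizes)   # uncorrelated-noise prediction
            return sizes, rms, expected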

  5. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least-squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates is available. The method uses multitaper spectral estimation, and specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to tune and confirm the depth-time fit can aid climate reconstructions by improving the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.

  6. Contamination in the MACHO data set and the puzzle of Large Magellanic Cloud microlensing

    NASA Astrophysics Data System (ADS)

    Griest, Kim; Thomas, Christian L.

    2005-05-01

    In a recent series of three papers, Belokurov, Evans & Le Du and Evans & Belokurov reanalysed the MACHO collaboration data and gave alternative sets of microlensing events and an alternative optical depth to microlensing towards the Large Magellanic Cloud (LMC). Although these authors examined less than 0.2 per cent of the data, they reported that by using a neural net program they had reliably selected a better (and smaller) set of microlensing candidates. Estimating the optical depth from this smaller set, they claimed that the MACHO collaboration overestimated the optical depth by a significant factor and that the MACHO microlensing experiment is consistent with lensing by known stars in the Milky Way and LMC. As we show below, the analysis by these authors contains several errors, and as a result their conclusions are incorrect. Their efficiency analysis is in error, and since they did not search through the entire MACHO data set, they do not know how many microlensing events their neural net would find in the data nor what optical depth their method would give. Examination of their selected events suggests that their method misses low signal-to-noise ratio events and thus would have lower efficiency than the MACHO selection criteria. In addition, their method is likely to give many more false positives (non-lensing events identified as lensing). Both effects would increase their estimated optical depth. Finally, we note that the EROS discovery that LMC event 23 is a variable star reduces the MACHO collaboration estimates of optical depth and the Macho halo fraction by around 8 per cent, and does open the question of additional contamination.

  7. Velocity gradients and reservoir volumes: lessons in computational sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, P.W.

    1995-12-31

    The sensitivity of reservoir volume estimation from depth-converted geophysical time maps to the velocity gradients employed is investigated through a simple model study. The computed volumes are disconcertingly sensitive to gradients, both horizontal and vertical. The need for an accurate method of time-to-depth conversion is well demonstrated by the model study, in which errors in velocity are magnified 40-fold in the computation of the volume. Thus, if +/- 10% accuracy in the volume is desired, we must be able to estimate the velocity at the water contact with 0.25% accuracy. Put another way, if the velocity is 8000 feet per second at the well, then we have only +/- 20 feet per second of leeway in estimating the velocity at the water contact. Very moderate horizontal and vertical gradients would typically indicate a velocity change of a few hundred feet per second if they are in the same direction. Clearly the interpreter needs to be very careful. A methodology is demonstrated which takes into account all the information that is available: velocities, tops, depositional and lithologic spatial patterns, and common sense. It is assumed that, through appropriate use of check-shot and other time-depth information, the interpreter has correctly tied the reflection picks to the well tops. Such ties are ordinarily too soft for direct time-depth conversion to give adequate depth ties. The proposed method uses a common compaction law as its basis and incorporates time picks, tops, and stratigraphic maps into the depth conversion process. The resulting depth map ties the known well tops in an optimum fashion.

  8. Estimation of photosynthetically available radiation (PAR) from OCEANSAT-I OCM using a simple atmospheric radiative transfer model

    NASA Astrophysics Data System (ADS)

    Tripathy, Madhumita; Raman, Mini; Chauhan, Prakash

    2015-10-01

    Photosynthetically available radiation (PAR) is an important variable for radiation budgets and for marine and terrestrial ecosystem models. OCEANSAT-1 Ocean Color Monitor (OCM) PAR was estimated using two different methods under both clear and cloudy sky conditions. In the first approach, aerosol optical depth (AOD) and cloud optical depth (COD) were estimated from OCEANSAT-1 OCM TOA (top-of-atmosphere) radiance data on a pixel-by-pixel basis, and PAR was estimated from the extraterrestrial solar flux for fifteen spectral bands using a radiative transfer model. The second approach used TOA radiances measured by OCM in the PAR spectral range to compute PAR; it also included surface albedo and cloud albedo as inputs. Comparison of OCEANSAT-1 OCM PAR at noon with in situ measured PAR shows that the root mean square difference was 5.82% for method I and 7.24% for method II on daily time scales. The results indicate that the methodology adopted to estimate PAR from OCEANSAT-1 OCM can produce reasonably accurate PAR estimates over the tropical Indian Ocean region. This approach can be extended to OCEANSAT-2 OCM and future OCEANSAT-3 OCM data for operational estimation of PAR for regional marine ecosystem applications.

  9. Spectrally-Based Bathymetric Mapping of a Dynamic, Sand-Bedded Channel: Niobrara River, Nebraska, USA

    NASA Astrophysics Data System (ADS)

    Dilbone, Elizabeth K.

    Methods for spectrally based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship. For this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed-versus-predicted R² (0.81). Although misalignment between field and image data was not problematic for OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias, but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
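
    A hedged sketch of the OBRA calibration used in the first approach: regress field depths against X = ln(b_i / b_j) for every band pair and keep the pair with the highest R² (a linear calibration is shown for brevity; OBRA also admits quadratic fits). Reflectances are assumed positive; names are illustrative:

        import numpy as np

        def obra(spectra, depths):
            """spectra: (n_points, n_bands) pixel spectra at survey points;
            depths: (n_points,) field depths. Returns the best band pair
            and calibration: (i, j, slope, intercept, r2)."""
            best = (0, 1, 0.0, 0.0, -np.inf)
            n_bands = spectra.shape[1]
            for i in range(n_bands):
                for j in range(n_bands):
                    if i == j:
                        continue
                    x = np.log(spectra[:, i] / spectra[:, j])
                    slope, intercept = np.polyfit(x, depths, 1)
                    r2 = np.corrcoef(x, depths)[0, 1] ** 2
                    if r2 > best[4]:
                        best = (i, j, slope, intercept, r2)
            return best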

  10. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information from a video for 2D-to-3D conversion is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. We then employ a specific algorithm to divide a landscape-type image into many blocks and assign depth values using the relative height cue within the image. For a close-up-type image, a saliency-based method is adopted to enhance the foreground, and the result is combined with the global depth gradient to generate the final depth map. For the linear perspective type, vanishing line detection yields a vanishing point, which is regarded as the farthest point from the viewer and is assigned the deepest depth value. According to the distance between the other points and the vanishing point, the entire image is assigned corresponding depth values. Finally, depth image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm can achieve realistic 3D effects and yield satisfactory results, with perception scores of anaglyph images lying between 6.8 and 7.8.

  11. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  12. Mapping the spatial distribution and activity of (226)Ra at legacy sites through Machine Learning interpretation of gamma-ray spectrometry data.

    PubMed

    Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike

    2016-03-01

    Radium ((226)Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of (226)Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe compared to traditional labour-intensive sampling techniques, for example soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as (226)Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis, and Machine Learning based algorithms to derive depth and activity estimates for (226)Ra contamination. The approach was applied to spectra taken using two gamma-ray detectors (Lanthanum Bromide and Sodium Iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of Neural Networks and Lanthanum Bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland. There the method identified significantly higher activity (<3 Bq g⁻¹) occurring at depth (>0.4 m) that conventional gross-counting algorithms failed to identify. It was concluded that the method could easily be employed to identify areas of high activity potentially occurring at depth, prior to intrusive investigation using conventional sampling techniques.

  13. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    NASA Astrophysics Data System (ADS)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
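
    A minimal sketch of the horizontal-over-vertical spectral ratio (HVSR) curve computed from three-component ambient-noise records; the depth to the velocity interface is then obtained by inverting this curve (the study uses the geopsy neighbourhood-algorithm inversion, not reproduced here). Simple averaged periodograms stand in for the usual smoothed spectra; traces are assumed equal-length with at least nfft samples, and all names are illustrative:

        import numpy as np

        def hvsr(north, east, vertical, fs, nfft=4096):
            """Return (frequencies, H/V ratio) from three equal-length
            traces sampled at fs Hz."""
            def psd(x):
                segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
                return np.mean([np.abs(np.fft.rfft(s * np.hanning(nfft))) ** 2
                                for s in segs], axis=0)
            h = np.sqrt(0.5 * (psd(north) + psd(east)))   # mean horizontal spectrum
            v = np.sqrt(psd(vertical))
            return np.fft.rfftfreq(nfft, 1 / fs), h / v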

  14. On-line depth measurement for laser-drilled holes based on the intensity of plasma emission

    NASA Astrophysics Data System (ADS)

    Ho, Chao-Ching; Chiu, Chih-Mu; Chang, Yuan-Jen; Hsu, Jin-Chen; Kuo, Chia-Lung

    2014-09-01

    The direct time-resolved depth measurement of blind holes is extremely difficult due to the short time interval and the limited space inside the hole. This work presents a method that involves on-line plasma emission acquisition and analysis to obtain correlations between the machining process and the optical signal output. Given that the depths of laser-machined holes can be estimated on-line using a coaxial photodiode, such a photodiode was employed in our inspection system. Our experiments were conducted in air under normal atmospheric conditions without gas assist. The intensity of radiation emitted from the vaporized material was found to correlate with the depth of the hole. The results indicate that the estimated depths of the laser-drilled holes were inversely proportional to the maximum plasma light emission measured for a given laser pulse number.

  15. Determination of relative ion chamber calibration coefficients from depth-ionization measurements in clinical electron beams

    NASA Astrophysics Data System (ADS)

    Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.

    2014-10-01

    A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.

  16. A new method for indirectly estimating infiltration of paddy fields in situ

    NASA Astrophysics Data System (ADS)

    Xu, Yunqiang; Su, Baolin; Wang, Hongqi; He, Jingyi

    2018-06-01

    Infiltration is one of the major processes in water balance research and pollution load estimation in paddy fields. In this study, a new method for indirectly estimating infiltration of paddy fields in situ was proposed and implemented in the Taihu Lake basin. When there is no rainfall, irrigation, or artificial drainage, the variation in water depth of a paddy field is influenced only by evapotranspiration and infiltration (E + F). Firstly, (E + F) was estimated by determining the steady rate of decrease of the water depth; then the evapotranspiration (ET) of the paddy field was calculated using the crop coefficient method with the recommended FAO-56 Penman-Monteith equation; finally, the infiltration of the paddy field was obtained by subtracting ET from (E + F). Results show that the mean infiltration of the studied paddy field during the rice jointing-booting period was 7.41 mm day⁻¹, and the mean vertical infiltration and lateral seepage of the paddy field were 5.46 and 1.95 mm day⁻¹, respectively.
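
    The water-balance bookkeeping behind this method reduces to a single subtraction once the two inputs are in hand. A minimal sketch, assuming the steady water-depth decrease and the FAO-56 ET are supplied externally; the function name is illustrative:

        def infiltration(depth_drop_rate, et):
            """depth_drop_rate: steady water-depth decrease with no rain,
            irrigation, or drainage (mm/day), i.e. E + F; et: crop
            evapotranspiration from FAO-56 Penman-Monteith (mm/day).
            Returns infiltration F (mm/day)."""
            return depth_drop_rate - et

        # Consistent with the abstract: F = 7.41 mm/day during the
        # jointing-booting period, split in the field data into
        # 5.46 mm/day vertical infiltration + 1.95 mm/day lateral seepage.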

  17. Estimating the Depth of Stratigraphic Units from Marine Seismic Profiles Using Nonstationary Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chihi, Hayet; Galli, Alain; Ravenne, Christian

    2000-03-15

    The object of this study is to build a three-dimensional (3D) geometric model of the stratigraphic units of the margin of the Rhone River on the basis of geophysical investigation by a network of seismic profiles at sea. The geometry of these units is described by depth charts of each surface identified by seismic profiling, which is done by geostatistics. The modeling starts with a statistical analysis by which we determine the parameters that enable us to calculate the variograms of the identified surfaces. After having determined the statistical parameters, we calculate the variograms of the variable Depth. By analyzing the behavior of the variogram we then can deduce whether the situation is stationary and whether the variable has an anisotropic behavior. We tried the following two nonstationary methods to obtain our estimates: (a) the method of universal kriging if the underlying variogram was directly accessible; (b) the method of increments if the underlying variogram was not directly accessible. After having modeled the variograms of the increments and of the variable itself, we calculated the surfaces by kriging the variable Depth on a small-mesh estimation grid. The two methods then are compared and their respective advantages and disadvantages are discussed, as well as their fields of application. These methods are capable of being used widely in earth sciences for automatic mapping of geometric surfaces or for variables such as a piezometric surface or a concentration, which are not 'stationary,' that is, essentially, possess a gradient or a tendency to develop systematically in space.

  18. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns, and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the south-west Deccan trap and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.

  19. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular and positional resolutions trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.

  20. Real-time estimation of lesion depth and control of radiofrequency ablation within ex vivo animal tissues using a neural network.

    PubMed

    Wang, Yearnchee Curtis; Chan, Terence Chee-Hung; Sahakian, Alan Varteres

    2018-01-04

    Radiofrequency ablation (RFA), a method of inducing thermal ablation (cell death), is often used to destroy tumours or potentially cancerous tissue. Current techniques for RFA estimation (electrical impedance tomography, Nakagami ultrasound, etc.) require long compute times (≥ 2 s) and measurement devices other than the RFA device. This study aims to determine whether a neural network (NN) can estimate ablation lesion depth for control of bipolar RFA using complex electrical impedance - since tissue electrical conductivity varies as a function of tissue temperature - in real time using only the RFA therapy device's electrodes. Three-dimensional, cubic models composed of beef liver, pork loin or pork belly represented target tissue. Temperature and complex electrical impedance from 72 data-generation ablations in pork loin and belly were used for training the NN (403 s on a Xeon processor). NN inputs were inquiry depth, starting complex impedance, and current complex impedance. Training-validation-test splits were 70%-0%-30% and 80%-10%-10% (overfit test). Once the NN-estimated lesion depth for a margin reached the target lesion depth, RFA was stopped for that margin of tissue. The NN trained to 93% accuracy, and an NN-integrated control ablated tissue to within 1.0 mm of the target lesion depth on average. Full 15-mm depth maps were calculated in 0.2 s on a single-core ARMv7 processor. The results show that a NN can make lesion depth estimations in real time using fewer in situ devices than current techniques. With the NN-based technique, physicians could deliver quicker and more precise ablation therapy.

  1. Assessing the composition of fragmented agglutinated foraminiferal assemblages in ancient sediments: comparison of counting and area-based methods in Famennian samples (Late Devonian)

    NASA Astrophysics Data System (ADS)

    Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina

    2018-01-01

    Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and, indirectly, water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternate method for accurately estimating the composition of fragmented assemblages: the cumulated area per morphotype, i.e., the sum of the areas of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on the counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, the counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.

  2. Estimation of Gravity Parameters Related to Simple Geometrical Structures by Developing an Approach Based on Deconvolution and Linear Optimization Techniques

    NASA Astrophysics Data System (ADS)

    Asfahani, J.; Tlas, M.

    2015-10-01

    An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models, such as cylinders and spheres, is proposed in this paper. The method is based on both the deconvolution technique and the simplex algorithm for linear optimization to estimate the model parameters most effectively, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient, from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different levels of white Gaussian random noise to demonstrate its capability and reliability. The results show that the parameter values estimated by the proposed method are close to the assumed true parameter values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from the real field data.

  3. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.

  4. Bedrock morphology and structure, upper Santa Cruz Basin, south-central Arizona, with transient electromagnetic survey data

    USGS Publications Warehouse

    Bultman, Mark W.; Page, William R.

    2016-10-31

    The upper Santa Cruz Basin is an important groundwater basin containing the regional aquifer for the city of Nogales, Arizona. This report provides data and interpretations of data aimed at better understanding the bedrock morphology and structure of the upper Santa Cruz Basin study area, which encompasses the Rio Rico and Nogales 1:24,000-scale U.S. Geological Survey quadrangles. Data used in this report include the Arizona Aeromagnetic and Gravity Maps and Data, referred to here as the 1996 Patagonia aeromagnetic survey, Bouguer gravity anomaly data, and conductivity-depth transforms (CDTs) from the 1998 Santa Cruz transient electromagnetic survey (whose data are included in appendixes 1 and 2 of this report). Analyses based on magnetic gradients worked well to identify the range-front faults along the Mt. Benedict horst block, the location of possibly fault-controlled canyons to the west of Mt. Benedict, the edges of buried lava flows, and numerous other concealed faults and contacts. Applying the 1996 Patagonia aeromagnetic survey data using the horizontal gradient method produced results that were most closely correlated with the observed geology. The 1996 Patagonia aeromagnetic survey was used to estimate depth to bedrock in the upper Santa Cruz Basin study area. Three different depth estimation methods were applied to the data: Euler deconvolution, horizontal gradient magnitude, and analytic signal. The final depth to bedrock map was produced by choosing the maximum depth from each of the three methods at a given location and combining all maximum depths. In locations of rocks with a known reversed natural remanent magnetic field, gravity-based depth estimates from Gettings and Houser (1997) were used. The depth to bedrock map was supported by modeling aeromagnetic anomaly data along six profiles. These cross-sectional models demonstrated that by using the depth to bedrock map generated in this study, known and concealed faults, measured and estimated magnetic susceptibilities of rocks found in the study area, and estimated natural remanent magnetic intensities and directions, reasonable geologic models can be built. This indicates that the depth to bedrock map is reasonable and geologically possible. Finally, CDTs derived from the 1998 Santa Cruz Basin transient electromagnetic survey were used to help identify basin structure and some physical properties of the basin fill in the study area. The CDTs also helped to confirm depth to bedrock estimates in the Santa Cruz Basin, in particular a region of elevated bedrock in the area of Potrero Canyon, and a deep basin in the location of the Arizona State Highway 82 microbasin. The CDTs identified many concealed faults in the study area and possibly indicate deep water-saturated clay-rich sediments in the west-central portion of the study area. These sediments grade to more sand-rich saturated sediments to the south with relatively thick, possibly unsaturated, sediments at the surface. Also, the CDTs may indicate deep saturated clay-rich sediments in the Highway 82 microbasin and in the Mount Benedict horst block from Proto Canyon south to the international border.

  5. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, they clearly need to overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements in the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities; in these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.

  6. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed; this paper proposes improved feature matching between successive video frames using a neural-network methodology to reduce the computation time of matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the matching performance between successive video frames. Each extracted feature is assigned a distance based on the Kinect depth data, which the robot can use to determine its navigation path and for obstacle-detection applications.
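
    A hedged sketch of the depth-assignment step: ORB features and brute-force matching stand in for the paper's scale- and rotation-invariant features and neural-network matcher, and the input files are hypothetical.

```python
# Hedged sketch: detect features in a Kinect color frame, match them to the
# next frame, and assign each matched feature a distance from the registered
# depth image. ORB + brute-force matching stand in for the paper's features
# and neural-network matcher; the input files are hypothetical.
import cv2
import numpy as np

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
depth1 = np.load("depth1.npy")                 # depth in mm, registered to frame1

orb = cv2.ORB_create(500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
for m in sorted(matches, key=lambda m: m.distance)[:20]:
    u, v = map(int, kp1[m.queryIdx].pt)
    d = depth1[v, u]                           # Kinect depth at the feature
    if d > 0:                                  # 0 means no depth reading
        print(f"feature ({u},{v}) matched, depth {d} mm")
```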

  7. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimations: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon to be a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.

  8. Crust and upper mantle shear wave structure of Northeast Algeria from Rayleigh wave dispersion analysis

    NASA Astrophysics Data System (ADS)

    Radi, Zohir; Yelles-Chaouche, Abdelkrim; Corchete, Victor; Guettouche, Salim

    2017-09-01

    We resolve the crust and upper mantle structure beneath Northeast Algeria at depths of 0-400 km using inversion of fundamental-mode Rayleigh waves. Our data set consists of 490 earthquakes recorded between 2007 and 2014 by five permanent broadband seismic stations in the study area. By applying a combination of different filtering techniques and an inversion method, shear-wave velocity structure was determined as a function of depth. The resolved changes in Vs at 50 km depth are in excellent agreement with crustal thickness estimates, which reflect the study area's orogenic setting, partly overlying the collision zone between the African and Eurasian plates. The inferred Moho discontinuity depths are close to those estimated for other convergent areas. In addition, there is good agreement between our results and variations in orientations of regional seismic anisotropy. At depths of 80-180 km, negative Vs anomalies beneath station CBBR suggest the existence of a failed subduction slab.

  9. Quantifying Seagrass Light Requirements Using an Algorithm to Spatially Resolve Depth of Colonization

    EPA Science Inventory

    The maximum depth of colonization (Zc) is a useful measure of seagrass growth that describes response to light attenuation in the water column. However, lack of standardization among methods for estimating Zc has limited the description of habitat requirements at spatial scales m...

  10. Shape-from-focus by tensor voting.

    PubMed

    Hariharan, R; Rajagopalan, A N

    2012-07-01

    In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.
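
    For context, the focus-operator stage such methods build on can be sketched as a classical shape-from-focus baseline (the tensor-voting validation itself is not shown); the focal stack here is a random stand-in.

```python
# Classical shape-from-focus baseline (the focus-operator stage; the paper's
# tensor-voting validation is not shown). `stack` is a stand-in focal stack:
# one grayscale frame per focus setting.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sff_depth(stack, win=9):
    # focus measure: locally averaged squared Laplacian, per frame
    fm = np.stack([uniform_filter(laplace(f.astype(float)) ** 2, win)
                   for f in stack])
    return fm.argmax(axis=0)        # per-pixel index of best-focused frame

stack = np.random.rand(10, 64, 64)  # 10 focus settings, 64x64 pixels
depth_index = sff_depth(stack)      # depth hypotheses, one per pixel
```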

  11. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
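
    A hedged sketch of the extended-BA idea: each observation contributes a 2-D reprojection residual plus a depth residual, so both measurement types constrain the pose refinement. The projection model and names are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of an extended-BA observation term: a 2-D reprojection
# residual stacked with a depth residual from the RGB-D sensor, so both
# measurement types drive the refinement. Names are illustrative.
import numpy as np

def residual(K, R, t, X, uv_obs, d_obs, w_depth=1.0):
    Xc = R @ X + t                        # 3-D point in the camera frame
    uv = (K @ (Xc / Xc[2]))[:2]           # pinhole projection to pixels
    r_img = uv - uv_obs                   # image-plane residual
    r_depth = w_depth * (Xc[2] - d_obs)   # measured-depth residual
    return np.concatenate([r_img, [r_depth]])

K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
r = residual(K, np.eye(3), np.zeros(3), np.array([0.1, 0.0, 2.0]),
             uv_obs=np.array([346.0, 240.0]), d_obs=2.01)
```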

  12. Shear-wave velocity and site-amplification factors for 50 Australian sites determined by the spectral analysis of surface waves method

    USGS Publications Warehouse

    Kayen, Robert E.; Carkin, Bradley A.; Allen, Trevor; Collins, Clive; McPherson, Andrew; Minasian, Diane L.

    2015-01-01

    One-dimensional shear-wave velocity (VS ) profiles are presented at 50 strong motion sites in New South Wales and Victoria, Australia. The VS profiles are estimated with the spectral analysis of surface waves (SASW) method. The SASW method is a noninvasive method that indirectly estimates the VS at depth from variations in the Rayleigh wave phase velocity at the surface.

  13. Receiver function analysis applied to refraction survey data

    NASA Astrophysics Data System (ADS)

    Subaru, T.; Kyosuke, O.; Hitoshi, M.

    2008-12-01

    For the estimation of the thickness of oceanic crust or petrophysical investigation of subsurface material, refraction or reflection seismic exploration is one of the methods frequently practiced. These explorations use four-component (x, y, z acceleration and pressure) seismometers, but only the compressional wave, or vertical component of the seismometers, tends to be used in the analyses. Hence, the shear wave, or lateral component of the seismograms, is needed for a more precise estimate of the thickness of oceanic crust. The receiver function at a site can be used to estimate the depth of velocity interfaces from teleseismic signals that include shear waves. Receiver function analysis uses both vertical and horizontal components of seismograms and deconvolves the horizontal with the vertical to estimate the spectral difference of P-S converted waves arriving after the direct P wave. Once the phase information of the receiver function is obtained, the depth of the velocity interface can be estimated. This analysis has the advantage of estimating the depth of velocity interfaces, including the Mohorovicic discontinuity, from two components of seismograms when P-to-S converted waves are generated at the interface. Our study presents results of a preliminary study using synthetic seismograms. First, we use three geological models, composed of a single sediment layer, a crust layer, and a sloped Moho, respectively, with underground sources. The receiver function estimates the depth and shape of the Moho interface precisely for all three models. Second, we applied the method to synthetic refraction-survey data generated not by earthquakes but by artificial sources on the ground or sea surface. Compressional seismic waves propagate below the velocity interface and radiate converted shear waves there, as well as at the other deep underground layer interfaces. However, the receiver function analysis applied to the second model cannot clearly resolve the velocity interface behind the S-P converted wave or multiply-reflected waves in a sediment layer. One cause is that the incidence angles of upcoming waves are too large, compared to the underground-source model, because of the slanted interface. As a result, incident converted shear waves have non-negligible energy contaminating the vertical component of the seismometers. Therefore, the recorded refraction waves need to be transformed from depth-lateral coordinates into radial-tangential coordinates; the Ps converted waves can then be observed clearly. Finally, we applied the receiver function analysis to a more realistic model, with a similar sloping Mohorovicic discontinuity and surface source locations as the second model plus a surface water layer, and receivers aligned on the sea bottom (the Ocean Bottom Seismometer, OBS, survey case). Owing to intricately bounced reflections, the simulated seismic section becomes more complex than in the other models. In spite of this complexity in the seismic records, we could pick the refracted waves from the Moho interface after stacking more than 20 receiver functions independently produced from each shot gather. After this processing, the receiver function analysis is justified as a method to estimate the depths of velocity interfaces and should be applicable to refraction-wave analysis.
Further study will consider more realistic models containing, for example, an inhomogeneous sediment layer, with the results ultimately used in the inversion of the depths of velocity interfaces such as the Moho.
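
    A minimal sketch of the deconvolution step common to receiver-function analysis, using frequency-domain division with water-level regularization on synthetic spike traces; the water-level value and traces are illustrative.

```python
# Minimal sketch of receiver-function deconvolution: frequency-domain
# division of the radial by the vertical component with water-level
# regularization. Spike traces stand in for rotated seismograms.
import numpy as np

def receiver_function(radial, vertical, water=0.01):
    n = len(radial)
    R, V = np.fft.rfft(radial), np.fft.rfft(vertical)
    denom = np.maximum(np.abs(V) ** 2, water * np.max(np.abs(V) ** 2))
    return np.fft.irfft(R * np.conj(V) / denom, n)

v = np.zeros(2048); v[100] = 1.0                 # direct P on the vertical
r = np.zeros(2048); r[100] = 0.3; r[350] = 0.15  # P plus a later Ps conversion
rf = receiver_function(r, v)
# with 0.01 s sampling, the Ps pulse appears 2.5 s after the direct P
```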

  14. 3-D rigid body tracking using vision and depth sensors.

    PubMed

Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter. Among the many pose-estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape-index-map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.

  15. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infrared light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image-fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to that of depth images acquired during long integration times.
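
    The calibration step can be sketched as follows: estimate the depth transfer function as the conditional mean of long-integration depth per short-integration depth bin of the joint histogram. The bin count, ranges, and simulated maps are illustrative, and the wavelet fusion stage is not shown.

```python
# Hedged sketch of the depth-transfer calibration: the conditional mean of
# long-integration depth within each short-integration depth bin of the
# joint histogram. Depth maps, ranges, and bin count are illustrative.
import numpy as np

def depth_transfer(d_short, d_long, bins=256, dmax=5000.0):
    h, xe, ye = np.histogram2d(d_short.ravel(), d_long.ravel(),
                               bins=bins, range=[[0, dmax], [0, dmax]])
    centers = 0.5 * (ye[:-1] + ye[1:])
    with np.errstate(invalid="ignore"):
        transfer = (h * centers).sum(axis=1) / h.sum(axis=1)
    return 0.5 * (xe[:-1] + xe[1:]), transfer   # short-depth bins -> long depth

d_short = np.random.uniform(500, 4500, (120, 160))            # mm
d_long = 1.02 * d_short + 30 + np.random.normal(0, 5, d_short.shape)
bins, f = depth_transfer(d_short, d_long)
```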

  16. The effect of S-wave arrival times on the accuracy of hypocenter estimation

    USGS Publications Warehouse

    Gomberg, J.S.; Shedlock, K.M.; Roecker, S.W.

    1990-01-01

    We have examined the theoretical basis behind some of the widely accepted "rules of thumb" for obtaining accurate hypocenter estimates that pertain to the use of S phases, and illustrate, in a variety of ways, why and when these "rules" are applicable. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. We conclude that a correctly timed S phase recorded within about 1.4 focal depths of the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect.

  17. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.

  18. A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor

    PubMed Central

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2018-01-01

    This paper studies and verifies the target-number and category-resolution method in multi-target cases and the depth-resolution method for aerial targets. First, target depth resolution is performed using the sign distribution of the reactive component of the vertical complex acoustic intensity; target category and number resolution in multi-target cases is realized in combination with bearing-time recording information, and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line-spectrum case and the multi-target multi-line-spectrum case. This paper also presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using Monte Carlo simulation, the feasibility of the proposed target-number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial-target field and the surface-target field under the same environmental parameters, so an aerial target can be treated as a special case of a surface target; aerial-target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity, so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed three-dimensional aerial-target depth-resolution algorithm is verified. PMID:29649173
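
    A minimal sketch of the core quantity, assuming a single line frequency and simulated pressure and vertical-velocity channels: the sign of the reactive (imaginary) part of the vertical complex acoustic intensity.

```python
# Minimal sketch: sign of the reactive (imaginary) component of the vertical
# complex acoustic intensity at one line frequency, from simulated pressure
# and vertical particle-velocity channels of a vector sensor.
import numpy as np

fs, f0 = 8000.0, 150.0                       # sample rate, line frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
p = np.cos(2 * np.pi * f0 * t)               # stand-in pressure channel
vz = 0.5 * np.cos(2 * np.pi * f0 * t + 0.4)  # stand-in vertical velocity

P, Vz = np.fft.rfft(p), np.fft.rfft(vz)
k = int(round(f0 * t.size / fs))             # FFT bin of the line
Iz = P[k] * np.conj(Vz[k])                   # vertical complex intensity
print("reactive-component sign:", np.sign(Iz.imag))
```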

  20. Antarctic Sea Ice Thickness and Snow-to-Ice Conversion from Atmospheric Reanalysis and Passive Microwave Snow Depth

    NASA Technical Reports Server (NTRS)

    Markus, Thorsten; Maksym, Ted

    2007-01-01

    Passive microwave snow depth, ice concentration, and ice motion estimates are combined with snowfall from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-40) from 1979-2001 to estimate the prevalence of snow-to-ice conversion (snow-ice formation) on level sea ice in the Antarctic for April-October. Snow ice is ubiquitous in all regions throughout the growth season. Calculated snow-ice thicknesses fall within the range of estimates from ice core analysis for most regions. However, uncertainties in both this analysis and in situ data limit the usefulness of snow depth and snow-ice production to evaluate the accuracy of ERA-40 snowfall. The East Antarctic is an exception, where calculated snow-ice production exceeds observed ice thickness over wide areas, suggesting that ERA-40 precipitation is too high there. Snow-ice thickness variability is strongly controlled not just by snow accumulation rates, but also by ice divergence. Surprisingly, snow-ice production is largely independent of snow depth, indicating that the latter may be a poor indicator of total snow accumulation. Using the presence of snow-ice formation as a proxy indicator for near-zero freeboard, we examine the possibility of estimating level ice thickness from satellite snow depths. A best estimate for the mean level ice thickness in September is 53 cm, comparing well with 51 cm from ship-based observations. The error is estimated to be 10-20 cm, which is similar to the observed interannual and regional variability. Nevertheless, this is comparable to expected errors for ice thickness determined by satellite altimeters. Improvement in satellite snow depth retrievals would benefit both of these methods.
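
    The zero-freeboard argument can be sketched with Archimedes' principle; the densities and snow depth below are typical illustrative values, not those used in the paper.

```python
# Hedged sketch of the isostasy argument: where snow-ice forms, freeboard is
# near zero, so Archimedes' principle ties level ice thickness to snow depth:
# rho_w*h_i = rho_i*h_i + rho_s*h_s  =>  h_i = h_s * rho_s / (rho_w - rho_i).
# Density values are typical assumptions, not those used in the paper.
rho_w, rho_i, rho_s = 1024.0, 917.0, 330.0   # kg/m3: seawater, sea ice, snow
h_s = 0.17                                   # satellite snow depth (m)
h_i = h_s * rho_s / (rho_w - rho_i)
print(f"level ice thickness ~ {h_i:.2f} m")  # ~0.52 m for these values
```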

  1. Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils

    NASA Technical Reports Server (NTRS)

    Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David

    2016-01-01

    Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0.3-m depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from <0.01 to 0.24 m3/m3. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p < 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the <0.01 m3/m3 RF site (pit, 5.80 kg C/m2; hammer, 4.74 kg C/m2) and at the 0.14 m3/m3 RF site (pit, 8.81 kg C/m2; hammer, 6.71 kg C/m2). The hammer corer also underestimated ρb at all sites, as did the hydraulic push corer at the 0.21 m3/m3 RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.

  2. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small-animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4% in the effective attenuation coefficient led to a 4% error in the retrieved depth for source depths of up to 12 mm, while the error in the retrieved source strength increased from 5.5% at 2 mm depth to 18% at 12 mm depth. Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10 mm could be estimated within 8%, and the relative source strength within 20%. For sources 14 mm deep, the inaccuracy in determining the relative source strength increased to 30%. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15% in the optical properties would give rise to errors of ±0.7 mm in the retrieved depth, and the source strength would be over- or underestimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 of the value estimated in a controlled medium. Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
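
    A hedged sketch of the fitting idea: Levenberg-Marquardt adjustment of source depth and strength against a planar image, with a simplified diffusion-style kernel standing in for the thesis's full model; MU_EFF and all values are illustrative.

```python
# Hedged sketch of the quantification loop: Levenberg-Marquardt fit of source
# depth and strength to a planar emission profile, with a simplified
# diffusion-style kernel standing in for the full model; MU_EFF and all
# values are illustrative.
import numpy as np
from scipy.optimize import least_squares

MU_EFF = 0.2    # effective attenuation coefficient (1/mm), assumed known

def surface_image(params, r):
    depth, strength = params
    d = np.sqrt(r ** 2 + depth ** 2)                # source-to-surface distance
    return strength * np.exp(-MU_EFF * d) / d ** 2  # simplified kernel

r = np.linspace(0, 20, 100)                         # radial positions (mm)
img = surface_image([8.0, 50.0], r) * (1 + 0.03 * np.random.randn(r.size))

fit = least_squares(lambda p: surface_image(p, r) - img,
                    x0=[5.0, 20.0], method="lm")
print("retrieved depth, strength:", fit.x)
```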

  3. Observing continental boundary-layer structure and evolution over the South African savannah using a ceilometer

    NASA Astrophysics Data System (ADS)

    Gierens, Rosa T.; Henriksson, Svante; Josipovic, Micky; Vakkari, Ville; van Zyl, Pieter G.; Beukes, Johan P.; Wood, Curtis R.; O'Connor, Ewan J.

    2018-05-01

    The atmospheric boundary layer (BL) is the atmospheric layer coupled to the Earth's surface on relatively short timescales. A key quantity is the BL depth, which is important in many applied areas of weather and climate, such as air-quality forecasting. Studying BLs in climates and biomes across the globe is important, particularly in the under-sampled southern hemisphere. The present study is based on a grazed grassland-savannah area in northwestern South Africa during October 2012 to August 2014. Ceilometers are probably the cheapest method for measuring continuous aerosol profiles up to several kilometers above ground and are thus an ideal tool for long-term studies of BLs. A ceilometer-estimated BL depth is based on profiles of attenuated backscattering coefficients from atmospheric aerosols; the sharpest drop often occurs at the BL top. Based on this, we developed a new layer-detection method that we call the signal-limited layer method. The new algorithm was applied to ceilometer profiles, classifying the BL into classic regime types: daytime convective mixing, and a night-time double layer of a surface-based stable layer with a residual layer above it. We employed wavelet fitting to increase successful BL estimation for noisy profiles. The layer-detection algorithm was supported by an eddy-flux station, rain gauges, and manual inspection. Diurnal cycles were often clear, with BL depth detected for 50% of the daytime, typically 1-3 km, and for 80% of the night-time, typically a few hundred meters. Variability was also analyzed with respect to seasons and years. Finally, BL depths were compared with ERA-Interim estimates of BL depth, showing reassuring agreement.
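
    The gradient criterion can be sketched minimally: take the height of the sharpest drop in the attenuated-backscatter profile as the BL-top candidate (the published signal-limited method adds noise screening and wavelet fitting); the profile below is synthetic.

```python
# Minimal sketch of gradient-based layer detection: the BL-top candidate is
# the height of the sharpest decrease in attenuated backscatter. The real
# signal-limited method adds noise screening and wavelet fitting; this
# profile is synthetic.
import numpy as np

z = np.arange(15, 4000, 15.0)                 # ceilometer range gates (m)
profile = 1.0 / (1 + np.exp((z - 1200) / 60)) + 0.02 * np.random.randn(z.size)

grad = np.gradient(profile, z)
bl_depth = z[np.argmin(grad)]                 # sharpest drop ~ BL top
print(f"estimated BL depth: {bl_depth:.0f} m")
```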

  4. Improving detection of copy-number variation by simultaneous bias correction and read-depth segmentation.

    PubMed

    Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei

    2013-02-01

    Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g., read-depth, read-pair, split-read) need to be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental bias in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number change while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information, such as read-pair or split-read, in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
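
    A hedged sketch of the bias-correction idea (not GENSENG itself): regress windowed read depth on confounders with a negative binomial GLM and take the ratio of observed to fitted depth as a corrected copy-number signal; the covariates are simulated.

```python
# Hedged sketch of the bias-correction idea (not GENSENG itself): fit a
# negative binomial GLM of windowed read depth on simulated confounders and
# take observed/fitted as a corrected copy-number signal to segment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
gc = rng.uniform(0.3, 0.6, n)                     # per-window GC fraction
mapq = rng.uniform(0.7, 1.0, n)                   # per-window mappability
mu = np.exp(3.0 + 1.5 * gc + 0.8 * mapq)          # biased expected depth
depth = rng.negative_binomial(5, 5 / (5 + mu))    # observed read depth

X = sm.add_constant(np.column_stack([gc, mapq]))
fit = sm.GLM(depth, X, family=sm.families.NegativeBinomial()).fit()
copy_ratio = depth / fit.mu                       # bias-corrected signal
```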

  5. Estimating water use by sugar maple trees: considerations when using heat-pulse methods in trees with deep functional sapwood.

    PubMed

    Pausch, Roman C.; Grote, Edmund E.; Dawson, Todd E.

    2000-03-01

    Accurate estimates of sapwood properties (including radial depth of functional xylem and wood water content) are critical when using the heat pulse velocity (HPV) technique to estimate tree water use. Errors in estimating the volumetric water content (V(h)) of the sapwood, especially in tree species with a large proportion of sapwood, can cause significant errors in the calculations of sap velocity and sap flow through tree boles. Scaling to the whole-stand level greatly inflates these errors. We determined the effects of season, tree size and radial wood depth on V(h) of wood cores removed from Acer saccharum Marsh. trees throughout 3 years in upstate New York. We also determined the effects of variation in V(h) on sap velocity and sap flow calculations based on HPV data collected from sap flow gauges inserted at four depths. In addition, we compared two modifications of Hatton's weighted average technique, the zero-step and zero-average methods, for determining sap velocity and sap flow at depths beyond those penetrated by the sap flow gauges. Parameter V(h) varied significantly with time of year (DOY), tree size (S), and radial wood depth (RD), and there were significant DOY x S and DOY x RD interactions. Use of a mean whole-tree V(h) value resulted in differences ranging from -6 to +47% for both sap velocity and sap flow for individual sapwood annuli compared with use of the V(h) value determined at the specific depth where a probe was placed. Whole-tree sap flow was 7% higher when calculated on the basis of the individual V(h) values compared with the mean whole-tree V(h) value. Calculated total sap flow for a tree with a DBH of 48.8 cm was 13 and 19% less using the zero-step and the zero-average velocity techniques, respectively, than the value obtained with Hatton's weighted average technique. Smaller differences among the three methods were observed for a tree with a DBH of 24.4 cm. We conclude that, for Acer saccharum: (1) mean V(h) changes significantly during the year and can range from nearly 50% during winter and early spring to 20% during the growing season; (2) large trees have a significantly greater V(h) than small trees; (3) overall, V(h) decreases and then increases significantly with radial wood depth, suggesting that radial water movement and storage are highly dynamic; and (4) V(h) estimates can vary greatly and influence subsequent water use calculations depending on whether an average or an individual V(h) value for a wood core is used. For large-diameter trees in which sapwood comprises a large fraction of total stem cross-sectional area (where sap flow gauges cannot be inserted across the entire cross-sectional area), the zero-average modification of Hatton's weighted average method reduces the potential for large errors in whole-tree and landscape water balance estimates based on the HPV method.

  6. Detecting SNPs and estimating allele frequencies in clonal bacterial populations by sequencing pooled DNA.

    PubMed

    Holt, Kathryn E; Teo, Yik Y; Li, Heng; Nair, Satheesh; Dougan, Gordon; Wain, John; Parkhill, Julian

    2009-08-15

    Here, we present a method for estimating the frequencies of SNP alleles present within pooled samples of DNA using high-throughput short-read sequencing. The method was tested on real data from six strains of the highly monomorphic pathogen Salmonella Paratyphi A, sequenced individually and in a pool. A variety of read-mapping and quality-weighting procedures were tested to determine the optimal parameters, which afforded ≥80% sensitivity of SNP detection and strong correlation with true SNP frequency at a pool-wide read depth of 40x, declining only slightly at read depths of 20-40x. The method was implemented in Perl and relies on the open-source software Maq for read mapping and SNP calling. The Perl script is freely available from ftp://ftp.sanger.ac.uk/pub/pathogens/pools/.
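
    The core estimate admits a small sketch: a quality-weighted allele frequency from the pooled reads at one site; the base calls and Phred scores below are illustrative, not the paper's weighting scheme.

```python
# Hedged sketch of a quality-weighted allele frequency at a single site from
# pooled reads; base calls and Phred scores are illustrative.
calls = ["A", "A", "G", "A", "G", "G", "G"]        # reads covering the site
quals = [30, 20, 35, 25, 40, 32, 28]               # Phred base qualities
w = [1 - 10 ** (-q / 10) for q in quals]           # P(call correct) weights
freq_G = sum(wi for c, wi in zip(calls, w) if c == "G") / sum(w)
print(f"estimated G allele frequency: {freq_G:.2f}")
```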

  7. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event, or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events are simultaneously equalized by setting to zero the difference of the processed seismograms recorded at one station from any two individual events. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions from the source to a receiver are the same for any paired events. In the frequency limit of the seismic data this is a reasonable assumption, based on the comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source-equalization technique is independent of such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many events relative to those of the announced explosions, and to develop their relationship with Mw and Mo for the NTS explosions.
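
    The equalization test can be sketched with synthetic traces: convolving each event's seismogram with the other event's source time function makes the two results identical when the source models and the shared path response are right. Boxcar STFs stand in for the MMDSTFs.

```python
# Sketch of the equalization test on synthetic traces: convolve each event's
# seismogram with the other's source time function; the difference vanishes
# when the source models (and the shared path) are correct. Boxcar STFs
# stand in for the Mueller-Murphy source time functions.
import numpy as np

def equalization_residual(seis1, seis2, stf1, stf2):
    a = np.convolve(seis1, stf2)        # event 1 reshaped by source 2
    b = np.convolve(seis2, stf1)        # event 2 reshaped by source 1
    n = min(a.size, b.size)
    return a[:n] - b[:n]                # ~0 for the correct source parameters

green = np.random.randn(500)            # shared path response
stf1, stf2 = np.ones(5) / 5, np.ones(9) / 9
seis1, seis2 = np.convolve(green, stf1), np.convolve(green, stf2)
print(np.max(np.abs(equalization_residual(seis1, seis2, stf1, stf2))))  # ~0
```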

  8. Two Techniques for Estimating Deglacial Mean-Ocean δ13 C Change from the Same Set of 493 Benthic δ13C Records

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.

    2013-12-01

    The crux of carbon redistribution over the deglaciation centers on the ocean, where the isotopic signature of terrestrial carbon (δ13C of terrestrial carbon = -25‰) is observed as a 0.3-0.7‰ shift in benthic foraminiferal δ13C. Deglacial mean-ocean δ13C estimates vary due to different subsets of benthic δ13C data and different methods of weighting the mean δ13C by volume. Here, we present a detailed 1-to-1 comparison of two methods of calculating mean δ13C change and uncertainty estimates using the same set of 493 benthic Cibicidoides spp. δ13C measurements for the LGM and Late Holocene. The first method divides the ocean into 8 regions and uses simple line fits to describe the distribution of δ13C data for each timeslice over 0.5-5 km depth. With these line fits, we estimate the δ13C value at 100-meter intervals and weight those estimates by the regional volume at each depth slice. The mean-ocean δ13C is the sum of these volume-weighted regional δ13C estimates, and the uncertainty of these mean-ocean δ13C estimates is computed using Monte Carlo simulations. The whole-ocean δ13C change is estimated using extrapolated surface- and deep-ocean δ13C estimates, and an assumed δ13C value for the Southern Ocean. This method yields an estimated LGM-to-Holocene change of 0.38±0.07‰ for 0.5-5 km and 0.35±0.16‰ for the whole ocean (Peterson et al., 2013, submitted to Paleoceanography). The second method reconstructs glacial and modern δ13C by combining the same data compilation as above with a steady-state ocean circulation model (Gebbie, 2013, submitted to Paleoceanography). The result is a tracer distribution on a 4-by-4 degree horizontal resolution grid with 23 vertical levels, and an estimate of the distribution's uncertainty that accounts for the distinct modern and glacial water-mass geometries. From both methods, we compare the regional δ13C estimates (0.5-5 km), surface δ13C estimates (0-0.5 km), deep δ13C estimates (>5 km), Southern Ocean δ13C estimates, and finally whole-ocean δ13C estimates. Additionally, we explore the sensitivity of our mean δ13C estimates to our region and depth boundaries. Such a detailed comparison broadens our understanding of the limitations of sparse geologic data sets and deepens our understanding of deglacial δ13C changes.
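
    A minimal sketch of the first method's volume weighting, with two toy regions standing in for the eight used and hypothetical line-fit coefficients and slice volumes.

```python
# Minimal sketch of the first method's volume weighting: evaluate each
# region's depth-trend line on 100 m slices, weight by slice volume, sum.
# Two toy regions, line fits, and volumes stand in for the real inputs.
import numpy as np

depths = np.arange(500, 5000, 100.0)               # 0.5-5 km depth slices (m)
fits = {"region_A": (1.2, -1.5e-4),                # d13C(z) = a + b*z (permil)
        "region_B": (0.4, -0.5e-4)}
volumes = {"region_A": np.linspace(3.0, 1.0, depths.size),   # relative volumes
           "region_B": np.linspace(9.0, 3.0, depths.size)}

num = sum(((a + b * depths) * volumes[r]).sum() for r, (a, b) in fits.items())
den = sum(v.sum() for v in volumes.values())
print(f"volume-weighted mean d13C: {num / den:.3f} permil")
```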

  9. Depth of origin of magma in eruptions.

    PubMed

    Becerril, Laura; Galindo, Ines; Gudmundsson, Agust; Morales, Jose Maria

    2013-09-26

    Many volcanic hazard factors--such as the likelihood and duration of an eruption, the eruption style, and the probability of its triggering large landslides or caldera collapses--relate to the depth of the magma source. Yet, the magma source depths are commonly poorly known, even in frequently erupting volcanoes such as Hekla in Iceland and Etna in Italy. Here we show how the length-thickness ratios of feeder dykes can be used to estimate the depth to the source magma chamber. Using this method, accurately measured volcanic fissures/feeder-dykes in El Hierro (Canary Islands) indicate a source depth of 11-15 km, which coincides with the main cloud of earthquake foci surrounding the magma chamber associated with the 2011-2012 eruption of El Hierro. The method can be used on widely available GPS and InSAR data to calculate the depths to the source magma chambers of active volcanoes worldwide.

  10. Estimation of tool wear compensation during micro-electro-discharge machining of silicon using process simulation

    NASA Astrophysics Data System (ADS)

Muralidhara; Vasa, Nilesh J.; Singaperumal, M.

    2010-02-01

    A micro-electro-discharge machine (micro-EDM) incorporating a piezo-actuated direct-drive tool feed mechanism was developed for micromachining of silicon using a copper tool. Both tool and workpiece material are removed during the micro-EDM process, which demands a tool-wear compensation technique to reach the specified depth of machining on the workpiece. An in-situ axial tool wear and machining depth measurement system was developed to investigate axial wear ratio variations with machining depth. Stepwise micromachining experiments on a silicon wafer were performed to investigate the variations in silicon removal and tool wear depths with increasing tool feed. Based on these experimental data, a tool-wear compensation method is proposed to reach the desired depth of micromachining on silicon using a copper tool. Micromachining experiments performed with the proposed tool-wear compensation method showed a maximum workpiece machining depth variation of 6%.

  12. Using electrical impedance to predict catheter-endocardial contact during RF cardiac ablation.

    PubMed

    Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Tsai, Jang-Zern; Vorperian, Vicken R; Webster, John G

    2002-03-01

    During radio-frequency (RF) cardiac catheter ablation, there is little information with which to assess the contact between the catheter tip electrode and the endocardium, because only the metal electrode shows up under fluoroscopy. We present a method that utilizes the electrical impedance between the catheter electrode and the dispersive electrode to predict the depth to which the catheter tip electrode is inserted into the endocardium. Since the resistivity of blood differs from the resistivity of the endocardium, the impedance increases as the catheter tip lodges deeper in the endocardium. In vitro measurements yielded the impedance-depth relations at 1, 10, 100, and 500 kHz. We predict the depth by spline-curve interpolation using the obtained calibration curve. This impedance method gives reasonably accurate depth predictions. We also evaluated alternative methods, such as impedance difference and impedance ratio.
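
    A minimal sketch of the spline-calibration idea, with illustrative impedance-depth calibration points at a single frequency.

```python
# Hedged sketch: build a spline of insertion depth versus measured impedance
# from in vitro calibration points (values are illustrative), then predict
# depth from a new impedance reading.
import numpy as np
from scipy.interpolate import CubicSpline

impedance = np.array([95.0, 102.0, 110.0, 119.0, 129.0])  # ohms, one frequency
depth_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # insertion depth

depth_of_z = CubicSpline(impedance, depth_mm)   # impedance must be increasing
print(depth_of_z(114.0))                        # predicted depth for a reading
```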

  13. Depth detection in interactive projection system based on one-shot black-and-white stripe pattern.

    PubMed

    Zhou, Qian; Qiao, Xiaorui; Ni, Kai; Li, Xinghui; Wang, Xiaohao

    2017-03-06

    A novel method was proposed in this research that estimates not only the screen surface, as conventional systems do, but also depth information from two-dimensional coordinates in an interactive projection system. In this method, a one-shot black-and-white stripe pattern from a projector is projected onto a screen plane, and the deformed pattern is captured by a charge-coupled-device camera. An algorithm based on simultaneous object/shadow detection is proposed to establish the correspondence. The depth information of the object is then calculated using the triangulation principle. This technology provides a more direct feeling of virtual interaction in three dimensions without using auxiliary equipment or a special screen as interaction proxies. Simulations and experiments were carried out, and the results verified the effectiveness of this method for depth detection.
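
    Once the stripe correspondence is known, the triangulation step reduces to similar triangles; the baseline, focal length, and disparity below are illustrative.

```python
# Hedged sketch of the triangulation step: with a projector-camera baseline b,
# camera focal length f (pixels), and the stripe's observed disparity d
# (pixels) relative to the reference screen plane, depth follows from
# similar triangles. Values are illustrative.
def depth_from_disparity(b_mm, f_px, d_px):
    return b_mm * f_px / d_px   # distance in mm

print(depth_from_disparity(b_mm=150.0, f_px=1400.0, d_px=35.0))  # 6000.0 mm
```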

  14. Japan unified hIgh-resolution relocated catalog for earthquakes (JUICE): Crustal seismicity beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Yano, Tomoko E.; Takeda, Tetsuya; Matsubara, Makoto; Shiomi, Katsuhiko

    2017-04-01

    We have generated a high-resolution catalog called the "Japan Unified hIgh-resolution relocated Catalog for Earthquakes" (JUICE), which can be used to evaluate the geometry and seismogenic depth of active faults in Japan. We relocated > 1.1 million hypocenters from the NIED Hi-net catalog for events which occurred between January 2001 and December 2012, down to a depth of 40 km. To parallelize the relocation procedure, the whole of Japan was divided into 1257 grid squares, and a relative hypocenter determination method was applied to the data in each grid square. We used a double-difference method, incorporating cross-correlation differential times as well as catalog differential times. This allows us to resolve, in detail, the seismicity distribution for the entire Japanese Islands. We estimated location uncertainty by a statistical resampling method using jackknife samples, and show that the uncertainty is within 0.37 km in the horizontal and 0.85 km in the vertical direction, with a 90% confidence interval, for areas with good station coverage. Our seismogenic depth estimate agrees with the lower limit of the hypocenter distribution for a recent earthquake on the Kamishiro fault (2014, Mj 6.7), which suggests that the new catalog should be useful for estimating the size of future earthquakes on inland active faults.

  15. Assessing prey fish populations in Lake Michigan: Comparison of simultaneous acoustic-midwater trawling with bottom trawling

    USGS Publications Warehouse

    Fabrizio, Mary C.; Adams, Jean V.; Curtis, Gary L.

    1997-01-01

    The Lake Michigan fish community has been monitored since the 1960s with bottom trawls, and since the late 1980s with acoustics and midwater trawls. These sampling tools are limited to different habitats: bottom trawls sample fish near bottom in areas with smooth substrates, and acoustic methods sample fish throughout the water column above all substrate types. We compared estimates of fish densities and species richness from daytime bottom trawling with those from night-time acoustic and midwater trawling at a range of depths in northeastern Lake Michigan in summer 1995. We examined estimates of total fish density as well as densities of alewife Alosa pseudoharengus (Wilson), bloater Coregonus hoyi (Gill), and rainbow smelt Osmerus mordax (Mitchell), because these three species are the dominant forage of large piscivores in Lake Michigan. In shallow water (18 m), we detected more species but fewer fish (in fish/ha and kg/ha) with bottom trawls than with acoustic-midwater trawling. Large aggregations of rainbow smelt were detected by acoustic-midwater trawling at 18 m and contributed to the differences in total fish density estimates between gears at this depth. Numerical and biomass densities of bloaters from all depths were significantly higher when based on bottom trawl samples than on acoustic-midwater trawling, and this probably contributed to the observed significant difference between methods for total fish densities (kg/ha) at 55 m. Significantly fewer alewives per hectare were estimated from bottom trawling than from acoustic-midwater trawling at 55 m, and in deeper waters, no alewives were taken by bottom trawling. The differences detected between gears resulted from alewife, bloater, and rainbow smelt vertical distributions, which varied with lake depth and time of day. Because Lake Michigan fishes are both demersal and pelagic, a single sampling method cannot completely describe the characteristics of the fish community.

  16. Comparison of nine methods to estimate ear-canal stimulus levels

    PubMed Central

    Souza, Natalie N.; Dhar, Sumitrajit; Neely, Stephen T.; Siegel, Jonathan H.

    2014-01-01

    The reliability of nine measures of the stimulus level in the human ear canal was compared by measuring the sensitivity of behavioral hearing thresholds to changes in the depth of insertion of an otoacoustic emission probe. Four measures were the ear-canal pressure, the eardrum pressure estimated from it, and the pressure measured in an ear simulator with and without compensation for insertion depth. The remaining five quantities were derived from the ear-canal pressure and the Thévenin-equivalent source characteristics of the probe: forward pressure, initial forward pressure, the pressure transmitted into the middle ear, eardrum sound pressure estimated by summing the magnitudes of the forward and reverse pressures (integrated pressure), and absorbed power. Two sets of behavioral thresholds were measured in 26 subjects from 0.125 to 20 kHz, with the probe inserted at relatively deep and shallow positions in the ear canal. The greatest dependence on insertion depth was for transmitted pressure and absorbed power. The measures with the least dependence on insertion depth throughout the frequency range (best performance) included the depth-compensated simulator, eardrum, forward, and integrated pressures. Among these, forward pressure is advantageous because it quantifies stimulus phase. PMID:25324079
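
    Forward pressure, one of the derived measures, can be sketched from the measured ear-canal pressure and the load impedance obtained via the probe's Thévenin calibration; the decomposition follows P = Pf + Pr and Z0·U = Pf - Pr, and the numbers are illustrative.

```python
# Hedged sketch: forward pressure from measured ear-canal pressure P and the
# canal input impedance Z obtained via the probe's Thevenin calibration.
# From P = Pf + Pr and Z0*U = Pf - Pr with U = P/Z:  Pf = (P/2)*(1 + Z0/Z).
# All values are illustrative.
import numpy as np

P = 1.0 * np.exp(1j * 0.3)         # measured ear-canal pressure
Z = 2.0 * np.exp(-1j * 0.5)        # ear-canal input impedance
Z0 = 1.0                           # characteristic impedance of the canal

Pf = 0.5 * P * (1 + Z0 / Z)        # forward pressure
Pr = P - Pf                        # reverse pressure
integrated = abs(Pf) + abs(Pr)     # the "integrated pressure" measure above
print(abs(Pf), integrated)
```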

  17. Seismo-volcano source localization with triaxial broad-band seismic array

    NASA Astrophysics Data System (ADS)

    Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.

    2011-10-01

    Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield; the depth, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3C. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full-wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than with a 1C array, and only the 3C method allows recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuths and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Extending 1C arrays to 3C arrays in volcano monitoring therefore allows a more accurate determination of the source epicentre and, now, an estimate of the depth.

  18. Statistical estimation of the potential possibilities for panoramic hydro-optic laser sensing

    NASA Astrophysics Data System (ADS)

    Shamanaev, Vitalii S.; Lisenko, Andrey A.

    2017-11-01

    For statistical estimation of the potential capabilities of a lidar with a matrix photodetector placed on board an aircraft, the nonstationary equation of laser sensing of a complex multicomponent sea water medium is solved by the Monte Carlo method. The lidar return power is estimated for various optical sea water characteristics in the presence of solar background radiation. For clear waters and brightness of external background illumination of 50, 1, and 10^-3 W/(m2·μm·sr), the signal-to-noise ratio (SNR) exceeds 10 down to water depths h = 45-50 m. For coastal waters, SNR ≥ 10 for h = 17-24 m, whereas for turbid sea waters, SNR ≥ 10 only to depths h = 8-12 m. Results of the statistical simulation have shown that a lidar system with optimal parameters can be used for water sensing to depths of 50 m.

  19. Wave Period and Coastal Bathymetry Estimations from Satellite Images

    NASA Astrophysics Data System (ADS)

    Danilo, Celine; Melgani, Farid

    2016-08-01

    We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can in principle be applied to SAR or optical images. To demonstrate its feasibility, we apply our method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys were available to compare our results with in-situ measurements. The results on Sentinel-1A images show that half of the images were unsuitable for applying the method (no swell, or a wavelength too small to be captured by the SAR). On the other half, 78% of the estimated wave periods are in accordance with buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth on a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
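
    The physical core of such a retrieval is the linear dispersion relation ω² = g k tanh(kh): in deep water the relation saturates and carries no depth information, while over the shelf an observed wavelength-period pair can be inverted for depth. A minimal sketch, assuming wavelength and period have already been measured from successive images:

    ```python
    import numpy as np

    def depth_from_swell(wavelength_m, period_s, g=9.81):
        """Invert omega^2 = g*k*tanh(k*h) for depth h; returns None when
        the deep-water limit is reached and depth is unobservable."""
        k = 2.0 * np.pi / wavelength_m
        omega = 2.0 * np.pi / period_s
        ratio = omega**2 / (g * k)
        if ratio >= 1.0:
            return None                      # deep water: tanh saturated
        return float(np.arctanh(ratio) / k)

    print(depth_from_swell(80.0, 10.0))      # shoaling swell -> ~7 m depth
    ```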

  20. Reconstruction of the 3-D Dynamics From Surface Variables in a High-Resolution Simulation of North Atlantic

    NASA Astrophysics Data System (ADS)

    Fresnay, S.; Ponte, A. L.; Le Gentil, S.; Le Sommer, J.

    2018-03-01

    Several methods that reconstruct the three-dimensional ocean dynamics from sea level are presented and evaluated in the Gulf Stream region with a 1/60° realistic numerical simulation. The use of sea level is motivated by its better correlation with interior pressure or quasi-geostrophic potential vorticity (PV) compared to sea surface temperature and sea surface salinity, and by its observability via satellite altimetry. The simplest method of reconstruction relies on a linear estimation of pressure at depth from sea level. Another method consists of linearly estimating PV from sea level first and then performing a PV inversion. The last method considered, labeled SQG for surface quasi-geostrophy, relies on a PV inversion but assumes no PV anomalies. The first two methods show comparable skill at levels above -800 m. They moderately outperform SQG, which emphasizes the difficulty of estimating interior PV from surface variables. Over the 250-1,000 m depth range, the three methods skillfully reconstruct pressure at wavelengths between 500 and 200 km, whereas they exhibit a rapid loss of skill between 200 and 100 km wavelengths. Applicability to a real-case scenario and leads for improvement are discussed.
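
    The SQG member of this family has a particularly compact core: with no interior PV anomalies and constant stratification, each Fourier mode of the surface signal decays with depth at the rate N|k|/f. A minimal sketch under those assumptions; the constant N and f values are illustrative, not the simulation's.

    ```python
    import numpy as np

    def sqg_pressure_at_depth(ssh, dx, z, f=1e-4, N=1e-3, g=9.81):
        """Project a sea level anomaly map (m) to the pressure anomaly per
        unit density at depth z (z < 0, in m) via the SQG transfer
        function exp(N*|k|*z/f)."""
        p_hat = np.fft.fft2(g * ssh)                      # surface pressure / rho
        ky = 2 * np.pi * np.fft.fftfreq(ssh.shape[0], dx)
        kx = 2 * np.pi * np.fft.fftfreq(ssh.shape[1], dx)
        K = np.hypot(kx[None, :], ky[:, None])            # |k| on the grid
        return np.real(np.fft.ifft2(p_hat * np.exp(N * K * z / f)))
    ```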

  1. Quantifying the contribution of groundwater on water consumption in arid crop land with shallow groundwater

    NASA Astrophysics Data System (ADS)

    Huo, Z.; Liu, Z.; Wang, X.; Qu, Z.

    2016-12-01

    Groundwater from shallow aquifers can supply substantial water for crop evapotranspiration. However, it is difficult to quantify the contribution of groundwater to crop water consumption. In the present study, regional-scale evapotranspiration and the groundwater contribution to evapotranspiration were estimated with the soil water balance equation in the Hetao Irrigation District, China, an area with shallow aquifers. Estimates used an 8-year (2006-2013) hydrological dataset including soil moisture, the depth to the water table, irrigation amounts, rainfall data, and drainage flow. The 8-year mean evapotranspiration was estimated to be 664 mm. The mean groundwater-supported evapotranspiration (ETg) was estimated to be 228 mm, varying from 145 mm to 412 mm during the crop growth period. The positive correlation between evapotranspiration and the sum of irrigation and rainfall, together with the negative correlation between ETg/ET and the sum of irrigation and rainfall, reflects the role of groundwater in meeting the evapotranspiration demand. Approximately 20% to 40% of the evapotranspiration in the study area is drawn from the shallow aquifers. Furthermore, a new method for estimating daily ETg during the crop growing season was developed. The model considers the effects of crop growth stage, climate conditions, groundwater depth and soil moisture. The method was tested with controlled lysimeter experiments on winter wheat including five controlled water table depths and four soil profiles of different textures. The simulated ETg is in good agreement with the measured data for the four soil profiles and the different depths to the groundwater table. These results could help water managers understand the significant role of groundwater and formulate reasonable water use policies in semiarid agricultural regions.
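
    The water balance underlying such estimates closes ETg from the measured terms: with ΔS = I + P + ETg - ET - D over the season, ETg = ET - (I + P) + D + ΔS. A minimal sketch with illustrative magnitudes, not the paper's daily model:

    ```python
    def groundwater_et(et_mm, irrigation_mm, rain_mm, drainage_mm, d_storage_mm):
        """Groundwater-supported evapotranspiration (mm) closed from the
        seasonal soil water balance; all inputs are depths of water in mm."""
        return et_mm - (irrigation_mm + rain_mm) + drainage_mm + d_storage_mm

    # Magnitudes loosely similar to those reported for the district:
    print(groundwater_et(664.0, 350.0, 120.0, 40.0, 6.0))   # -> 240.0 mm
    ```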

  2. Effect of Binary Source Companions on the Microlensing Optical Depth Determination toward the Galactic Bulge Field

    NASA Astrophysics Data System (ADS)

    Han, Cheongho

    2005-11-01

    Currently, gravitational microlensing survey experiments toward the Galactic bulge field use two different methods of minimizing the blending effect for the accurate determination of the optical depth τ. One is measuring τ based on clump giant (CG) source stars, and the other is using "difference image analysis" (DIA) photometry to measure the unblended source flux variation. Despite the expectation that the two estimates should be the same assuming that blending is properly considered, the estimates based on CG stars systematically fall below the DIA results based on all events with source stars down to the detection limit. Prompted by this gap, we investigate the previously unconsidered effect of companion-associated events on the τ determination. Although the image of a companion is blended with that of its primary star and thus not resolved, the event associated with the companion can be detected if the companion flux is highly magnified. Therefore, companions work effectively as source stars for microlensing, and neglecting them in the source star count could result in an erroneous τ estimate. By carrying out simulations based on the assumption that companions follow the same luminosity function as primary stars, we estimate that the contribution of the companion-associated events to the total event rate is ~5f_bi% for current surveys and can reach up to ~6f_bi% for future surveys monitoring fainter stars, where f_bi is the binary frequency. Therefore, we conclude that the companion-associated events comprise a non-negligible fraction of all events. However, their contribution to the optical depth is not large enough to explain the systematic difference between the optical depth estimates based on the two different methods.

  3. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination

    PubMed Central

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.

    2016-01-01

    In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied them to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658

  4. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination.

    PubMed

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R

    2016-11-01

    In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator's temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied them to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector's single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal.
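
    The deconvolution step named here is the classical Richardson-Lucy iteration. A minimal 1-D sketch, assuming a nonnegative digitized waveform and a measured single-photoelectron response; array names are illustrative:

    ```python
    import numpy as np

    def richardson_lucy_1d(observed, psf, n_iter=50):
        """ML (Richardson-Lucy) deconvolution of a waveform by a kernel.
        observed, psf: nonnegative 1-D arrays; the psf is normalized here."""
        psf = psf / psf.sum()
        psf_rev = psf[::-1]                            # adjoint (flipped) kernel
        estimate = np.full(observed.shape, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate = estimate * np.convolve(ratio, psf_rev, mode="same")
        return estimate
    ```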

  5. Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow

    NASA Astrophysics Data System (ADS)

    Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min

    2018-04-01

    The end surface of the guide vane or head cover is one of the parts of a high-head hydraulic turbine most seriously affected by sediment erosion. To investigate the relationship between the erosion depth of the wall surface and the characteristic erosion parameter, an estimation method combining a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward-facing step. The erosion characteristic parameter csws3 is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under actual working conditions; on this basis, the erosion depths of the guide vane and head cover end surfaces are estimated with a modified erosion coefficient K. The estimation results agree well with the observed damage. The method is thus a reasonable basis for erosion prediction of guide vanes and can provide a useful reference for determining the optimal maintenance cycle of hydraulic turbines.

  6. Markerless Knee Joint Position Measurement Using Depth Data during Stair Walking

    PubMed Central

    Mita, Akira; Yorozu, Ayanori; Takahashi, Masaki

    2017-01-01

    Climbing and descending stairs are demanding daily activities, and monitoring them may reveal the presence of musculoskeletal diseases at an early stage. A markerless system is needed to monitor such stair walking activity without mentally or physically disturbing the subject. Microsoft Kinect v2 has been used for gait monitoring, as it provides a markerless skeleton tracking function. However, few studies have used this device for stair walking monitoring, and the accuracy of its skeleton tracking function during stair walking has not been evaluated. Moreover, skeleton tracking is unlikely to be suitable for estimating body joints during stair walking, as the form of the body differs from that during level walking. In this study, a new method of estimating the 3D position of the knee joint was devised that uses the depth data of Kinect v2. The accuracy of this method was compared with that of the skeleton tracking function of Kinect v2 by simultaneously measuring subjects with a 3D motion capture system. The depth data method was found to be more accurate than skeleton tracking. The mean error of the 3D Euclidean distance of the depth data method was 43.2 ± 27.5 mm, while that of skeleton tracking was 50.4 ± 23.9 mm. This method demonstrates the feasibility of stair walking monitoring for the early discovery of musculoskeletal diseases. PMID:29165396
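
    Whatever the joint model, depth-based estimation rests on back-projecting depth pixels through the camera intrinsics. A minimal pinhole-model sketch; the intrinsic values in the example are illustrative, not Kinect v2's calibration:

    ```python
    import numpy as np

    def depth_pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
        """Back-project one depth pixel (u, v) with range depth_mm into
        camera coordinates (mm) using the pinhole model."""
        x = (u - cx) * depth_mm / fx
        y = (v - cy) * depth_mm / fy
        return np.array([x, y, depth_mm])

    # A knee candidate pixel at (256, 212) with a 2.1 m range reading:
    print(depth_pixel_to_3d(256, 212, 2100.0, 366.0, 366.0, 258.0, 206.0))
    ```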

  7. How Much Water is in That Snowpack? Improving Basin-wide Snow Water Equivalent Estimates from the Airborne Snow Observatory

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.

    2014-12-01

    In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite is capable of retrieving high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to derive snow water equivalent (SWE) from the measured depths. Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly for snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, when snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin in the Sierra Nevada and to sparse field density measurements. The objective of this study is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced during the snow depth to SWE conversion.
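
    The conversion the density field enables is the simple part; a minimal sketch of the depth-to-SWE step (not the iSnobal density algorithm itself), with illustrative grids:

    ```python
    import numpy as np

    def swe_mm(depth_m, density_kg_m3, rho_water=1000.0):
        """Snow water equivalent (mm) from gridded snow depth (m) and
        modeled snow density (kg/m^3)."""
        return depth_m * density_kg_m3 / rho_water * 1000.0

    depth = np.array([[0.40, 1.20], [0.00, 2.50]])       # lidar snow depths
    rho = np.array([[280.0, 320.0], [280.0, 360.0]])     # modeled densities
    print(swe_mm(depth, rho))                            # mm of water
    ```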

  8. Flaw depth sizing using guided waves

    NASA Astrophysics Data System (ADS)

    Cobb, Adam C.; Fisher, Jay L.

    2016-02-01

    Guided wave inspection technology is most often applied as a survey tool for pipeline inspection, where relatively low-frequency ultrasonic waves, compared to those used in conventional ultrasonic nondestructive evaluation (NDE) methods, propagate along the structure; discontinuities cause a reflection of the sound back to the sensor for flaw detection. Although the technology can be used to accurately locate a flaw over long distances, the flaw sizing performance, especially for flaw depth estimation, is much poorer than that of other, local NDE approaches. Estimating flaw depth, as opposed to other parameters, is of particular interest for failure analysis of many structures. At present, most guided wave technologies estimate the size of the flaw based on the reflected signal amplitude from the flaw compared to a known geometry reflection, such as a circumferential weld in a pipeline. This process, however, requires many assumptions to be made, such as weld geometry and flaw shape. Furthermore, it is highly dependent on the amplitude of the flaw reflection, which can vary based on many factors, such as attenuation and sensor installation. To improve sizing performance, especially depth estimation, and do so in a way that is not strictly amplitude dependent, this paper describes an approach to estimate the depth of a flaw based on a multimodal analysis. This approach eliminates the need to use geometric reflections for calibration and can be used for both pipeline and plate inspection applications. To verify the approach, a test set was manufactured on plate specimens with flaws of different widths and depths ranging from 5% to 100% of total wall thickness; 90% of these flaws were sized to within 15% of their true value. The initial multimodal sizing strategy and results are described.

  9. Independent evaluation of the SNODAS snow depth product using regional scale LiDAR-derived measurements

    NASA Astrophysics Data System (ADS)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2014-06-01

    Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically-based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the coterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the LiDAR flights. The remotely-sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.

  10. Independent evaluation of the SNODAS snow depth product using regional-scale lidar-derived measurements

    NASA Astrophysics Data System (ADS)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2015-01-01

    Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.

  11. Estimating lake-water evaporation from data of large-aperture scintillometer in the Badain Jaran Desert, China, with two comparable methods

    NASA Astrophysics Data System (ADS)

    Han, Peng-Fei; Wang, Xu-Sheng; Jin, Xiaomei; Hu, Bill X.

    2018-06-01

    Accurate quantification of evaporation (E0) from open water is vital in arid regions for water resource management and planning, especially for lakes in the desert. Scintillometers are increasingly recognized by researchers for their ability to determine sensible (H) and latent heat fluxes (LE) accurately over distances of hundreds of meters to several kilometers, although they have mainly been used to monitor land surface processes. In this study, a scintillometer was installed over a lake, with transmitter and receiver on opposite shores. Checked against evaporation-pan data, the scintillometer was successfully applied to Sumu Barun Jaran in the Badain Jaran Desert, using both the classical method and the proposed linearized β method. Because the water surface temperature is difficult to measure while water temperature at depth is easy to monitor, it is worth investigating whether shallow water temperature can substitute for the water surface temperature and how much error this substitution causes. Water temperatures at 10 and 20 cm depth were used in place of the lake surface temperature in the two methods to analyze the changes of sensible and latent heat fluxes in hot and cold seasons at half-hour time scales. With the classical method, the values of H were barely affected, and the average value of LE using water temperature at 20 cm depth was 0.8-9.5% smaller than that at 10 cm depth in cold seasons. In hot seasons, compared to the results at 10 cm depth, the average value of H increased by 20-30% and LE decreased by about 20% at 20 cm depth. In the proposed linearized β method, only the slope of the saturation vapor pressure curve (Δ) is related to the water surface temperature; Δ was estimated using available equations of saturated vapor pressure versus temperature. Replacing the water surface temperature with water temperature at 10 and 20 cm depth caused errors of 2-25% in Δ, relative to values estimated from the air temperature, depending on the season; Δ was therefore calculated by the original equation in the proposed linearized β method. Interestingly, the water temperature at 10 and 20 cm depths had little effect on H and LE (E0) in the different seasons. The reason is that the drying power of the air (EA) accounted for about 85% of the evaporation (i.e., the changes in Δ have only about a 3% impact on evaporation), which indicates that the driving force from unsaturated to saturated vapor pressure at 2 m height (i.e., the aerodynamic portion) plays the main role in evaporation. Therefore, the proposed linearized β method is recommended to quantify H and LE (E0) over open water, especially when the water surface temperature cannot be accurately measured.
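
    The surface-temperature sensitivity of the linearized β method enters only through Δ. A minimal sketch using the FAO-56 Tetens form of the saturation vapor pressure curve, a standard choice rather than necessarily the equations used in the paper:

    ```python
    import numpy as np

    def delta_kpa_per_degc(t_celsius):
        """Slope of the saturation vapor pressure curve (FAO-56 form)."""
        es = 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))  # kPa
        return 4098.0 * es / (t_celsius + 237.3) ** 2

    # Relative error in Delta from substituting a 22 degC shallow water
    # temperature for a 25 degC (unmeasured) surface temperature:
    d_true, d_sub = delta_kpa_per_degc(25.0), delta_kpa_per_degc(22.0)
    print(abs(d_sub - d_true) / d_true)
    ```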

  12. The maximum economic depth of groundwater abstraction for irrigation

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped while remaining economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries using GDP per capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas of the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
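
    The break-even comparison can be caricatured in a few lines: pumping pays while annual revenue exceeds the energy cost of the lift. All prices and efficiencies below are illustrative placeholders, not the calibrated cost sub-model of the study:

    ```python
    def max_economic_depth(revenue_per_m3, volume_m3_per_yr,
                           energy_price_per_kwh, pump_efficiency=0.5):
        """Largest lift (m) at which pumping still pays. Energy to lift V
        cubic meters by h meters is rho*g*V*h / efficiency (1 kWh = 3.6e6 J)."""
        annual_revenue = revenue_per_m3 * volume_m3_per_yr
        cost_per_m = 9810.0 * volume_m3_per_yr / pump_efficiency \
            / 3.6e6 * energy_price_per_kwh
        return annual_revenue / cost_per_m

    print(max_economic_depth(0.05, 5e5, 0.10))   # about 92 m for this crop
    ```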

  13. A probabilistic storm transposition approach for estimating exceedance probabilities of extreme precipitation depths

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.

    1989-05-01

    A storm transposition approach is investigated as a possible tool for assessing the frequency of extreme precipitation depths, that is, depths of return period much greater than 100 years. This paper focuses on estimation of the annual exceedance probability of extreme average precipitation depths over a catchment. The probabilistic storm transposition methodology is presented, and the several conceptual and methodological difficulties arising in this approach are identified. The method is implemented and partially evaluated by means of a semihypothetical example involving extreme midwestern storms and two hypothetical catchments (of 100 and 1000 mi2 (~260 and 2600 km2)) located in central Iowa. The results point out the need for further research to fully explore the potential of this approach as a tool for assessing the probabilities of rare storms, and eventually floods, a necessary element of risk-based analysis and design of large hydraulic structures.

  14. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method. They need fewer reference points for calibration than the curve fitting method and, moreover, supply a function that is valid beyond the calibrated range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  15. On the performance of surface renewal analysis to estimate sensible heat flux over two growing rice fields under the influence of regional advection

    NASA Astrophysics Data System (ADS)

    Castellví, F.; Snyder, R. L.

    2009-09-01

    High-frequency temperature data were recorded at one height and used in Surface Renewal (SR) analysis to estimate sensible heat flux during the full growing season of two rice fields located north-northeast of Colusa, CA (in the Sacramento Valley). One of the fields was seeded into a flooded paddy and the other was drill seeded before flooding. To minimize fetch requirements, the measurement height was selected to be close to the maximum expected canopy height. The roughness sub-layer depth was estimated to discriminate if the temperature data came from the inertial or roughness sub-layer. The equation to estimate the roughness sub-layer depth was derived by combining simple mixing-length theory, mixing-layer analogy, equations to account for stable atmospheric surface layer conditions, and semi-empirical canopy-architecture relationships. The potential for SR analysis as a method that operates in the full surface boundary layer was tested using data collected over growing vegetation at a site influenced by regional advection of sensible heat flux. The inputs used to estimate the sensible heat fluxes included air temperature sampled at 10 Hz, the mean and variance of the horizontal wind speed, the canopy height, and the plant area index for a given intermediate height of the canopy. Regardless of the stability conditions and measurement height above the canopy, sensible heat flux estimates using SR analysis gave results that were similar to those measured with the eddy covariance method. Under unstable cases, it was shown that the performance was sensitive to estimation of the roughness sub-layer depth. However, an expression was provided to select the crucial scale required for its estimation.

  16. Comparison of soil thickness in a zero-order basin in the Oregon Coast Range using a soil probe and electrical resistivity tomography

    USGS Publications Warehouse

    Morse, Michael S.; Lu, Ning; Godt, Jonathan W.; Revil, André; Coe, Jeffrey A.

    2012-01-01

    Accurate estimation of the soil thickness distribution in steepland drainage basins is essential for understanding ecosystem and subsurface response to infiltration. One important aspect of this characterization is assessing the heavy and antecedent rainfall conditions that lead to shallow landsliding. In this paper, we investigate the direct current (DC) resistivity method as a tool for quickly estimating soil thickness over a steep (33–40°) zero-order basin in the Oregon Coast Range, a landslide-prone region. Point measurements throughout the basin showed bedrock depths between 0.55 and 3.2 m. Resistivity of soil and bedrock samples collected from the site was measured for degrees of saturation between 40 and 92%. Resistivity of the soil was typically higher than that of the bedrock for degrees of saturation lower than 70%. Results from the laboratory measurements and point-depth measurements were used in a numerical model to evaluate the resistivity contrast at the soil-bedrock interface. A decreasing-with-depth resistivity contrast was apparent at the interface in the modeling results. At the field site, three transects were surveyed where coincident ground-truth measurements of bedrock depth were available, to test the accuracy of the method. The same decreasing-with-depth resistivity trend that was apparent in the model was also present in the survey data. The resistivity contour between 1,000 and 2,000 Ωm that marked the top of the contrast was taken as our interpreted bedrock depth in the survey data. Kriged depth-to-bedrock maps were created from both the field-measured ground truth obtained with a soil probe and the interpreted depths from the resistivity tomography, and the two were compared graphically for accuracy. Depths were interpolated as far as 16.5 m laterally from the resistivity survey lines with a root mean squared error (RMSE) of 27 cm between the measured and interpreted depths at those locations. Using several transects and analysis of the subsurface material properties, the DC resistivity method is shown to be able to delineate bedrock depth trends within the drainage basin.

  17. Regional ground-water evapotranspiration and ground-water budgets, Great Basin, Nevada

    USGS Publications Warehouse

    Nichols, William D.

    2000-01-01

    PART A: Ground-water evapotranspiration data from five sites in Nevada and seven sites in Owens Valley, California, were used to develop equations for estimating ground-water evapotranspiration as a function of phreatophyte plant cover or as a function of the depth to ground water. Equations are given for estimating mean daily seasonal and annual ground-water evapotranspiration. The equations that estimate ground-water evapotranspiration as a function of plant cover can be used to estimate regional-scale ground-water evapotranspiration using vegetation indices derived from satellite data for areas where the depth to ground water is poorly known. Equations that estimate ground-water evapotranspiration as a function of the depth to ground water can be used where the depth to ground water is known, but for which information on plant cover is lacking. PART B: Previous ground-water studies estimated ground-water evapotranspiration by phreatophytes and bare soil in Nevada on the basis of results of field studies published in 1912 and 1932. More recent studies of evapotranspiration by rangeland phreatophytes, using micrometeorological methods as discussed in Chapter A of this report, provide new data on which to base estimates of ground-water evapotranspiration. An approach correlating ground-water evapotranspiration with plant cover is used in conjunction with a modified soil-adjusted vegetation index derived from Landsat data to develop a method for estimating the magnitude and distribution of ground-water evapotranspiration at a regional scale. Large areas of phreatophytes near Duckwater and Lockes in Railroad Valley are believed to subsist on ground water discharged from nearby regional springs. Ground-water evapotranspiration by the Duckwater phreatophytes of about 11,500 acre-feet estimated by the method described in this report compares well with measured discharge of about 13,500 acre-feet from the springs near Duckwater. Measured discharge from springs near Lockes was about 2,400 acre-feet; estimated ground-water evapotranspiration using the proposed method was about 2,450 acre-feet. PART C: Previous estimates of ground-water budgets in Nevada were based on methods and data that now are more than 60 years old. Newer methods, data, and technologies were used in the present study to estimate ground-water recharge from precipitation and ground-water discharge by evapotranspiration by phreatophytes for 16 contiguous valleys in eastern Nevada. Annual ground-water recharge to these valleys was estimated to be about 855,000 acre-feet and annual ground-water evapotranspiration was estimated to be about 790,000 acre-feet; both are a little more than two times greater than previous estimates. The imbalance of recharge over evapotranspiration represents recharge that either (1) leaves the area as interbasin flow or (2) is derived from precipitation that falls on terrain within the topographic boundary of the study area but contributes to discharge from hydrologic systems that lie outside these topographic limits. A vegetation index derived from Landsat-satellite data was used to estimate phreatophyte plant cover on the floors of the 16 valleys. The estimated phreatophyte plant cover then was used to estimate annual ground-water evapotranspiration. Detailed estimates of summer, winter, and annual ground-water evapotranspiration for areas with different ranges of phreatophyte plant cover were prepared for each valley.
The estimated ground-water discharge from 15 valleys, combined with independent estimates of interbasin ground-water flow into or from a valley, were used to calculate the percentage of recharge derived from precipitation within the topographic boundary of each valley. These percentages then were used to estimate ground-water recharge from precipitation within each valley. Ground-water budgets for all 16 valleys were based on the estimated recharge from precipitation and estimated evapotranspiration. Any imba

  18. Quantifying Contaminant Mass for the Feasibility Study of the DuPont Chambers Works FUSRAP Site - 13510

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Carl; Rahman, Mahmudur; Johnson, Ann

    2013-07-01

    The U.S. Army Corps of Engineers (USACE) - Philadelphia District is conducting an environmental restoration at the DuPont Chambers Works in Deepwater, New Jersey under the Formerly Utilized Sites Remedial Action Program (FUSRAP). Discrete locations are contaminated with natural uranium, thorium-230 and radium-226. The USACE is proposing a preferred remedial alternative consisting of excavation and offsite disposal to address soil contamination, followed by monitored natural attenuation to address residual groundwater contamination. Methods were developed to quantify the error associated with contaminant volume estimates and to use mass balance calculations of the uranium plume to estimate the removal efficiency of the proposed alternative. During the remedial investigation, the USACE collected approximately 500 soil samples at various depths. As the first step of contaminant mass estimation, soil analytical data were segmented into several depth intervals. Second, using contouring software, analytical data for each depth interval were contoured to determine the lateral extent of contamination. Six different contouring algorithms were used to generate alternative interpretations of the lateral extent of the soil contamination. Finally, geographical information system software was used to produce a three-dimensional model in order to present both the lateral and vertical extent of the soil contamination and to estimate the volume of impacted soil for each depth interval. The average soil volume from all six contouring methods was used to determine the estimated volume of impacted soil. This method also allowed an estimate of the standard deviation of the waste volume estimate. It was determined that the margin of error for the method was plus or minus 17% of the waste volume, which is within the acceptable construction contingency for cost estimation. USACE collected approximately 190 groundwater samples from 40 monitor wells. It is expected that excavation and disposal of contaminated soil will remove the contaminant source zone and significantly reduce contaminant concentrations in groundwater. To test this assumption, a mass balance evaluation was performed to estimate the amount of dissolved uranium that would remain in the groundwater after completion of soil excavation. As part of this evaluation, average groundwater concentrations for the pre-excavation and post-excavation aquifer plume area were calculated to determine the percentage of the plume removed during excavation activities. In addition, the volume of the plume removed during excavation dewatering was estimated. The results of the evaluation show that approximately 98% of the aqueous uranium would be removed during the excavation phase. The USACE expects that residual levels of contamination will remain in groundwater after excavation of soil, but at levels well suited for the selection of excavation combined with monitored natural attenuation as the preferred alternative. (authors)

  19. Investigation of Atmospheric Effects on Retrieval of Sun-Induced Fluorescence Using Hyperspectral Imagery.

    PubMed

    Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei

    2016-04-06

    Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS) and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations, calculated as the variation value over the base value, in the oxygen absorption depth in the O₂-A and O₂-B bands (111.4% and 77.1% in the O₂-A band; and 27.5% and 32.6% in the O₂-B band, respectively). A comparison of fluorescence retrieval using three methods (Damm method, Braun method and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval. The Damm method is an improved 3FLD method that accounts for atmospheric effects. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with the field-measured fluorescence, yielding good results (R² = 0.91 for Damm vs. SCOPE SIF; R² = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow and Chinese ash, exhibit consistent associations between the retrieved fluorescence and field-measured fluorescence.

  20. Investigation of Atmospheric Effects on Retrieval of Sun-Induced Fluorescence Using Hyperspectral Imagery

    PubMed Central

    Ni, Zhuoya; Liu, Zhigang; Li, Zhao-Liang; Nerry, Françoise; Huo, Hongyuan; Sun, Rui; Yang, Peiqi; Zhang, Weiwei

    2016-01-01

    Significant research progress has recently been made in estimating fluorescence in the oxygen absorption bands; however, quantitative retrieval of fluorescence data is still affected by factors such as atmospheric effects. In this paper, top-of-atmosphere (TOA) radiance is generated by the MODTRAN 4 and SCOPE models. Based on simulated data, sensitivity analysis is conducted to assess the sensitivities of four indicators (depth_absorption_band, depth_nofs-depth_withfs, radiance and Fs/radiance) to atmospheric parameters (sun zenith angle (SZA), sensor height, elevation, visibility (VIS) and water content) in the oxygen absorption bands. The results indicate that the SZA and sensor height are the most sensitive parameters and that variations in these two parameters result in large variations, calculated as the variation value over the base value, in the oxygen absorption depth in the O2-A and O2-B bands (111.4% and 77.1% in the O2-A band; and 27.5% and 32.6% in the O2-B band, respectively). A comparison of fluorescence retrieval using three methods (Damm method, Braun method and DOAS) against SCOPE Fs indicates that the Damm method yields good results and that atmospheric correction can improve the accuracy of fluorescence retrieval. The Damm method is an improved 3FLD method that accounts for atmospheric effects. Finally, hyperspectral airborne images combined with other parameters (SZA, VIS and water content) are exploited to estimate fluorescence using the Damm method and the 3FLD method. The retrieved fluorescence is compared with the field-measured fluorescence, yielding good results (R2 = 0.91 for Damm vs. SCOPE SIF; R2 = 0.65 for 3FLD vs. SCOPE SIF). Five types of vegetation, including ailanthus, elm, mountain peach, willow and Chinese ash, exhibit consistent associations between the retrieved fluorescence and field-measured fluorescence. PMID:27058542

  1. Blind shear-wave velocity comparison of ReMi and MASW results with boreholes to 200 m in Santa Clara Valley: Implications for earthquake ground-motion assessment

    USGS Publications Warehouse

    Stephenson, W.J.; Louie, J.N.; Pullammanappallil, S.; Williams, R.A.; Odum, J.K.

    2005-01-01

    Multichannel analysis of surface waves (MASW) and refraction microtremor (ReMi) are two of the most recently developed surface acquisition techniques for determining shallow shear-wave velocity. We conducted a blind comparison of MASW and ReMi results with four boreholes logged to at least 260 m for shear velocity in Santa Clara Valley, California, to determine how closely these surface methods match the downhole measurements. Average shear-wave velocity estimates to depths of 30, 50, and 100 m demonstrate that the surface methods as implemented in this study can generally match borehole results to within 15% to these depths. At two of the boreholes, the average to 100 m depth was within 3%. Spectral amplifications predicted from the respective borehole velocity profiles similarly compare to within 15% or better from 1 to 10 Hz with both the MASW and ReMi surface-method velocity profiles. Overall, neither surface method was consistently better at matching the borehole velocity profiles or amplifications. Our results suggest that MASW and ReMi surface acquisition methods can both be appropriate choices for estimating shear-wave velocity and can be complementary to each other in urban settings for hazards assessment.

  2. Parameter uncertainty and nonstationarity in regional extreme rainfall frequency analysis in Qu River Basin, East China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Gu, H.

    2014-12-01

    Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. Moreover, uncertainty in hydrological frequency analysis remains persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with non-stationarity, on design rainfall depth in the Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth, estimated both by regional frequency analysis based on L-moments and at the at-site scale. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze the non-stationarity present in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is smaller than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth on both regional and at-site scales. The non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
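
    A single-site analogue of the bootstrap is easy to sketch: refit the GEV to resampled annual maxima and read off the design quantile each time. A minimal illustration with scipy; the spatial bootstrap of the study additionally resamples sites within the region:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def gev_design_depth_ci(annual_maxima, return_period=100.0,
                            n_boot=1000, alpha=0.10, seed=0):
        """Percentile-bootstrap confidence interval for the 1 - 1/T
        quantile of a GEV fitted to annual maximum rainfall depths."""
        rng = np.random.default_rng(seed)
        p = 1.0 - 1.0 / return_period
        quantiles = []
        for _ in range(n_boot):
            sample = rng.choice(annual_maxima, size=len(annual_maxima))
            c, loc, scale = genextreme.fit(sample)
            quantiles.append(genextreme.ppf(p, c, loc=loc, scale=scale))
        return np.quantile(quantiles, [alpha / 2.0, 1.0 - alpha / 2.0])
    ```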

  3. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximately exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion to allow higher-frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies, and hence to improve the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters, and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small to moderate sized (4.3 ≤Mw≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events. By reducing the magnitude threshold at which robust source parameters can be determined, this methodology can improve the accuracy, especially at shallow depths, of seismo-tectonic studies, seismic hazard assessments, and seismic discrimination investigations.

  4. Catchment-scale snow depth monitoring with balloon photogrammetry

    NASA Astrophysics Data System (ADS)

    Durand, M. T.; Li, D.; Wigmore, O.; Vanderjagt, B. J.; Molotch, N. P.; Bales, R. C.

    2016-12-01

    Field campaigns and permanent in-situ facilities provide extensive measurements of snowpack properties at catchment (or smaller) scales, and have consistently improved our understanding of snow processes and the estimation of snow water resources. However, snow depth, one of the most important snow states, has been measured almost entirely with discrete point-scale samplings in field measurements; spatiotemporally continuous snow depth measurements are nearly nonexistent, mainly due to the high cost of airborne flights and bans on Unmanned Aerial Systems in many areas (e.g. in all the national parks). In this study, we estimate spatially continuous snow depth from photogrammetric reconstruction of aerial photos taken from a weather balloon. The study was conducted in a 0.2 km2 watershed in Wolverton, Sequoia National Park, California. We tied a point-and-shoot camera to a helium-inflated weather balloon to take aerial images; the camera was scripted to automatically capture images every 3 seconds and to record the camera position and orientation at the imaging times using a built-in GPS. With the 2D images of the snow-covered ground and the camera position and orientation data, the 3D coordinates of the snow surface were reconstructed at 10 cm resolution using the photogrammetry software PhotoScan. Similar measurements were taken of the snow-free ground after snowmelt, and the snow depth was estimated from the difference between the snow-on and snow-off measurements. Comparing the photogrammetric snow depths with the 32 manually measured depths, taken at the same time as the snow-on balloon flight, we find the RMSE of the photogrammetric snow depth is 7 cm, which is 2% of the long-term peak snow depth in the study area. This study suggests that balloon photogrammetry is a repeatable, economical, simple, and environmentally friendly method to continuously monitor snow at small scales. Spatiotemporally continuous snow depth could be regularly measured in future field campaigns to supplement traditional snow property observations. In addition, since the process of collecting and processing balloon photogrammetry data is straightforward, the photogrammetric snow depth could be shared with the public in real time using our cloud platform that is currently under development.
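
    The depth retrieval itself is a difference of co-registered surface models. A minimal sketch, assuming the snow-on and snow-off reconstructions have already been gridded to a common datum:

    ```python
    import numpy as np

    def snow_depth_map(dem_snow_on, dem_snow_off, min_depth=0.0):
        """Snow depth (m) as snow-on minus snow-off elevation; small
        negative values from co-registration noise are clipped to zero."""
        return np.clip(dem_snow_on - dem_snow_off, min_depth, None)

    on = np.array([[1.92, 2.10], [1.45, 1.38]])     # snow-on surface (m)
    off = np.array([[1.50, 1.55], [1.40, 1.42]])    # snow-free ground (m)
    print(snow_depth_map(on, off))
    ```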

  5. An empirical method to estimate shear wave velocity of soils in the New Madrid seismic zone

    USGS Publications Warehouse

    Wei, B.-Z.; Pezeshk, S.; Chang, T.-S.; Hall, K.H.; Liu, Huaibao P.

    1996-01-01

    In this study, a set of charts is developed to estimate the shear wave velocity of soils in the New Madrid seismic zone (NMSZ) using standard penetration test (SPT) N values and soil depths. Laboratory dynamic test results of soil samples collected from the NMSZ showed that the shear wave velocity of soils is related to the void ratio and the effective confining pressure applied to the soils. The void ratio of soils can be estimated from the SPT N values, and the effective confining pressure depends on the depth of the soils. Therefore, the shear wave velocity of soils can be estimated from the SPT N value and the soil depth. To make the methodology practical, two corrections should be made. First, field SPT N values must be adjusted to a unified (corrected) SPT N value to account for the effects of overburden pressure and equipment. Second, the effect of the water table on the effective overburden pressure of the soils must be considered. To verify the methodology, shear wave velocities of five sites in the NMSZ were estimated and compared with those obtained from field measurements. The comparison shows that our approach and the field tests are consistent to within 15%. Thus, the method developed in this study is useful for dynamic studies and practical design in the NMSZ region. Copyright © 1996 Elsevier Science Limited.
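
    Charts of this kind typically encode a power law in corrected blow count and depth (depth standing in for effective confining pressure). A generic sketch; the coefficients are placeholders for illustration, not the NMSZ calibration:

    ```python
    def vs_from_spt(n_corrected, depth_m, a=100.0, b=0.3, c=0.2):
        """Shear wave velocity (m/s) as Vs = a * N^b * z^c, a common
        empirical form; a, b, c here are illustrative only."""
        return a * n_corrected**b * depth_m**c

    for n, z in [(10, 3.0), (25, 10.0), (40, 20.0)]:
        print(f"N={n}, z={z} m -> Vs ~ {vs_from_spt(n, z):.0f} m/s")
    ```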

  6. Tamarack and black spruce adventitious root patterns are similar in their ability to estimate organic layer depths in northern temperate forests

    Treesearch

    Timothy J. Veverica; Evan S. Kane; Eric S. Kasischke

    2012-01-01

    Organic layer consumption during forest fires is hard to quantify. These data suggest that the adventitious root methods developed for reconstructing organic layer depths following wildfires in boreal black spruce forests can also be applied to mixed tamarack forests growing in temperate regions with glacially transported soils.

  7. Evaluation of SNODAS snow depth and snow water equivalent estimates for the Colorado Rocky Mountains, USA

    USGS Publications Warehouse

    Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey

    2012-01-01

    The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km2 resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km2 scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R2 = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R2 of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.

  8. Image Restoration for Fluorescence Planar Imaging with Diffusion Model

    PubMed Central

    Gong, Yuzhu; Li, Yang

    2017-01-01

    Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes because of photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme is conceived from a reconstruction method used in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane, named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of FPI images caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843

  9. An entropy-based method for determining the flow depth distribution in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.

    2013-08-01

    A methodology is developed for determining the bathymetry of river cross sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard aircraft or satellites.
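
    The entropic relation between mean and maximum velocity that the method couples with the depth distribution can be written compactly. A minimal sketch is below; the depth-distribution parameter estimation itself is omitted, and the entropy parameter value used in the example is a placeholder, not a value from the paper:

    ```python
    import math

    def chiu_phi(m):
        """Chiu's entropic ratio of mean to maximum velocity, for m > 0."""
        return math.exp(m) / (math.exp(m) - 1.0) - 1.0 / m

    def discharge(u_max, flow_area, m=2.0):
        """Discharge (m3/s) from surface (maximum) velocity and flow area.

        m is the site-specific entropy parameter; 2.0 is a placeholder.
        """
        return chiu_phi(m) * u_max * flow_area
    ```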

  10. Effect of depth on shear-wave elastography estimated in the internal and external cervical os during pregnancy

    PubMed Central

    Hernandez-Andrade, Edgar; Aurioles-Garibay, Alma; Garcia, Maynor; Korzeniewski, Steven J.; Schwartz, Alyse G.; Ahn, Hyunyoung; Martinez-Varea, Alicia; Yeo, Lami; Chaiworapongsa, Tinnakorn; Hassan, Sonia S.; Romero, Roberto

    2014-01-01

    Aim To investigate the effect of depth on cervical shear-wave elastography. Methods Shear-wave elastography was applied to estimate the propagation velocity of the acoustic force impulse (shear wave) in the cervix of 154 pregnant women at 11-36 weeks of gestation. Shear-wave speed (SWS) was evaluated in cross-sectional views of the internal and external cervical os in five regions of interest: anterior, posterior, lateral right, lateral left, and endocervix. The distance from the center of the ultrasound transducer to the center of each region of interest was registered. Results In all regions, SWS decreased significantly with gestational age (p=0.006). In the internal os, SWS was similar among the anterior, posterior, and lateral regions, and lower in the endocervix. In the external os, the endocervix and anterior regions showed similar SWS values, lower than those of the posterior and lateral regions. In the endocervix, these differences remained significant after adjustment for depth, gestational age, and cervical length. SWS estimates in all regions of the internal os were higher than those of the external os, suggesting denser tissue. Conclusion The depth from the ultrasound probe to different regions of the cervix did not significantly affect the SWS estimates. PMID:25029081

  11. Estimating the composition of hydrates from a 3D seismic dataset near Penghu Canyon on Chinese passive margin offshore Taiwan

    NASA Astrophysics Data System (ADS)

    Chi, Wu-Cheng

    2016-04-01

    A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients under the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method of Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity, using a 3D seismic dataset offshore SW Taiwan to test the method. Unlike previous studies, we considered 3D topographic effects using finite element modelling, as well as depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth, as found in nearby core samples, we successfully used the phase boundary for a gas hydrate of 99% methane and 1% ethane to derive a sub-bottom depth vs. temperature plot that is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.

  12. An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J.; Rozhkov, M.; Baker, B.

    2016-12-01

    According to the Protocol to the CTBT, the International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist State Parties in identifying the source of a specific event. Determining a seismic event's source mechanism and depth is part of these tasks. This is typically done through a strategic linearized inversion of the waveforms for a complete or partial set of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. We show preliminary results using the latter approach, from an improved software design running on a moderately powered computer. In this development we tried to comply with the different modes of the CTBT monitoring regime: cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough for both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate a posteriori uncertainties. An HDF5-based pre-packaging of Green's functions allows much greater flexibility in utilizing different software packages and methods for computation; further additions will allow rapid use of Instaseis/AXISEM full-waveform synthetics added to the precomputed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates were determined for the DPRK 2009, 2013, and 2016 events and for shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions, using a recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective. Moment tensors for the DPRK events show isotropic percentages greater than 50%, and depth estimates for the DPRK events range from 1.0 to 1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters provide robustness to the solution.

  13. Estimation of the depth of faulting in the northeast margin of Argyre basin (Mars) by structural analysis of lobate scarps

    NASA Astrophysics Data System (ADS)

    Herrero-Gil, Andrea; Ruiz, Javier; Egea-González, Isabel; Romeo, Ignacio

    2017-04-01

    Lobate scarps are tectonic structures considered the topographic expression of thrust faults. For this study we chose three large lobate scarps (Ogygis Rupes, Bosporos Rupes, and a third, unnamed one) located in Aonia Terra, in the southern hemisphere of Mars near the northeast margin of the Argyre impact basin. These lobate scarps strike parallel to the edge of Thaumasia in this area, showing a roughly arcuate to linear form and an asymmetric cross section with a steep frontal scarp and a gently dipping back scarp. The asymmetry of the cross sections suggests that the three lobate scarps were generated by ESE-vergent thrust faults. Two complementary methods were used to analyze the faults underlying these lobate scarps, based on Mars Orbiter Laser Altimeter data and the available Mars imagery: (i) analyzing topographic profiles together with horizontal shortening estimates from cross-cut craters to create balanced cross sections on the basis of thrust fault propagation folding [1]; and (ii) using a forward mechanical dislocation method [2], which predicts fault geometry by comparing model outputs with the real topography. The objective is to obtain fault geometry parameters: minimum values for the horizontal offset, dip angle, and depth of faulting of each underlying fault. By comparing the results obtained by both methods, we estimate a preliminary depth of faulting between 15 and 26 kilometers for this zone between Thaumasia and the Argyre basin. The significant sizes of the faults underlying these three lobate scarps suggest that their detachments are located at a major rheological change. Estimates of the depth of faulting for similar lobate scarps on Mars or Mercury [3] have been associated with the depth of the brittle-ductile transition. [1] Suppe (1983), Am. J. Sci., 283, 648-721; Seeber and Sorlien (2000), Geol. Soc. Am. Bull., 112, 1067-1079. [2] Toda et al. (1998), JGR, 103, 24543-24565. [3] e.g., Schultz and Watters (2001), Geophys. Res. Lett., 28, 4659-4662; Ruiz et al. (2008), EPSL, 270, 1-12; Egea-Gonzalez et al. (2012), PSS, 60, 193-198; Mueller et al. (2014), EPSL, 408, 100-109.

  14. KERNELHR: A program for estimating animal home ranges

    USGS Publications Warehouse

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
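
    A minimal sketch of fixed-kernel home-range estimation of the kind KERNELHR implements: a Gaussian kernel density estimate over relocation points, and the density threshold enclosing a given fraction of the utilization distribution. This is a generic illustration, not KERNELHR's actual code:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def home_range_density(x, y, grid_size=200):
        """Fixed-kernel utilization distribution from animal relocation points."""
        kde = gaussian_kde(np.vstack([x, y]))      # bandwidth: Scott's rule
        gx = np.linspace(x.min(), x.max(), grid_size)
        gy = np.linspace(y.min(), y.max(), grid_size)
        xx, yy = np.meshgrid(gx, gy)
        density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
        return gx, gy, density

    def ud_threshold(density, level=0.95):
        """Density value whose superlevel set holds `level` of the UD mass."""
        d = np.sort(density.ravel())[::-1]
        cum = np.cumsum(d) / d.sum()
        return d[np.searchsorted(cum, level)]
    ```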

  15. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

    Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels from data on wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) interpolation than with kriging Gaussian, kriging spherical, or cokriging interpolation when analyzing data from wells across the entire state (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas, or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001), regardless of interpolation method. In regression analysis, the best models were obtained when well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW outperformed kriging in every area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
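
    A compact sketch of the IDW estimator and the leave-one-out cross-validation used for the comparison; the power parameter and array names are illustrative:

    ```python
    import numpy as np

    def idw_estimate(x, y, xs, ys, values, power=2.0):
        """Inverse-distance-weighted estimate at (x, y) from known wells.

        xs, ys, values are 1-D arrays of well coordinates and concentrations.
        """
        d = np.hypot(xs - x, ys - y)
        if np.any(d == 0):
            return values[np.argmin(d)]        # exact hit on a known well
        w = 1.0 / d ** power
        return np.sum(w * values) / np.sum(w)

    def loocv_correlation(xs, ys, values, power=2.0):
        """Leave-one-out cross-validation: correlation of measured vs estimated."""
        est = np.array([
            idw_estimate(xs[i], ys[i],
                         np.delete(xs, i), np.delete(ys, i),
                         np.delete(values, i), power)
            for i in range(len(values))
        ])
        return np.corrcoef(values, est)[0, 1]
    ```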

  16. Estimation of the displacements among distant events based on parallel tracking of events in seismic traces under uncertainty

    NASA Astrophysics Data System (ADS)

    Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.

    2018-07-01

    The method we propose in this paper seeks to estimate displacements of interfaces among strata related to seismic reflection events, relative to the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey, close to a well. However, the nature of the seismic data introduces uncertainty into the results, so we perform an uncertainty analysis using the standard deviations obtained from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace from an empirical wavelet and the sonic log of the well close to the second seismic trace, and then relate the events in the seismic traces to depth along the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of the parallel cross-correlation, primarily those from the procedures used to integrate the seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all the seismic traces between two distant points of interest in order to correlate strata in the subsurface.
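
    The lag between matching events in two traces can be estimated with a straightforward cross-correlation; a minimal sketch (the windowing, wavelet extraction, and well-tie steps of the paper are omitted):

    ```python
    import numpy as np

    def displacement_estimate(trace_ref, trace_target, dt):
        """Time shift (s) of the best-matching event via cross-correlation.

        A positive lag means the event arrives later in trace_target.
        """
        xcorr = np.correlate(trace_target, trace_ref, mode="full")
        lag = np.argmax(xcorr) - (len(trace_ref) - 1)
        return lag * dt

    def shift_uncertainty(shifts):
        """Standard deviation over repeated windowed experiments (see abstract)."""
        return float(np.std(shifts, ddof=1))
    ```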

  17. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used to estimate the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, to estimate source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are then employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.

  18. A novel shape from focus method based on 3D steerable filters for improved performance on treating textureless region

    NASA Astrophysics Data System (ADS)

    Fan, Tiantian; Yu, Hongbin

    2018-03-01

    A novel shape-from-focus method combining 3D steerable filters, offering improved performance on textureless regions, is proposed in this paper. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially when treating textureless objects. Improved performance in depth measurement is demonstrated in both simulation and experimental results.
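
    For contrast with the proposed filter-based measure, a baseline shape-from-focus pipeline looks like the sketch below: compute a per-pixel focus measure for every frame of the focal stack and take the argmax. The Laplacian-energy measure here is a conventional stand-in, not the 3D steerable filter of the paper:

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def depth_from_focus(stack, window=9):
        """Baseline shape from focus: per-pixel argmax of a focus measure.

        stack: (n_frames, H, W) focal stack, one frame per focus setting.
        Returns the frame index of best focus at each pixel.
        """
        focus = np.empty(stack.shape, dtype=float)
        for i, frame in enumerate(stack):
            # Local energy of the Laplacian as a simple focus measure.
            focus[i] = uniform_filter(laplace(frame.astype(float)) ** 2, window)
        return np.argmax(focus, axis=0)
    ```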

  19. Pulse shape discrimination and classification methods for continuous depth of interaction encoding PET detectors.

    PubMed

    Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R

    2012-10-21

    In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor coating modifies the pulse shape to allow continuous DOI determination, but the relationship between pulse shape and DOI is complex, so we are interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constitute a unique signature for each pulse. The DOI of individual events was predicted using LDA with Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features, and the predicted DOIs were compared to the true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust in predicting the DOI of pulses with higher statistical noise due to low light levels (interaction depths farther from the photodetector face). This indicates that Laguerre-LDA may be more suitable for DOI estimation in smaller crystals, where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors.
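
    The classification step itself is standard discriminant analysis; a minimal sketch using scikit-learn, with synthetic stand-ins for the Laguerre-coefficient features (feature dimensions and class counts are illustrative):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_doi_classifier(features, depths):
        """Fit an LDA classifier mapping pulse-shape features to DOI classes.

        features: one row of Laguerre coefficients (or DCI values) per pulse;
        depths: the known irradiation depth class of each training pulse.
        """
        lda = LinearDiscriminantAnalysis()
        lda.fit(features, depths)
        return lda

    # Synthetic stand-ins for real calibration pulses:
    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(500, 8))        # 8 Laguerre coefficients/pulse
    train_y = rng.integers(0, 5, size=500)     # 5 irradiation depths
    model = train_doi_classifier(train_x, train_y)
    predicted_depth_class = model.predict(train_x[:1])
    ```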

  1. Using the ratio of the magnetic field to the analytic signal of the magnetic gradient tensor in determining the position of simple shaped magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Karimi, Kurosh; Shirzaditabar, Farzad

    2017-08-01

    The analytic signal of the magnitude of the magnetic field components and of its first derivatives has been employed to locate magnetic structures that can be considered point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies in noisy settings with acceptable accuracy, and they are also inexact in determining the depth of deep anomalies. In noisy cases, and in places other than the poles, the maximum points of the magnitude of the magnetic vector components and of Az are not located exactly above 3D bodies; consequently, the horizontal location estimates of the bodies are subject to error. Here, the previous methods are modified and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method that is resistant to noise, based on a 'depths mean', is developed. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline is accurate, in accordance with the result of the half-width method.

  2. Comparison of Soil Quality Index Using Three Methods

    PubMed Central

    Mukherjee, Atanu; Lal, Rattan

    2014-01-01

    Assessment of management-induced changes in soil quality is important for sustaining high crop yields. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established, and thus a strong need exists for developing a user-friendly and credible SQI through comparison of the various available methods. Therefore, the objective of this article is to compare three widely used methods of estimating SQI using data collected from 72 soil samples from three on-farm study sites in Ohio. An additional challenge lies in establishing a correlation between crop yield and SQI calculated either depth-wise or over combined soil layers, as a standard methodology is not yet available and has not been given much attention to date. Predominant soils of the study included one organic (Mc) and two mineral (CrB, Ko) soils. The three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged from 0 to 0.9 (1 being the maximum SQI). In general, SQIs did not differ significantly among depths under any method, suggesting that soil quality did not differ significantly with depth at the studied sites. Additionally, the data indicate that SQI-3 was most strongly correlated with crop yield, with correlation coefficients ranging from 0.74 to 0.78. All three SQIs were significantly correlated with each other (r = 0.92–0.97) and with crop yield (r = 0.65–0.79). Separate analyses by crop variety revealed low correlations, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
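
    The additive indices are simple to express; a sketch of SQI-1 and SQI-2, assuming indicator values already scored to [0, 1] (the scoring functions and the PCA-based SQI-3 are omitted):

    ```python
    def weighted_additive_sqi(scores, weights):
        """Weighted additive SQI: sum of scored indicators times their weights.

        scores  -- indicator values already scored to [0, 1]
        weights -- importance weights summing to 1 (e.g., from expert
                   opinion or PCA loadings)
        """
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(s * w for s, w in zip(scores, weights))

    def simple_additive_sqi(scores):
        """Simple additive SQI is the special case of equal weights."""
        return sum(scores) / len(scores)
    ```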

  3. Accuracy of snow depth estimation in mountain and prairie environments by an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Harder, Phillip; Schirmer, Michael; Pomeroy, John; Helgason, Warren

    2016-11-01

    Quantifying the spatial distribution of snow is crucial to predicting and assessing its water resource potential and to understanding land-atmosphere interactions. High-resolution remote sensing of snow depth has been limited to terrestrial and airborne laser scanning and, more recently, to the application of structure-from-motion (SfM) techniques to airborne (manned and unmanned) imagery. In this study, photography from a small unmanned aerial vehicle (UAV) was used with SfM to generate digital surface models (DSMs) and orthomosaics of snow cover at a cultivated agricultural Canadian prairie site and a sparsely vegetated Rocky Mountain alpine ridgetop site. The accuracy and repeatability of this method in quantifying snow depth, changes in depth, and its spatial variability were assessed for different terrain types over time. Root mean square errors in snow depth estimation from differencing snow-covered and non-snow-covered DSMs were 8.8 cm for a short prairie grain stubble surface, 13.7 cm for a tall prairie grain stubble surface, and 8.5 cm for an alpine mountain surface. The technique provided useful information on maximum snow accumulation and snow-covered area depletion at all sites, while temporal changes in snow depth could also be quantified at the alpine site owing to the deeper snowpack and consequent higher signal-to-noise ratio. The application of SfM to UAV photographs returns meaningful information in areas with mean snow depth > 30 cm, but direct observation of the depletion of shallow snowpacks with this method is not feasible. Accuracy varied with surface characteristics, sunlight, and wind speed during the flight, with the most consistent performance found for wind speeds < 10 m s⁻¹, clear skies, high sun angles, and surfaces with negligible vegetation cover.

  4. Phased-array vector velocity estimation using transverse oscillations.

    PubMed

    Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A

    2012-12-01

    A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to extend the TO method to a phased-array geometry and to broaden its potential clinical applicability. A phased-array transducer has a smaller footprint and a larger field of view than a linear array and is therefore better suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method on a phased-array transducer for vector velocity estimation is evaluated through simulations, and flow-rig measurements were acquired using an experimental scanner. The vast number of calculations needed for flow simulations makes optimization of the TO fields a cumbersome process; therefore, three performance metrics are proposed, calculated from the complex TO spectrum of the combined TO fields. It is hypothesized that these performance metrics are related to the performance of the velocity estimates. The simulations show squared correlation values ranging from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because the performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation in both simulations and measurements. For simulations of parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust, with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results: at a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated at 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle between 89° and 90°. For steering angles up to ±20°, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use with a phased-array transducer and that 2-D vector velocity estimation is possible down to a depth of 15 cm.

  5. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse-modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave-breaking information is essential to reduce prediction errors; in many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when uncertainty in the inputs (e.g., depth and tuning parameters) was accounted for. Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with the results of previous studies, but their uncertainty estimates also explain previously reported variations in the model parameters.

  6. Underwater image enhancement through depth estimation based on random forest

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han

    2017-11-01

    Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
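
    A sketch of the two pieces the abstract describes: a random-forest regressor from per-pixel features to transmission, and the inversion of the standard underwater image-formation model I = J·t + B·(1−t) used for restoration. The feature layout and parameter values are illustrative:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_transmission_model(features, transmission):
        """Fit a random forest mapping per-pixel features to light transmission.

        features: (n_samples, n_features) rows of RGB, luminance, colour
        difference, blurriness, and dark-channel values; transmission in (0, 1].
        """
        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        rf.fit(features, transmission)
        return rf

    def restore_pixel(observed, transmission, background_light):
        """Invert the image-formation model I = J*t + B*(1 - t) for J."""
        t = max(transmission, 0.1)             # floor avoids amplifying noise
        return (observed - background_light * (1.0 - t)) / t
    ```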

  7. Estimating needle tip deflection in biological tissue from a single transverse ultrasound image: application to brachytherapy.

    PubMed

    Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi

    2016-07-01

    This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue system as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs, whose stiffness coefficients are found using a pattern search algorithm that requires only the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error of about 2 mm for images acquired at 35% or more of the maximum insertion depth, decreasing to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during the early stages of insertion are used to predict the needle deflection further along the insertion process. Because the method can predict needle deflection from the observation of deflection at a single point, the ultrasound probe can be maintained at the same position during insertion of the needle, which avoids the complications of tissue deformation caused by motion of the ultrasound probe.

  8. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access, and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor's temperature threshold must be estimated, which in turn depends on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimate can currently only be achieved through modeling. However, the current state-of-the-art model for estimating temperature at the receptor cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach that accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model proposed here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  9. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes

    PubMed Central

    Sampaio, Renato Coral; Vargas, José A. R.

    2018-01-01

    The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and of closing the control loop on the arc, which presents adverse environmental conditions. To address this problem, this work proposes a system that captures the welding variables and sends stimuli to a conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, allowing weld bead geometry estimation under open-loop control. Dynamic models of the weld bead depth and width estimators are implemented based on the fusion of thermographic data, welding current, and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line using data from a novel algorithm developed to extract features from the infrared image, a laser profilometer that measures the bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut in the weld bead. The estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data and to train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments. PMID:29570698

  10. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes.

    PubMed

    Bestard, Guillermo Alvarez; Sampaio, Renato Coral; Vargas, José A R; Alfaro, Sadek C Absi

    2018-03-23

    The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and of closing the control loop on the arc, which presents adverse environmental conditions. To address this problem, this work proposes a system that captures the welding variables and sends stimuli to a conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, allowing weld bead geometry estimation under open-loop control. Dynamic models of the weld bead depth and width estimators are implemented based on the fusion of thermographic data, welding current, and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line using data from a novel algorithm developed to extract features from the infrared image, a laser profilometer that measures the bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut in the weld bead. The estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data and to train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.
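
    A minimal sketch of the fusion estimator described above, trained off-line on thermographic features plus welding current and voltage; the network architecture here is a placeholder, not the one deployed on the FPGA:

    ```python
    from sklearn.neural_network import MLPRegressor

    def train_geometry_estimator(features, geometry):
        """Fit an MLP mapping fused sensor features to weld bead geometry.

        features: rows of [thermographic features..., current, voltage]
        geometry: rows of [depth, width] measured off-line (profilometer
                  for width, longitudinal-cut image processing for depth)
        """
        mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                           random_state=0)
        mlp.fit(features, geometry)            # multi-output regression
        return mlp
    ```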

  11. Estimates of inorganic nitrogen wet deposition from precipitation for the conterminous United States, 1955-84

    USGS Publications Warehouse

    Gronberg, Jo Ann M.; Ludtke, Amy S.; Knifong, Donna L.

    2014-01-01

    The U.S. Geological Survey's National Water-Quality Assessment program requires nutrient input information for national and regional assessments of water quality, and historical data are needed to lengthen the record for assessing water-quality trends. This report provides estimates of inorganic nitrogen deposition from precipitation for the conterminous United States for 1955–56, 1961–65, and 1981–84. The estimates were derived from ammonium, nitrate, and inorganic nitrogen concentrations in atmospheric wet deposition and from precipitation-depth data. This report documents the sources of these data and the methods used to estimate inorganic nitrogen deposition. Tabular datasets, including the analytical results, precipitation depths, and calculated site-specific precipitation-weighted concentrations, and raster datasets of nitrogen from wet deposition are provided as appendixes.

  12. Estimation of effective soil hydraulic properties at field scale via ground albedo neutron sensing

    NASA Astrophysics Data System (ADS)

    Rivera Villarreyes, C. A.; Baroni, G.; Oswald, S. E.

    2012-04-01

    Upscaling of soil hydraulic parameters is a major challenge in hydrological research, especially in model applications of water and solute transport processes. In this context, numerous attempts have been made to optimize soil hydraulic properties using observations of state variables such as soil moisture. In most cases, however, the observations are limited to the point scale and then transferred to the model scale. In this way, inherent small-scale soil heterogeneities and the non-linearity of the dominant processes introduce sources of error that can produce significant misinterpretation of hydrological scenarios and unrealistic predictions. On the other hand, remotely sensed soil moisture over large areas is a promising approach for deriving effective soil hydraulic properties over the observation footprint, but it is still limited to the soil surface. In this study we present a new methodology to derive soil moisture at the intermediate scale between point-scale observations and remotely sensed estimates; the data are then used to estimate effective soil hydraulic parameters. In particular, ground albedo neutron sensing (GANS) was used to derive soil water content non-invasively over a footprint of ca. 600 m diameter and a depth of a few decimeters. This approach is based on the crucial role of hydrogen, compared with other landscape materials, as a neutron moderator. Because the natural neutron intensity measured aboveground depends on soil water content, the vertical footprint of the GANS method, i.e., its penetration depth, does as well. First, this study was designed to evaluate the dynamics of the GANS vertical footprint and derive a mathematical model for its prediction. To test GANS soil moisture and its penetration depth, the measurements were accompanied by other soil moisture measurements (FDR) at 5, 20, and 40 cm depths across the GANS horizontal footprint in a sunflower field (Brandenburg, Germany). Second, a HYDRUS-1D model was set up with monitored crop height and meteorological variables as input over a four-month period, and the parameter estimation (PEST) software was coupled to HYDRUS-1D to calibrate soil hydraulic properties against soil water content data. Third, effective soil hydraulic properties were derived from GANS soil moisture. Our observations show the potential of GANS to compensate for the lack of information at the intermediate scale, both for soil water content estimation and for effective soil properties. Despite the differing measurement volumes, GANS-derived soil water content compared well quantitatively with the FDR measurements at several depths: for one-hour estimates, the root mean square error was 0.019, 0.029, and 0.036 m3/m3 at 5 cm, 20 cm, and 40 cm depth, respectively. With respect to soil hydraulic properties, this first application of the GANS method succeeded, and its estimates were comparable to those derived by other approaches.
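
    The abstract does not give the counts-to-moisture conversion; the calibration function commonly used in the cosmic-ray/GANS literature (Desilets et al., 2010) is shown below as an illustration, with n0 a site calibration constant determined during field calibration:

    ```python
    def soil_moisture_from_neutrons(n, n0, a0=0.0808, a1=0.372, a2=0.115):
        """Volumetric soil moisture (m3/m3) from moderated neutron counts.

        Standard cosmic-ray neutron calibration function; n is the measured
        count rate and n0 the count rate over dry soil at the same site.
        """
        return a0 / (n / n0 - a1) - a2
    ```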

  13. Estimating Fuel Bed Loadings in Masticated Areas

    Treesearch

    Sharon Hood; Ros Wu

    2006-01-01

    Masticated fuel treatments that chop small trees, shrubs, and dead woody material into smaller pieces to reduce fuel bed depth are used increasingly as a mechanical means to treat fuels. Fuel loading information is important for monitoring changes in fuels. The commonly used planar intercept method, however, may not correctly estimate fuel loadings because masticated fuels...

  14. Fuel Load (FL)

    Treesearch

    Duncan C. Lutes; Robert E. Keane

    2006-01-01

    The Fuel Load method (FL) is used to sample dead and down woody debris, determine the depth of the duff/litter profile, estimate the proportion of litter in the profile, and estimate total vegetative cover and dead vegetative cover. Down woody debris (DWD) is sampled using the planar intercept technique, based on the methodology developed by Brown (1974). Pieces of dead...

  15. Issues to Be Considered in Counting Burrows as a Measure of Atlantic Ghost Crab Populations, an Important Bioindicator of Sandy Beaches

    PubMed Central

    Pombo, Maíra; Turra, Alexander

    2013-01-01

    The use of indirect estimates of ghost-crab populations to assess beach disturbance has several advantages, including non-destructiveness, ease, and low cost, although this strategy may add some degree of noise to estimates of population parameters. Resolving these shortcomings may allow wider use of these populations as an indicator of differences in quality among beaches. This study analyzed to what extent the number of crab burrows may diverge from the number of animals, considering beach morphology, burrow depth, and signs of occupation as contributing factors or indicators of a higher or lower occupation rate. We estimated the occupation rate of crab burrows on nine low-use beaches previously categorized as dissipative, intermediate, or reflective. Three random 2-m-wide transects were laid perpendicular to the shoreline, where burrows were counted and excavated to search for crabs; the depth of each burrow and signs of recent activity around it were also recorded. The occupation rate differed among beaches, but morphodynamics was not identified as a grouping factor. A considerable number of burrows that lacked signs of recent activity proved to be occupied, and the proportion of such burrows also differed among beaches. Virtually all burrows less than 10 cm deep were unoccupied; the occupation rate tended to increase gradually to a burrow depth of 20–35 cm. Other methods (water, smoke, and traps) were applied to measure the effectiveness of excavation as a method for burrow counts; traps and excavation proved to be the best methods. These observations illustrate the possible degree of unreliability of comparisons among beaches based on indirect measures. Combining burrow depth assessment with surrounding signs of occupation proved to be a useful tool to minimize biases. PMID:24376748

  16. Methodology and Estimates of Scour at Selected Bridge Sites in Alaska

    USGS Publications Warehouse

    Heinrichs, Thomas A.; Kennedy, Ben W.; Langley, Dustin E.; Burrows, Robert L.

    2001-01-01

    The U.S. Geological Survey estimated scour depths at 325 bridges in Alaska as part of a cooperative agreement with the Alaska Department of Transportation and Public Facilities. The department selected these sites from approximately 806 State-owned bridges as potentially susceptible to scour during extreme floods. Pier scour and contraction scour were computed for the selected bridges by using methods recommended by the Federal Highway Administration. The U.S. Geological Survey used a four-step procedure to estimate scour: (1) Compute magnitudes of the 100- and 500-year floods. (2) Determine cross-section geometry and hydraulic properties for each bridge site. (3) Compute the water-surface profile for the 100- and 500-year floods. (4) Compute contraction and pier scour. This procedure is unique because the cross sections were developed from existing data on file to make a quantitative estimate of scour. This screening method has the advantage of providing scour depths and bed elevations for comparison with bridge-foundation elevations without the time and expense of a field survey. Four examples of bridge-scour analyses are summarized in the appendix.
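
    The Federal Highway Administration's recommended pier-scour computation is commonly taken to be the HEC-18 (CSU) equation; a sketch under that assumption, with the correction factors left as inputs (the report itself should be consulted for the factors actually applied):

    ```python
    def pier_scour_hec18(y1, v1, a, k1=1.0, k2=1.0, k3=1.1, k4=1.0, g=9.81):
        """HEC-18 (CSU) local pier scour depth ys (m).

        y1: approach flow depth (m); v1: approach velocity (m/s);
        a: pier width (m); k1..k4: correction factors for pier nose shape,
        angle of attack, bed condition, and bed-material armouring.
        """
        fr1 = v1 / (g * y1) ** 0.5             # approach Froude number
        return 2.0 * y1 * k1 * k2 * k3 * k4 * (a / y1) ** 0.65 * fr1 ** 0.43
    ```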

  17. Application of Microtremor Array Analysis to Estimate the Bedrock Depth in the Beijing Plain area

    NASA Astrophysics Data System (ADS)

    Xu, P.; Ling, S.; Liu, J.; Su, W.

    2013-12-01

    With the rapid expansion of large cities around the world, urban geological surveys provide key information for resource development and urban construction. Among the major cities of the world, China's capital, Beijing, is among the largest possessing complex geological structures. The urban geological survey and study of Beijing involves the following aspects: (1) estimating the thickness of the Cenozoic deposits; (2) mapping the three-dimensional structure of the underlying bedrock, as well as its relation to faults and tectonic settings; and (3) assessing the capacity of the city's geological resources to support its urban development and operational safety. The geological study of Beijing was also intended to provide basic data for urban development and for the appraisal of engineering and environmental geological conditions, as well as underground space resources. In this work, we utilized the microtremor exploration method to estimate the bedrock depth, in order to delineate the geological interfaces and improve the accuracy of the bedrock depth map. The microtremor observation sites were located in the Beijing Plain area, where traditional geophysical or geological survey methods were not effective because of the heavy traffic and dense buildings of the highly populated urban area. The microtremor exploration method is a Rayleigh-wave inversion technique that extracts the phase velocity dispersion curve from the vertical component of microtremor array records using the spatial autocorrelation (SPAC) method and then inverts for the shear-wave velocity structure. A triple-circular array was adopted for acquiring microtremor data, with the observation radius ranging from 40 to 300 m, adjusted according to the geological conditions (depth of the bedrock). The collected microtremor data are used (1) to estimate the phase velocities of Rayleigh waves from the vertical components of the microtremor records using the SPAC method, and (2) to invert for the S-wave velocity structure. Our inversion results show thick Cenozoic sedimentation in the Fengtai Sag: the bedrock depth is 1510 m at C04-1 and 1575 m at D04-1. In contrast, the Cenozoic sediments are only 193 m thick at E12-1 and 236 m thick at E12-3, indicating very thin Cenozoic sedimentation on the Laiguangying High structural unit. The bedrock depth at the Houshayu Sag, 691 m at E16-1 and 875 m at F16-1, falls somewhere in the middle. The difference between the bedrock depth at the Fengtai Sag and that at the Laiguangying High is as much as 1300 m, interpreted as the result of slip along the Taiyanggong fault. The Nankou-Sunhe faulting, on the other hand, resulted in a bedrock depth difference of approximately 500 m between the Laiguangying High and the Houshayu Sag to the northeast. The bedrock depths and their differences among the various tectonic units in the Beijing Plain area outlined in this article are consistent with both the existing geological data and previous interpretations. This information is very useful for understanding the geological structures, regional tectonics, and practical geotechnical problems involved in civil geological engineering in and around Beijing.
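
    For a circular array, the SPAC step reduces to matching the azimuthally averaged autocorrelation coefficient to a zeroth-order Bessel function, ρ(f, r) = J0(2πfr/c(f)). A minimal sketch of solving this for the phase velocity at one frequency (valid while the Bessel argument stays on the first branch, and assuming the bounds bracket the root):

    ```python
    import numpy as np
    from scipy.special import j0
    from scipy.optimize import brentq

    def phase_velocity_from_spac(rho_obs, freq, radius,
                                 c_min=100.0, c_max=3000.0):
        """Solve J0(2*pi*f*r/c) = rho_obs for the Rayleigh phase velocity c.

        rho_obs: azimuthally averaged spatial autocorrelation coefficient at
        frequency freq (Hz) for station separation radius (m).
        """
        f = lambda c: j0(2.0 * np.pi * freq * radius / c) - rho_obs
        return brentq(f, c_min, c_max)
    ```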

  18. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  19. Seasonal and developmental differences in blubber stores of beluga whales in Bristol Bay, Alaska using high-resolution ultrasound

    PubMed Central

    Cornick, Leslie A.; Quakenbush, Lori T.; Norman, Stephanie A.; Pasi, Coral; Maslyk, Pamela; Burek, Kathy A.; Goertz, Caroline E. C.; Hobbs, Roderick C.

    2016-01-01

    Diving mammals use blubber for a variety of structural and physiological functions, including buoyancy, streamlining, thermoregulation, and energy storage. Estimating blubber stores provides proxies for body condition, nutritional status, and health. Blubber stores may vary topographically within individuals, across seasons, and with age, sex, and reproductive status; therefore, a single full-depth blubber biopsy does not provide an accurate measure of blubber depth, and additional biopsies are limited because they result in open wounds. We examined high-resolution ultrasound as a noninvasive method for assessing blubber stores by sampling blubber depth at 11 locations on beluga whales in Alaska. Blubber mass was estimated as a proportion of body mass (40%, from the literature) and compared to an estimate based on volume, calculated from the ultrasound blubber depth measurements using a truncated-cone model. Blubber volume was converted to total and mass-specific blubber mass estimates based on the density of beluga blubber. There was no significant difference in mean total blubber mass between the two estimates (R2 = 0.88); however, body mass alone predicted only 68% of the variation in mass-specific blubber stores in juveniles, 7% for adults in the fall, and 33% for adults in the spring. Mass-specific blubber stores calculated from the ultrasound measurements were highly variable. Adults had significantly greater blubber stores in the fall (0.48 ± 0.02 kg/kg body mass) than in the spring (0.33 ± 0.02 kg/kg body mass). There was no seasonal effect in juveniles. High-resolution ultrasound is a more powerful, noninvasive method for assessing blubber stores in wild belugas, allowing precise measurements at multiple locations. PMID:29899579
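
    A sketch of the truncated-cone comparison described above: blubber volume for one body segment as the difference between an outer and an inner frustum. The segment geometry and the blubber density value are illustrative assumptions, not values from the study:

    ```python
    import math

    def truncated_cone_volume(r1, r2, h):
        """Volume of a frustum with end radii r1, r2 and length h."""
        return math.pi * h * (r1 ** 2 + r1 * r2 + r2 ** 2) / 3.0

    def blubber_mass_segment(girth1, girth2, length, blubber_depth,
                             density=920.0):
        """Blubber mass (kg) of one body segment: outer minus inner frustum.

        Girths (m) give outer radii; subtracting the ultrasound blubber depth
        gives the inner (core) radii. density (kg/m3) is an assumed value.
        """
        r1, r2 = girth1 / (2 * math.pi), girth2 / (2 * math.pi)
        outer = truncated_cone_volume(r1, r2, length)
        inner = truncated_cone_volume(max(r1 - blubber_depth, 0.0),
                                      max(r2 - blubber_depth, 0.0), length)
        return (outer - inner) * density
    ```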

  20. Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    RGB-D cameras allow the capture of depth and color information at high data rates, and this makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters obtained after ICP, the 3D scene from the RGB images can be accurately registered to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
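
    As a toy illustration of the final fusion step, the sketch below applies an ICP-derived rigid transform (here a made-up 4 x 4 matrix) to one point cloud before stacking it with the other; this is a generic registration sketch, not the authors' pipeline.

```python
import numpy as np

def apply_rigid_transform(points, T):
    """Apply a 4x4 homogeneous rigid transform T to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Hypothetical registration parameters, e.g. as returned by an ICP run:
# a 5-degree rotation about z plus a small translation.
theta = np.deg2rad(5.0)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 0.10],
              [np.sin(theta),  np.cos(theta), 0.0, 0.02],
              [0.0,            0.0,           1.0, 0.05],
              [0.0,            0.0,           0.0, 1.0]])

rgb_cloud = np.random.rand(1000, 3)    # stand-in for the RGB-derived scene
depth_cloud = np.random.rand(1000, 3)  # stand-in for the depth-derived scene
fused = np.vstack([apply_rigid_transform(rgb_cloud, T), depth_cloud])
print(fused.shape)
```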

  1. Trackline and point detection probabilities for acoustic surveys of Cuvier's and Blainville's beaked whales.

    PubMed

    Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha

    2013-09-01

    Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
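
    At its core, the single-instant g(0) described here is the fraction of time an animal spends producing regular clicks; a minimal sketch of that bookkeeping, with made-up bout times, is:

```python
import numpy as np

def instantaneous_g0(click_starts, click_ends, record_duration):
    """Estimate g(0) as the fraction of time an animal produces regular clicks.

    click_starts, click_ends -- arrays of vocal-bout start/end times (s)
    record_duration          -- total tag record length (s)
    """
    clicking_time = np.sum(np.asarray(click_ends) - np.asarray(click_starts))
    return clicking_time / record_duration

# Toy example: a whale that clicks during two foraging dives in a 6 h record
g0 = instantaneous_g0(click_starts=[1800, 12600],
                      click_ends=[4500, 15600],
                      record_duration=6 * 3600)
print(round(g0, 2))  # -> 0.26, of the same order as the published estimates
```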

  2. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608

  3. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  4. Method to estimate the effective temperatures of late-type giants using line-depth ratios in the wavelength range 0.97-1.32 μm

    NASA Astrophysics Data System (ADS)

    Taniguchi, Daisuke; Matsunaga, Noriyuki; Kobayashi, Naoto; Fukue, Kei; Hamano, Satoshi; Ikeda, Yuji; Kawakita, Hideyo; Kondo, Sohei; Sameshima, Hiroaki; Yasui, Chikako

    2018-02-01

    The effective temperature, one of the most fundamental atmospheric parameters of a star, can be estimated using various methods; here, we focus on a method using line-depth ratios (LDRs). This method combines low- and high-excitation lines and makes use of relations between LDRs of these line pairs and the effective temperature. It has an advantage, for example, of being minimally affected by interstellar reddening, which changes stellar colours. We report 81 relations between LDRs and effective temperature established with high-resolution, λ/Δλ ∼ 28 000, spectra of nine G- to M-type giants in the Y and J bands. Our analysis gives the first comprehensive set of LDR relations for this wavelength range. The combination of all these relations can be used to determine the effective temperatures of stars that have 3700 < Teff < 5400 K and -0.5 < [Fe/H] < +0.3 dex, to a precision of ±10 K in the best cases.
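
    Each LDR relation maps a measured line-depth ratio to a temperature, and combining many pairs tightens the estimate. A minimal sketch with invented linear calibrations of the form Teff = a + b*r (the real relations and coefficients are in the paper):

```python
import numpy as np

# Hypothetical LDR calibrations, one per line pair; the coefficients below
# are illustrative placeholders, not the published relations.
calibrations = [
    {"a": 6200.0, "b": -1500.0, "sigma": 60.0},
    {"a": 5900.0, "b": -1200.0, "sigma": 45.0},
    {"a": 6050.0, "b": -1350.0, "sigma": 80.0},
]

def teff_from_ldrs(ratios, calibrations):
    """Combine per-pair temperature estimates with inverse-variance weights."""
    estimates = np.array([c["a"] + c["b"] * r for c, r in zip(calibrations, ratios)])
    weights = np.array([1.0 / c["sigma"] ** 2 for c in calibrations])
    teff = np.sum(weights * estimates) / np.sum(weights)
    err = 1.0 / np.sqrt(np.sum(weights))  # formal error of the weighted mean
    return teff, err

teff, err = teff_from_ldrs([0.85, 0.92, 0.78], calibrations)
print(f"Teff = {teff:.0f} +/- {err:.0f} K")
```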

  5. Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method

    NASA Astrophysics Data System (ADS)

    Lee, Hikweon; Ong, See Hong

    2018-03-01

    At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° − half of the breakout width) is compared to that observed along the borehole to determine a set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests have been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
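
    The grid search at the heart of the method varies (Sh, SH) and keeps the pair whose predicted breakout width best matches the observations. The sketch below uses the textbook Kirsch hoop-stress expression at the wall of a vertical borehole as a stand-in forward model (mud pressure, pore pressure and thermal stresses are neglected; the strength and grid values are invented); it is not the paper's calibrated model.

```python
import numpy as np

def breakout_width_deg(Sh, SH, C0, n=721):
    """Breakout width (degrees) on a vertical borehole wall.

    Kirsch hoop stress at the wall: sigma_theta(t) = Sh + SH - 2*(SH - Sh)*cos(2t),
    with t measured from the SH azimuth. Breakout is assumed wherever
    sigma_theta >= C0 (the rock strength).
    """
    t = np.linspace(0.0, np.pi, n)
    sigma_theta = Sh + SH - 2.0 * (SH - Sh) * np.cos(2.0 * t)
    return 180.0 * np.mean(sigma_theta >= C0)  # failed arc within one half-circle

def grid_search_stresses(observed_width_deg, C0, s_grid):
    """Return the (Sh, SH) pair whose predicted width best fits the observation."""
    best, best_misfit = None, np.inf
    for Sh in s_grid:
        for SH in s_grid:
            if SH < Sh:  # by definition SH >= Sh
                continue
            misfit = abs(breakout_width_deg(Sh, SH, C0) - observed_width_deg)
            if misfit < best_misfit:
                best, best_misfit = (Sh, SH), misfit
    return best

# Toy run: strength 60 MPa, observed breakout width of 40 degrees
print(grid_search_stresses(40.0, C0=60.0, s_grid=np.arange(20.0, 60.5, 0.5)))
```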

  6. Depth to Curie temperature across the central Red Sea from magnetic data using the de-fractal method

    NASA Astrophysics Data System (ADS)

    Salem, Ahmed; Green, Chris; Ravat, Dhananjay; Singh, Kumar Hemant; East, Paul; Fairhead, J. Derek; Mogren, Saad; Biegert, Ed

    2014-06-01

    The central Red Sea rift is considered to be an embryonic ocean. It is characterised by high heat flow, with more than 90% of the heat flow measurements exceeding the world mean and high values extending to the coasts - providing good prospects for geothermal energy resources. In this study, we aim to map the depth to the Curie isotherm (580 °C) in the central Red Sea based on magnetic data. A modified spectral analysis technique, the “de-fractal spectral depth method” is developed and used to estimate the top and bottom boundaries of the magnetised layer. We use a mathematical relationship between the observed power spectrum due to fractal magnetisation and an equivalent random magnetisation power spectrum. The de-fractal approach removes the effect of fractal magnetisation from the observed power spectrum and estimates the parameters of depth to top and depth to bottom of the magnetised layer using iterative forward modelling of the power spectrum. We applied the de-fractal approach to 12 windows of magnetic data along a profile across the central Red Sea from onshore Sudan to onshore Saudi Arabia. The results indicate variable magnetic bottom depths ranging from 8.4 km in the rift axis to about 18.9 km in the marginal areas. Comparison of these depths with published Moho depths, based on seismic refraction constrained 3D inversion of gravity data, showed that the magnetic bottom in the rift area corresponds closely to the Moho, whereas in the margins it is considerably shallower than the Moho. Forward modelling of heat flow data suggests that depth to the Curie isotherm in the centre of the rift is also close to the Moho depth. Thus Curie isotherm depths estimated from magnetic data may well be imaging the depth to the Curie temperature along the whole profile. Geotherms constrained by the interpreted Curie isotherm depths have subsequently been calculated at three points across the rift - indicating the variation in the likely temperature profile with depth.

  7. Source depth dependence of micro-tsunamis recorded with ocean-bottom pressure gauges: The January 28, 2000 Mw 6.8 earthquake off Nemuro Peninsula, Japan

    USGS Publications Warehouse

    Hirata, K.; Takahashi, H.; Geist, E.; Satake, K.; Tanioka, Y.; Sugioka, H.; Mikada, H.

    2003-01-01

    Micro-tsunami waves with a maximum amplitude of 4-6 mm were detected with the ocean-bottom pressure gauges on a cabled deep seafloor observatory south of Hokkaido, Japan, following the January 28, 2000 earthquake (Mw 6.8) in the southern Kuril subduction zone. We model the observed micro-tsunami and estimate the focal depth and other source parameters such as fault length and amount of slip using grid searching with the least-squares method. The source depth and stress drop for the January 2000 earthquake are estimated to be 50 km and 7 MPa, respectively, with possible ranges of 45-55 km and 4-13 MPa. The focal depth of typical inter-plate earthquakes in this region ranges from 10 to 20 km, and the stress drop of inter-plate earthquakes is generally around 3 MPa. The source depth and stress drop estimates suggest that the earthquake was an intra-slab event in the subducting Pacific plate, rather than an inter-plate event. In addition, for a prescribed fault width of 30 km, the fault length is estimated to be 15 km, with possible ranges of 10-20 km, consistent with the previously determined aftershock distribution. The corresponding estimate for seismic moment is 2.7 × 10^19 Nm, with possible ranges of 2.3 × 10^19 to 3.2 × 10^19 Nm. Standard tide gauges along the nearby coast did not record any tsunami signal. High-precision ocean-bottom pressure measurements offshore thus make it possible to determine fault parameters of moderate-sized earthquakes in subduction zones using open-ocean tsunami waveforms.

  8. A method for estimating the diffuse attenuation coefficient (KdPAR)from paired temperature sensors

    USGS Publications Warehouse

    Read, Jordan S.; Rose, Kevin C.; Winslow, Luke A.; Read, Emily K.

    2015-01-01

    A new method for estimating the diffuse attenuation coefficient for photosynthetically active radiation (KdPAR) from paired temperature sensors was derived. We show that in cases where the attenuation of penetrating shortwave solar radiation is the dominant source of temperature changes, time series measurements of water temperatures at two depths (z1 and z2) are related to one another by a linear scaling factor (a). KdPAR can then be estimated by the simple equation KdPAR = ln(a)/(z2 − z1). A suggested workflow is presented that outlines procedures for calculating KdPAR according to this paired temperature sensor (PTS) method. This method is best suited for conditions when radiative temperature gains are large relative to physical noise. These conditions occur frequently on water bodies with low wind and/or high KdPARs but can be used for other types of lakes during time periods of low wind and/or where spatially redundant measurements of temperatures are available. The optimal vertical placement of temperature sensors according to a priori knowledge of KdPAR is also described. This information can be used to inform the design of future sensor deployments using the PTS method or for campaigns where characterizing sub-daily changes in temperatures is important. The PTS method provides a novel way to characterize light attenuation in aquatic ecosystems without expensive radiometric equipment or the user subjectivity inherent in Secchi depth measurements. This method also can enable the estimation of KdPAR at higher frequencies than many manual monitoring programs allow.
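
    A minimal sketch of the PTS calculation, assuming a is the slope relating temperature changes at the shallow sensor to those at the deep sensor (the direction of the ratio is chosen here so that Kd comes out positive, and the synthetic forcing is invented):

```python
import numpy as np

def kd_from_paired_sensors(temp_z1, temp_z2, z1, z2):
    """Estimate Kd(PAR) from temperature time series at depths z1 < z2 (m).

    During radiatively dominated heating, dT(z1) ~= a * dT(z2); a is fit as a
    no-intercept least squares slope and Kd follows as ln(a)/(z2 - z1)
    (reconstructed from the PTS description).
    """
    dT1 = np.diff(temp_z1)
    dT2 = np.diff(temp_z2)
    a = np.sum(dT2 * dT1) / np.sum(dT2 ** 2)  # slope of dT(z1) = a * dT(z2)
    return np.log(a) / (z2 - z1)

# Synthetic check: heating attenuates as exp(-Kd * z) with Kd = 0.7 m^-1
kd_true, z1, z2 = 0.7, 0.5, 2.0
t = np.arange(0, 12, 0.25)                      # hours
surface_heating = 0.2 * np.sin(np.pi * t / 12)  # degC per step, daytime pulse
temp_z1 = np.cumsum(surface_heating * np.exp(-kd_true * z1))
temp_z2 = np.cumsum(surface_heating * np.exp(-kd_true * z2))
print(kd_from_paired_sensors(temp_z1, temp_z2, z1, z2))  # -> ~0.7
```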

  9. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    Treesearch

    Charles H. Luce; Daniele Tonina; Frank Gariglio; Ralph Applebee

    2013-01-01

    Work over the last decade has documented methods for estimating fluxes between streams and streambeds from time series of temperature at two depths in the streambed. We present substantial extension to the existing theory and practice of using temperature time series to estimate streambed water fluxes and thermal properties, including (1) a new explicit analytical...

  10. Discoloration of polyvinyl chloride (PVC) tape as a proxy for water-table depth in peatlands: validation and assessment of seasonal variability

    USGS Publications Warehouse

    Booth, Robert K.; Hotchkiss, Sara C.; Wilcox, Douglas A.

    2005-01-01

    Summary: 1. Discoloration of polyvinyl chloride (PVC) tape has been used in peatland ecological and hydrological studies as an inexpensive way to monitor changes in water-table depth and reducing conditions. 2. We investigated the relationship between depth of PVC tape discoloration and measured water-table depth at monthly time steps during the growing season within nine kettle peatlands of northern Wisconsin. Our specific objectives were to: (1) determine if PVC discoloration is an accurate method of inferring water-table depth in Sphagnum-dominated kettle peatlands of the region; (2) assess seasonal variability in the accuracy of the method; and (3) determine if systematic differences in accuracy occurred among microhabitats, PVC tape colour and peatlands. 3. Our results indicated that PVC tape discoloration can be used to describe gradients of water-table depth in kettle peatlands. However, accuracy differed among the peatlands studied, and was systematically biased in early spring and late summer/autumn. Regardless of the month when the tape was installed, the highest elevations of PVC tape discoloration showed the strongest correlation with midsummer (around July) water-table depth and average water-table depth during the growing season. 4. The PVC tape discoloration method should be used cautiously when precise estimates of seasonal changes in the water-table are needed.

  11. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined false feature-match rejection method is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined in tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  12. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined false feature-match rejection method is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined in tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.

  13. Energy dissipation of slot-type flip buckets

    NASA Astrophysics Data System (ADS)

    Wu, Jian-hua; Li, Shu-fang; Ma, Fei

    2018-03-01

    The energy dissipation is a key index in the evaluation of energy dissipation elements. In the present work, a flip bucket with a slot, called the slot-type flip bucket, is theoretically and experimentally investigated by the method of estimating the energy dissipation. The theoretical analysis shows that, in order to obtain the energy dissipation, it is necessary to determine the sequent flow depth h1 and the flow speed V1 at the corresponding position through the flow depth h2 after the hydraulic jump. The relative flow depth h2/ho is a function of the approach flow Froude number Fro, the relative slot width b/Bo, and the relative slot angle θ/β. The expression for estimating the energy dissipation is developed, and the maximum error is not larger than 9.21%.

  14. Energy dissipation of slot-type flip buckets

    NASA Astrophysics Data System (ADS)

    Wu, Jian-hua; Li, Shu-fang; Ma, Fei

    2018-04-01

    The energy dissipation is a key index in the evaluation of energy dissipation elements. In the present work, a flip bucket with a slot, called the slot-type flip bucket, is theoretically and experimentally investigated by the method of estimating the energy dissipation. The theoretical analysis shows that, in order to obtain the energy dissipation, it is necessary to determine the sequent flow depth h1 and the flow speed V1 at the corresponding position through the flow depth h2 after the hydraulic jump. The relative flow depth h2/ho is a function of the approach flow Froude number Fro, the relative slot width b/Bo, and the relative slot angle θ/β. The expression for estimating the energy dissipation is developed, and the maximum error is not larger than 9.21%.
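
    Working back from the measured post-jump depth to h1, V1 and the dissipated head is standard hydraulics; a minimal sketch using the Belanger sequent-depth relation (the paper's slot-specific expression is not reproduced, and the numbers are invented):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def energy_dissipation(h2, q):
    """Energy dissipated across a hydraulic jump, working back from h2.

    h2 -- downstream (sequent) flow depth (m), measured after the jump
    q  -- unit discharge (m^2/s)
    Returns (h1, V1, dE): upstream depth, upstream speed, head loss (m).
    """
    Fr2 = q / (h2 * math.sqrt(G * h2))  # downstream Froude number
    h1 = 0.5 * h2 * (math.sqrt(1.0 + 8.0 * Fr2**2) - 1.0)  # Belanger relation
    V1 = q / h1
    E1 = h1 + V1**2 / (2.0 * G)
    E2 = h2 + (q / h2)**2 / (2.0 * G)
    return h1, V1, E1 - E2

h1, V1, dE = energy_dissipation(h2=2.0, q=3.0)
print(f"h1 = {h1:.2f} m, V1 = {V1:.2f} m/s, head loss = {dE:.2f} m")
```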

  15. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
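
    The underlying velocity-area computation to which the IVE attaches uncertainty is a simple mid-section sum; a sketch with invented verticals (this is the generic discharge formula, not the IVE variance machinery):

```python
import numpy as np

def midsection_discharge(stations, depths, velocities):
    """Velocity-area (mid-section) discharge: Q = sum(v_i * d_i * w_i).

    stations   -- distances from the bank to each vertical (m)
    depths     -- water depth at each vertical (m)
    velocities -- mean velocity at each vertical (m/s)
    """
    b = np.asarray(stations, dtype=float)
    # width assigned to each vertical: half the span to its two neighbours
    w = np.empty_like(b)
    w[1:-1] = (b[2:] - b[:-2]) / 2.0
    w[0] = (b[1] - b[0]) / 2.0
    w[-1] = (b[-1] - b[-2]) / 2.0
    return np.sum(np.asarray(velocities) * np.asarray(depths) * w)

Q = midsection_discharge(stations=[0, 2, 4, 6, 8, 10],
                         depths=[0.0, 0.8, 1.4, 1.5, 0.9, 0.0],
                         velocities=[0.0, 0.4, 0.7, 0.8, 0.5, 0.0])
print(f"Q = {Q:.2f} m^3/s")
```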

  16. All-weather Land Surface Temperature Estimation from Satellite Data

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Zhang, X.

    2017-12-01

    Satellite remote sensing, including the thermal infrared (TIR) and passive microwave (MW), provides the possibility to observe land surface temperature (LST) at large scales. To better model land surface processes at high temporal resolution, all-weather LST from satellite data is desirable. However, estimation of all-weather LST faces great challenges. On the one hand, TIR remote sensing is limited to clear-sky situations; this drawback reduces its usefulness under cloudy conditions considerably, especially in regions with frequent and/or permanent clouds. On the other hand, MW remote sensing suffers from much greater thermal sampling depth (TSD) and coarser spatial resolution than TIR; thus, MW LST is generally lower than TIR LST, especially at daytime. Two case studies addressing these challenges are presented here. The first is the development of a novel thermal sampling depth correction method (TSDC) to estimate the MW LST over barren land; the second is the development of a feasible method to merge the TIR and MW LSTs by addressing the coarse resolution of the latter. In the first study, the core of the TSDC method is a new formulation of the passive microwave radiation balance equation, which allows linking bulk MW radiation to the soil temperature at a specific depth, i.e. the representative temperature; this temperature is then converted to LST through an adapted soil heat conduction equation. The TSDC method is applied to the 6.9 GHz channel in vertical polarization of AMSR-E. Evaluation shows that LST estimated by the TSDC method agrees well with the MODIS LST. Validation is based on in-situ LSTs measured at the Gobabeb site in western Namibia. The results demonstrate the high accuracy of the TSDC method: it yields a root-mean-square error (RMSE) of 2 K and negligible systematic error over barren land. In the second study, the method consists of two core processes: (1) estimation of MW LST from MW brightness temperature and (2) three-time-scale decomposition of LST. The method is applied to two MW sensors (i.e. AMSR-E and AMSR2) and MODIS in northeast China and its surrounding area, with dominant land covers of forest and cropland. By comparing against the in-situ LST and surface air temperature, we find the merged LST has similar accuracy to the version 6 MODIS LST and good image quality.

  17. A comparison of two methods for quantifying soil organic carbon of alpine grasslands on the Tibetan Plateau.

    PubMed

    Chen, Litong; Flynn, Dan F B; Jing, Xin; Kühn, Peter; Scholten, Thomas; He, Jin-Sheng

    2015-01-01

    As CO2 concentrations continue to rise and drive global climate change, much effort has been put into estimating soil carbon (C) stocks and dynamics over time. However, the inconsistent methods employed by researchers hamper the comparability of such works, creating a pressing need to standardize soil organic C (SOC) quantification across the various methods. Here, we collected 712 soil samples from 36 sites of alpine grasslands on the Tibetan Plateau covering different soil depths and vegetation and soil types. We used an elemental analyzer for soil total C (STC) and an inorganic carbon analyzer for soil inorganic C (SIC), and then defined the difference between STC and SIC as SOCCNS. In addition, we employed the modified Walkley-Black (MWB) method, hereafter SOCMWB. Our results showed that there was a strong correlation between SOCCNS and SOCMWB across the data set, given the application of a correction factor of 1.103. Soil depth and soil type significantly influenced the recovery, defined as the ratio of SOCMWB to SOCCNS, and the recovery was closely associated with soil carbonate content and pH value as well. The differences in recovery between alpine meadow and steppe were largely driven by soil pH. In addition, a relatively strong correlation between SOCCNS and STC was found, suggesting that it is feasible to estimate SOCCNS stocks through the STC data across the Tibetan grasslands. Therefore, our results suggest that in order to accurately estimate the absolute SOC stocks and their change in the Tibetan alpine grasslands, adequate correction of the modified WB measurements is essential, with proper consideration of the effects of soil types, vegetation, soil pH and soil depth.

  18. A Comparison of Two Methods for Quantifying Soil Organic Carbon of Alpine Grasslands on the Tibetan Plateau

    PubMed Central

    Chen, Litong; Flynn, Dan F. B.; Jing, Xin; Kühn, Peter; Scholten, Thomas; He, Jin-Sheng

    2015-01-01

    As CO2 concentrations continue to rise and drive global climate change, much effort has been put into estimating soil carbon (C) stocks and dynamics over time. However, the inconsistent methods employed by researchers hamper the comparability of such works, creating a pressing need to standardize soil organic C (SOC) quantification across the various methods. Here, we collected 712 soil samples from 36 sites of alpine grasslands on the Tibetan Plateau covering different soil depths and vegetation and soil types. We used an elemental analyzer for soil total C (STC) and an inorganic carbon analyzer for soil inorganic C (SIC), and then defined the difference between STC and SIC as SOCCNS. In addition, we employed the modified Walkley-Black (MWB) method, hereafter SOCMWB. Our results showed that there was a strong correlation between SOCCNS and SOCMWB across the data set, given the application of a correction factor of 1.103. Soil depth and soil type significantly influenced the recovery, defined as the ratio of SOCMWB to SOCCNS, and the recovery was closely associated with soil carbonate content and pH value as well. The differences in recovery between alpine meadow and steppe were largely driven by soil pH. In addition, a relatively strong correlation between SOCCNS and STC was found, suggesting that it is feasible to estimate SOCCNS stocks through the STC data across the Tibetan grasslands. Therefore, our results suggest that in order to accurately estimate the absolute SOC stocks and their change in the Tibetan alpine grasslands, adequate correction of the modified WB measurements is essential, with proper consideration of the effects of soil types, vegetation, soil pH and soil depth. PMID:25946085
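
    The arithmetic behind the comparison is simple: SOC by the CNS route is total minus inorganic C, recovery is the MWB/CNS ratio, and the reported factor of 1.103 corrects the MWB values. A sketch with invented sample values:

```python
import numpy as np

CORRECTION = 1.103  # correction factor reported for the MWB method

def soc_cns(stc, sic):
    """SOC by the CNS route: total C minus inorganic C (same units as inputs)."""
    return np.asarray(stc) - np.asarray(sic)

def soc_mwb_corrected(soc_mwb, factor=CORRECTION):
    """Apply the correction factor to modified Walkley-Black SOC values."""
    return factor * np.asarray(soc_mwb)

# Toy sample set (g C per kg soil); the values are illustrative only
stc = np.array([45.0, 60.2, 32.1])
sic = np.array([5.5, 12.0, 2.3])
mwb = np.array([35.2, 44.8, 26.5])

reference = soc_cns(stc, sic)
recovery = mwb / reference  # ratio of MWB to CNS-based SOC
print("recovery:", np.round(recovery, 3))
print("corrected MWB:", np.round(soc_mwb_corrected(mwb), 1))
```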

  19. Application of the Shiono and Knight Method in asymmetric compound channels with different side slopes of the internal wall

    NASA Astrophysics Data System (ADS)

    Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.

    2018-03-01

    The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated for applying the SKM, namely the eddy viscosity coefficient (λ), the friction factor (f) and the secondary flow coefficient (k). There are several tested methods which can satisfactorily be used to estimate λ and f; however, calibrating the secondary flow coefficient k to account correctly for secondary flow effects is still problematic. In this paper, the calibration of secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017), which relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α), are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that the boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is the accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance for predicting the boundary shear stress over the shear layer region may not be improved by only imposing suitably calibrated values of the secondary flow coefficients, because of the difficulty of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.

  20. Mapping snow depth distribution in forested terrain using unmanned aerial vehicles and structure-from-motion

    NASA Astrophysics Data System (ADS)

    Webster, C.; Bühler, Y.; Schirmer, M.; Stoffel, A.; Giulia, M.; Jonas, T.

    2017-12-01

    Snow depth distribution in forests exhibits strong spatial heterogeneity compared to adjacent open sites. Measurement of snow depths in forests is currently limited to a) manual point measurements, which are sparse and time-intensive, b) ground-penetrating radar surveys, which have limited spatial coverage, or c) airborne LiDAR acquisitions, which are expensive and may deteriorate in denser forests. We present the application of unmanned aerial vehicles in combination with structure-from-motion (SfM) methods to photogrammetrically map snow depth distribution in forested terrain. Two separate flights were carried out 10 days apart across a heterogeneous forested area of 900 × 500 m. Corresponding snow depth maps were derived using both LiDAR-based and SfM-based DTM data obtained during snow-off conditions. Manual measurements collected following each flight were used to validate the snow depth maps. Snow depths were resolved at 5 cm resolution, and forest snow depth distribution structures such as tree wells and other areas of preferential melt were represented well. Differential snow depth maps showed maximum ablation on the exposed south sides of trees and smaller differences in the centre of gaps and on the north side of trees. This new application of SfM to map snow depth distribution in forests demonstrates a straightforward method for obtaining information that was previously only available through spatially limited manual ground-based measurements. These methods could therefore be extended to more frequent observation of snow depths in forests as well as estimating snow accumulation and depletion rates.
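
    The snow depth maps come from differencing a snow-on surface model against a snow-off terrain model; a minimal raster-differencing sketch with synthetic arrays (the masking threshold is an assumption):

```python
import numpy as np

def snow_depth_map(snow_on_dsm, snow_off_dtm, max_plausible=5.0):
    """Per-cell snow depth as the difference of co-registered surface models.

    snow_on_dsm  -- SfM surface model from a snow-on flight (m a.s.l.)
    snow_off_dtm -- LiDAR or SfM terrain model from snow-off conditions (m a.s.l.)
    Negative depths and implausible outliers (e.g. canopy mismatches) are masked.
    """
    hs = snow_on_dsm - snow_off_dtm
    return np.where((hs < 0.0) | (hs > max_plausible), np.nan, hs)

# Synthetic 4x4 tiles standing in for rasters of the two acquisitions
dtm = np.full((4, 4), 1500.0)
dsm = dtm + np.array([[0.4, 0.5, 0.6, 0.5],
                      [0.3, np.nan, 0.7, 0.6],  # no-data cell in the DSM
                      [0.2, 0.4, 6.5, 0.5],     # canopy artefact -> masked
                      [0.1, 0.3, 0.4, 0.4]])
print(np.round(snow_depth_map(dsm, dtm), 2))
```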

  1. Application of AN Empirically Scaled Digital Echo Integrator for Assessment of Juvenile Sockeye Salmon (oncorhynchus Nerka Walbaum) Populations.

    NASA Astrophysics Data System (ADS)

    Nunnallee, Edmund Pierce, Jr.

    1980-03-01

    This dissertation consists of an investigation into the empirical scaling of a digital echo integrator for assessment of a population of juvenile sockeye salmon in Cultus Lake, British Columbia, Canada. The scaling technique was developed over the last ten years for use with totally uncalibrated but stabilized data collection and analysis equipment, and has been applied to populations of fish over a wide geographical range. This is the first investigation into the sources of bias and the accuracy of the technique, however, and constitutes a verification of the method. The initial section of the investigation describes hydroacoustic data analysis methods for estimation of effective sampling volume, which is necessary for estimating fish density. The second section consists of a computer simulation of effective sample volume estimation by this empirical method and is used to investigate the degree of bias introduced by electronic and physical parameters such as boat speed-fish depth interaction effects, electronic thresholding and saturation, transducer beam angle, fish depth stratification by size and spread of the target strength distribution of the fish. Comparisons of simulation predictions of sample volume estimation bias to actual survey results are given at the end of this section. A verification of the scaling method is then presented by comparison of a hydroacoustically derived estimate of the Cultus Lake smolt population to an independent and concurrent estimate made by counting the migrant fish as they passed through a weir in the outlet stream of the lake. Finally, the effects on the conduct and accuracy of hydroacoustic assessment of juvenile sockeye salmon due to several behavioral traits are discussed. These traits include movements of presmolt fish in a lake just prior to their outmigration, daily vertical migrations and the emergence and dispersal of sockeye fry in Cultus Lake. In addition, a comparison of the summer depth preferences of the fish over their entire geographical distribution on the west coast of the U.S. and Canada is discussed in terms of hydroacoustic accessibility.

  2. Estimates of Cutoff Depths of Seismogenic Layer in Kanto Region from the High-Resolution Relocated Earthquake Catalog

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Yano, T. E.; Shiomi, K.

    2013-12-01

    Highly developed active fault evaluation is necessary particularly in the Kanto metropolitan area, where multiple major active fault zones exist. The cutoff depth of active faults is an important parameter since it is a good indicator for defining fault dimensions and hence the maximum expected magnitude. The depth is normally estimated from microseismicity, thermal structure, and the depths of the Curie point and the Conrad discontinuity. For instance, Omuralieva et al. (2012) estimated the cutoff depths for the whole of Japan by creating a 3-D relocated hypocenter catalog. However, its spatial resolution could be insufficient for robust active fault evaluation, since precision within 15 km, comparable to the minimum evaluated fault size, is preferred. Therefore, the spatial resolution of the earthquake catalog used to estimate the cutoff depth must be smaller than 15 km. This year we launched the Japan Unified hIgh-resolution relocated Catalog for Earthquakes (JUICE) Project (Yano et al., this fall meeting), whose objective is to create a precise and reliable earthquake catalog for all of Japan using waveform cross-correlation data and the Double-Difference relocation method (Waldhauser and Ellsworth, 2000). This catalog has higher precision of hypocenter determination than the routine one. In this study, we estimate high-resolution cutoff depths of the seismogenic layer using this catalog in the Kanto region, where the preliminary JUICE analysis has already been done. D90, the cutoff depth above which 90% of earthquakes occur, is often used as a reference for characterizing the seismogenic layer; the choice of 90% reflects the uncertainties associated with the depth errors of hypocenters. In this study we estimate D95, because a more precise and reliable catalog is now available from the JUICE project. First, we generate a 10 km equally spaced grid over our study area. Second, we pick hypocenters within a radius of 10 km from each grid point and arrange them into hypocenter groups. Finally, we estimate D95 from the hypocenter group at each grid point. In the analysis we apply three conditions: (1) the depths of the hypocenters used are less than 25 km; (2) the minimum number of hypocenters in a group is 25; and (3) low-frequency earthquakes are excluded. Our estimate of D95 shows undulating, fine-scale features, such as a different profile along the same fault. This can be seen at two major fault zones: (1) the Tachikawa fault zone, and (2) the northwest marginal fault zone of the Kanto basin. D95 deepens from northwest to southwest along these fault zones, suggesting that a constant cutoff depth cannot be used even along the same fault zone. Our D95 also deepens in the south Kanto region. The reason for this pattern could be that the hypocenters used in this study are contaminated by seismicity near the plate boundary between the Philippine Sea plate and the Eurasian plate. Therefore, D95 in the south Kanto region should be interpreted carefully.
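
    The per-node D95 computation reduces to a conditional percentile; a minimal sketch applying the study's stated conditions (crustal depths only and a minimum group size; the low-frequency-event filter is assumed to have been applied to the catalog beforehand):

```python
import numpy as np

def d95(depths_km, max_depth=25.0, min_count=25):
    """Cutoff depth containing 95% of crustal seismicity at one grid node.

    depths_km -- hypocentral depths (km) within the search radius of the node
    Returns np.nan when the study's conditions are not met.
    """
    d = np.asarray(depths_km, dtype=float)
    d = d[d < max_depth]    # condition (1): crustal events only
    if d.size < min_count:  # condition (2): enough hypocenters in the group
        return np.nan
    return np.percentile(d, 95.0)

rng = np.random.default_rng(0)
sample = rng.gamma(shape=4.0, scale=2.5, size=200)  # synthetic depth sample (km)
print(round(d95(sample), 1))
```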

  3. Determining Accuracy of Thermal Dissipation Methods-based Sap Flux in Japanese Cedar Trees

    NASA Astrophysics Data System (ADS)

    Su, Man-Ping; Shinohara, Yoshinori; Laplace, Sophie; Lin, Song-Jin; Kume, Tomonori

    2017-04-01

    The thermal dissipation method, a sap flux measurement technique that can estimate individual tree transpiration, has been widely used because of its low cost and uncomplicated operation. Although the method is widespread, its accuracy has recently been questioned because the tree species used in some previous studies were not suitable for Granier's empirical formula due to differences in wood characteristics. In Taiwan, Cryptomeria japonica (Japanese cedar) is one of the dominant species in mountainous areas, so quantifying the transpiration of Japanese cedar trees is indispensable for understanding water cycling there. However, no one has tested the accuracy of thermal dissipation-based sap flux for Japanese cedar trees in Taiwan. Thus, in this study we conducted a calibration experiment using twelve Japanese cedar stem segments from six trees to investigate the accuracy of thermal dissipation-based sap flux in Japanese cedar trees in Taiwan. By pumping water from the segment bottom to the top and inserting probes into the segments to collect data simultaneously, we compared sap flux densities calculated from real water uptake (Fd_actual) and from the empirical formula (Fd_Granier). The exact sapwood area and sapwood depth of each sample were obtained by dyeing the segments with safranin stain solution. Our results showed that Fd_Granier underestimated Fd_actual by 39% across sap flux densities ranging from 10 to 150 cm³ m⁻² s⁻¹; when the sapwood depth correction from Clearwater was applied, Fd_Granier became accurate, underestimating Fd_actual by only 0.01%. However, for sap flux densities ranging from 10 to 50 cm³ m⁻² s⁻¹, which is similar to the field data of Japanese cedar trees in a mountainous area of Taiwan, Fd_Granier underestimated Fd_actual by 51%, and by 26% with the Clearwater sapwood depth correction applied. These results suggest that sapwood depth significantly impacts the accuracy of the thermal dissipation method; hence, careful determination of sapwood depth is the key to accurate transpiration estimates. This study also applies the derived results to long-term field data from the mountainous area in Taiwan.
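
    Granier's calibration and the Clearwater sapwood correction referenced here are standard; a minimal sketch, assuming the common form of both equations (the example temperatures and the sapwood fraction are invented, and units follow the cm³ m⁻² s⁻¹ convention):

```python
def granier_fd(dT, dT_max):
    """Granier (1985) empirical sap flux density, Fd in cm^3 m^-2 s^-1.

    dT     -- measured temperature difference between heated and reference probes
    dT_max -- temperature difference at zero flow
    """
    K = (dT_max - dT) / dT
    return 119.0 * K ** 1.231

def clearwater_corrected_dT(dT, dT_max, a):
    """Clearwater et al. (1999) correction for a probe only partly in sapwood.

    a -- fraction of the probe length actually in conducting sapwood (0 < a <= 1)
    """
    return (dT - (1.0 - a) * dT_max) / a

# Toy reading: 10 degC at zero flow, 7.5 degC at midday, 60% of probe in sapwood
dT_max, dT = 10.0, 7.5
print(granier_fd(dT, dT_max))                                        # uncorrected
print(granier_fd(clearwater_corrected_dT(dT, dT_max, 0.6), dT_max))  # corrected
```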

  4. Isostatic GOCE Moho model for Iran

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi; Ebadi, Sahar; Tenzer, Robert

    2017-05-01

    One of the major issues associated with a regional Moho recovery from gravity or gravity-gradient data is the optimal choice of the mean compensation depth (i.e., the mean Moho depth) for a certain study area, typically for orogens characterised by large Moho depth variations. If a small value of the mean compensation depth is selected, the pattern of the deep Moho structure might not be reproduced realistically. Moreover, the definition of the mean compensation depth in existing isostatic models affects only the low degrees of the Moho spectrum. To overcome this problem, in this study we reformulate Sjöberg's and Jeffrey's methods of solving the Vening-Meinesz isostatic problem so that the mean compensation depth contributes to the whole Moho spectrum. Both solutions are then defined for the vertical gravity gradient, allowing the Moho depth to be estimated from the GOCE satellite gravity-gradiometry data. Moreover, gravimetric solutions provide realistic results only when a priori information on the crust and upper mantle structure is known (usually from seismic surveys) with relatively good accuracy. To investigate this aspect, we formulate our gravimetric solutions for a variable Moho density contrast to account for the variable density of the uppermost mantle below the Moho interface, while also taking into consideration density variations within the sediments and the consolidated crust down to the Moho interface. The developed theoretical models are applied to estimate the Moho depth from GOCE data over the regional study area of the Iranian tectonic block, including parts of the surrounding tectonic features. Our results indicate that the regional Moho depth differences between the Sjöberg and Jeffrey solutions, reaching up to about 3 km, are caused by a smoothing effect of Sjöberg's method. The validation of our results further shows relatively good agreement with regional seismic studies over most of the continental crust, but large discrepancies are detected under the Oman Sea and the Makran subduction zone. We explain these discrepancies by the low quality of seismic data offshore.

  5. An evaluation method of the profile of plasma-induced defects based on capacitance-voltage measurement

    NASA Astrophysics Data System (ADS)

    Okada, Yukimasa; Ono, Kouichi; Eriguchi, Koji

    2017-06-01

    Aggressive shrinkage and geometrical transition to three-dimensional structures in metal-oxide-semiconductor field-effect transistors (MOSFETs) lead to potentially serious problems regarding plasma processing such as plasma-induced physical damage (PPD). For the precise control of material processing and future device designs, it is extremely important to clarify the depth and energy profiles of PPD. Conventional methods to estimate the PPD profile (e.g., wet etching) are time-consuming. In this study, we propose an advanced method using a simple capacitance-voltage (C-V) measurement. The method first assumes the depth and energy profiles of defects in Si substrates, and then optimizes the C-V curves. We applied this methodology to evaluate the defect generation in (100), (111), and (110) Si substrates. No orientation dependence was found regarding the surface-oxide layers, whereas a large number of defects was assigned in the case of (110). The damaged layer thickness and areal density were estimated. This method provides the highly sensitive PPD prediction indispensable for designing future low-damage plasma processes.

  6. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  7. Reconstruction of the geometry of volcanic vents by trajectory tracking of fast ejecta - the case of the Eyjafjallajökull 2010 eruption (Iceland)

    NASA Astrophysics Data System (ADS)

    Dürig, Tobias; Gudmundsson, Magnus T.; Dellino, Pierfrancesco

    2015-05-01

    Two methods are introduced to estimate the depth of origin of ejecta trajectories (the depth to the magma level in the conduit) and the diameter of the conduit in an erupting crater, using analysis of videos from the Eyjafjallajökull 2010 eruption to evaluate their applicability. Both methods rely on the identification of straight, initial trajectories of fast ejecta, observed near the crater rims before they are appreciably bent by air drag and gravity. In the first method, by tracking these straight trajectories and identifying a cut-off angle, the inner diameter and the depth level of the vent can be constrained. In the second method, the intersection point of straight trajectories from individual pulses is used to determine the maximum possible depth from which the tracked ejecta originated and the width of the region from which the pulses emanated. The two methods give nearly identical results for the depth to the magma level in the crater of Eyjafjallajökull on 8 to 10 May: 51 ± 7 m. The inner vent diameter, at the level of origin of the pulses and ejecta, is found to have been 8 to 15 m. These methods open up the possibility of feeding (near) real-time monitoring systems with otherwise inaccessible information about vent geometry during an ongoing eruption and help define important eruption source parameters.
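
    Finding where straight trajectories converge is a least-squares line-intersection problem; a minimal 2D sketch (the geometry and numbers are invented):

```python
import numpy as np

def intersect_trajectories(points, directions):
    """Least-squares intersection of straight ejecta trajectories in 2D.

    points     -- (N, 2) array: one observed point per trajectory (e.g. at the rim)
    directions -- (N, 2) array: direction of each straight trajectory
    Returns the point minimizing the summed squared distances to all lines.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two toy trajectories emanating from a vent 50 m below the rim at x = 0
pts = np.array([[10.0, 0.0], [-8.0, 0.0]])
dirs = np.array([[10.0, 50.0], [-8.0, 50.0]])  # pointing up and outward
print(intersect_trajectories(pts, dirs))       # -> ~[0, -50]
```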

  8. Estimating locations and total magnetization vectors of compact magnetic sources from scalar, vector, or tensor magnetic measurements through combined Helbig and Euler analysis

    USGS Publications Warehouse

    Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.

    2007-01-01

    The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. © 2007 Society of Exploration Geophysicists.

  9. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
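
    The core idea, tying migrated image depths to well tops by least squares, can be reduced to a one-parameter sketch (a single constant velocity; all numbers invented):

```python
import numpy as np

def fit_migration_velocity(twt_s, well_depths_m):
    """Least-squares constant velocity tying seismic times to formation depths.

    twt_s         -- two-way traveltimes (s) of horizons picked at the well
    well_depths_m -- formation depths (m) of the same horizons from well logs
    Minimizes sum((v * t / 2 - z)^2) over v; a one-parameter stand-in for the
    depth-misfit inversion described in the abstract.
    """
    t = np.asarray(twt_s)
    z = np.asarray(well_depths_m)
    return 2.0 * np.sum(t * z) / np.sum(t ** 2)

v = fit_migration_velocity(twt_s=[0.8, 1.2, 1.6], well_depths_m=[980, 1530, 2100])
print(f"v = {v:.0f} m/s")  # depth = v * t / 2 then reproduces the well tops
```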

  10. Towards the Moho depth and Moho density contrast along with their uncertainties from seismic and satellite gravity observations

    NASA Astrophysics Data System (ADS)

    Abrehdary, M.; Sjöberg, L. E.; Bagherbandi, M.; Sampietro, D.

    2017-12-01

    We present a combined method for estimating a new global Moho model, named KTH15C, containing Moho depth and Moho density contrast (or shortly, Moho parameters) from a combination of global models of gravity (GOCO05S), topography (DTM2006) and seismic information (CRUST1.0 and MDN07) to a resolution of 1° × 1°, based on a solution of Vening Meinesz-Moritz' inverse problem of isostasy. This paper also aims at modelling the observation standard errors propagated from the Vening Meinesz-Moritz and CRUST1.0 models in estimating the uncertainty of the final Moho model. The numerical results yield Moho depths ranging from 6.5 to 70.3 km and estimated Moho density contrasts ranging from 21 to 650 kg/m³. Moreover, test computations show that in most areas the estimated uncertainties in the parameters are less than 3 km and 50 kg/m³, respectively, but they reach more significant values under the Gulf of Mexico, Chile, the Eastern Mediterranean, the Timor Sea and parts of the polar regions. Comparing the Moho depths estimated by KTH15C with those derived from the KTH11C, GEMMA2012C, CRUST1.0, KTH14C, CRUST14 and GEMMA1.0 models shows that KTH15C agrees fairly well with CRUST1.0 but rather poorly with the other models. The Moho density contrasts estimated by KTH15C and those of the KTH11C, KTH14C and VMM models agree to 112, 31 and 61 kg/m³ in RMS. The regional numerical studies show that the RMS differences between KTH15C and Moho depths from seismic information yield fits of 2 to 4 km in South and North America, Africa, Europe, Asia, Australia and Antarctica, respectively.

  11. Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-cryogenic Spitzer

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M.; Morello, G.; Stevenson, Kevin B.; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa

    2016-08-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique, the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than three times the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10-epoch sample. To assess the accuracy of the techniques as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10-epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can measure repeatable and accurate single eclipse depths, with close to photon-limited results.

  12. Continuous wavelet transform and Euler deconvolution method and their application to magnetic field data of Jharia coalfield, India

    NASA Astrophysics Data System (ADS)

    Singh, Arvind; Singh, Upendra Kumar

    2017-02-01

    This paper deals with the application of the continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depths from magnetic anomalies. These methods are utilized mainly to address the fundamental issue of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the source of the magnetic field by transferring the data into an auxiliary space with the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. Applied to the magnetic field data, the mean depths of the causative sources indicate differing lithospheric depths across the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions, respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
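
    Euler deconvolution solves a small linear system per data window; a self-contained sketch with an exact synthetic field (structural index N = 3 for a compact source; coordinates and amplitude invented):

```python
import numpy as np

def euler_deconvolution(x, y, z, T, Tx, Ty, Tz, N=3.0):
    """Classic Euler deconvolution over one data window.

    Solves (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = N*(B - T) in the
    least-squares sense for the source location (x0, y0, z0) and the
    background field B; N is the structural index.
    """
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    rhs = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol  # x0, y0, z0, B

# Synthetic field of a compact (N = 3) source at (1.5, 0.5, 2.0), sampled on a
# small surface grid; the gradients are analytic here but would be measured or
# computed from the anomaly map in practice.
gx, gy = np.meshgrid(np.linspace(0.0, 3.0, 6), np.linspace(-1.0, 2.0, 6))
x, y, z = gx.ravel(), gy.ravel(), np.zeros(gx.size)
dx, dy, dz = x - 1.5, y - 0.5, z - 2.0
r2 = dx**2 + dy**2 + dz**2
T = 1000.0 * r2 ** -1.5
Tx, Ty, Tz = (-3000.0 * r2 ** -2.5 * d for d in (dx, dy, dz))
print(np.round(euler_deconvolution(x, y, z, T, Tx, Ty, Tz), 3))  # ~[1.5 0.5 2. 0.]
```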

  13. Automatic processing of high-rate, high-density multibeam echosounder data

    NASA Astrophysics Data System (ADS)

    Calder, B. R.; Mayer, L. A.

    2003-06-01

    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The greatly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer-aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses which are then tracked and updated on-line as more data is gathered. Consequently, we can extract at any time a "current-best" estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real-time as data is being gathered. The data assimilation scheme is sufficiently robust to deal with typical survey echosounder errors. Robustness is improved by pre-conditioning the data, and allowing the depth model to be incrementally defined. A model monitoring scheme ensures that inconsistent data are maintained as separate but internally consistent depth hypotheses. A disambiguation of these competing hypotheses is only carried out when required by the user. The algorithm has a low memory footprint, runs faster than data can currently be gathered, and is suitable for real-time use. We call this algorithm CUBE (Combined Uncertainty and Bathymetry Estimator). We illustrate CUBE on two data sets gathered in shallow water with different instruments and for different purposes. We show that the algorithm is robust to even gross failure modes, and reliably processes the vast majority of the data. In both cases, we confirm that the estimates made by CUBE are statistically similar to those generated by hand.
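
    The hypothesis bookkeeping at a single estimation node can be captured in a few lines. The sketch below is a toy reduction (scalar Kalman updates plus an innovation gate, with assumed gate and noise values), not the production CUBE implementation:

    ```python
    import numpy as np

    class DepthNode:
        """Toy CUBE-style node: depth hypotheses as [depth, variance, support]."""

        def __init__(self, gate=3.0):
            self.hypotheses = []        # each entry: [depth, variance, n_soundings]
            self.gate = gate            # innovation gate, in standard deviations

        def assimilate(self, z, var):
            for h in self.hypotheses:
                innovation = z - h[0]
                if abs(innovation) <= self.gate * np.sqrt(h[1] + var):
                    gain = h[1] / (h[1] + var)     # scalar Kalman gain
                    h[0] += gain * innovation      # update depth estimate
                    h[1] *= 1.0 - gain             # shrink its variance
                    h[2] += 1
                    return
            self.hypotheses.append([z, var, 1])    # inconsistent: new hypothesis

        def best(self):
            """Disambiguate by the number of supporting soundings."""
            return max(self.hypotheses, key=lambda h: h[2])

    node = DepthNode()
    rng = np.random.default_rng(1)
    for z in 10.0 + 0.05 * rng.standard_normal(50):   # consistent soundings
        node.assimilate(z, var=0.05**2)
    node.assimilate(4.0, var=0.05**2)                  # one gross blunder
    depth, var, n = node.best()
    print(f"best depth {depth:.2f} m ({n} soundings, {len(node.hypotheses)} hypotheses)")
    ```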

  14. Uncertainty in Estimates of Net Seasonal Snow Accumulation on Glaciers from In Situ Measurements

    NASA Astrophysics Data System (ADS)

    Pulwicki, A.; Flowers, G. E.; Radic, V.

    2017-12-01

    Accurately estimating the net seasonal snow accumulation (or "winter balance") on glaciers is central to assessing glacier health and predicting glacier runoff. However, measuring and modeling snow distribution is inherently difficult in mountainous terrain, resulting in high uncertainties in estimates of winter balance. Our work focuses on uncertainty attribution within the process of converting direct measurements of snow depth and density to estimates of winter balance. We collected more than 9000 direct measurements of snow depth across three glaciers in the St. Elias Mountains, Yukon, Canada in May 2016. Linear regression (LR) and simple kriging (SK), combined with cross-correlation and Bayesian model averaging, are used to interpolate estimates of snow water equivalent (SWE) from snow depth and density measurements. Snow distribution patterns are found to differ considerably between glaciers, highlighting strong inter- and intra-basin variability. Elevation is found to be the dominant control of the spatial distribution of SWE, but the relationship varies considerably between glaciers. A simple parameterization of wind redistribution is also a small but statistically significant predictor of SWE. The SWE estimated for one study glacier has a short range parameter (90 m) and both LR and SK estimate a winter balance of 0.6 m w.e. but are poor predictors of SWE at measurement locations. The other two glaciers have longer SWE range parameters (~450 m) and, due to differences in extrapolation, SK estimates are more than 0.1 m w.e. (up to 40%) lower than LR estimates. By using a Monte Carlo method to quantify the effects of various sources of uncertainty, we find that the interpolation of estimated values of SWE is a larger source of uncertainty than the assignment of snow density or than the representation of the SWE value within a terrain model grid cell. For our study glaciers, the total winter balance uncertainty ranges from 0.03 (8%) to 0.15 (54%) m w.e., depending primarily on the interpolation method. Despite the challenges associated with accurately and precisely estimating winter balance, our results are consistent with the previously reported regional accumulation gradient.
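
    The Monte Carlo uncertainty attribution described above can be illustrated with a toy propagation: perturb the assigned density and an interpolation error term, and look at the spread of the resulting winter balance. All numbers below are invented for illustration; they are not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def winter_balance(depths_m, density_kg_m3):
        """Glacier-averaged SWE (m w.e.) from depth samples and a bulk density."""
        return (depths_m * density_kg_m3 / 1000.0).mean()

    depths = rng.gamma(shape=4.0, scale=0.5, size=9000)   # hypothetical depth survey (m)

    # Monte Carlo: perturb the assigned density and add an interpolation error term
    estimates = []
    for _ in range(5000):
        density = rng.normal(350.0, 30.0)                 # assumed density uncertainty
        interp_error = rng.normal(0.0, 0.08)              # assumed interpolation error (m w.e.)
        estimates.append(winter_balance(depths, density) + interp_error)

    estimates = np.array(estimates)
    print(f"winter balance: {estimates.mean():.2f} +/- {estimates.std():.2f} m w.e.")
    ```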

  15. Determination of thermal wave reflection coefficient to better estimate defect depth using pulsed thermography

    NASA Astrophysics Data System (ADS)

    Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn

    2017-11-01

    Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth is still a big challenge because thermographic signals are typically corrupted by imaging noise and affected by 3D heat conduction. Existing methods based on numerical models are susceptible to signal noise, and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method to improve the measurement accuracy of subsurface defect depth by determining the thermal wave reflection coefficient, usually assumed to be known in advance, directly from the observed data. This is achieved by introducing a new heat transfer model that includes multiple physical parameters to better describe the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement is improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is also validated through a case study on characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, which have wide application across industry.
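
    A common analytical backbone for such work is the 1-D thermal wave model of the post-flash surface temperature over a defect at depth L, with the interface reflection coefficient R appearing in a series of image terms. The sketch below fits both L and R to a synthetic cooling curve; the model form and the CFRP diffusivity are textbook-style assumptions, not the paper's exact multi-parameter model:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    ALPHA = 4.2e-7          # assumed CFRP thermal diffusivity, m^2/s

    def surface_temp(t, A, L, R):
        """1-D pulsed-thermography surface response over a defect at depth L.

        A lumps pulse energy and effusivity; R is the thermal wave reflection
        coefficient at the defect interface; 20 image terms are summed.
        """
        s = np.ones_like(t)
        for n in range(1, 21):
            s += 2.0 * R**n * np.exp(-((n * L) ** 2) / (ALPHA * t))
        return A / np.sqrt(np.pi * t) * s

    # Synthetic noisy cooling curve: defect at 1.0 mm, R = 0.6
    t = np.linspace(0.01, 10.0, 400)
    rng = np.random.default_rng(3)
    data = surface_temp(t, 5.0, 1.0e-3, 0.6) + 0.01 * rng.standard_normal(t.size)

    (A, L, R), _ = curve_fit(surface_temp, t, data, p0=[1.0, 5.0e-4, 0.5])
    print(f"estimated depth {L * 1e3:.2f} mm, reflection coefficient {R:.2f}")
    ```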

  16. A new automated method for the determination of cross-section limits in ephemeral gullies

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Ángel Campo-Bescós, Miguel; Casalí, Javier; Giménez, Rafael

    2017-04-01

    The assessment of gully erosion relies on the estimation of the soil volume enclosed by cross-section limits. Both 3D and 2D methods require a methodology for determining the cross-section limits, which has traditionally been carried out in two ways: a) by visual inspection of the cross-section by an expert operator; b) by automated identification of thresholds for different geometrical variables, such as elevation, slope or plan curvature, obtained from the cross-section profile. However, for the latter methods the thresholds are typically not of general application, because they depend on absolute values valid only for the local gully conditions from which they were derived. In this communication we evaluate an automated method for cross-section delimitation of ephemeral gullies and compare its performance with the visual assessment provided by five scientists experienced in gully erosion assessment, defining gully width, depth and area for a total of 60 ephemeral gully cross-sections obtained from field surveys conducted on agricultural plots in Navarra (Spain). The automated method depends only on the calculation of a simple geometrical measurement, the bank trapezoid area, for every point of each gully bank. This rectangle trapezoid (right-angled trapezoid) is defined by the elevation of a given point, the minimum elevation and the extremes of the cross-section. The gully limit for each bank is determined by the point in the bank with the maximum trapezoid area. The comparison of the estimates among the different expert operators showed large variation coefficients (up to 70%) in a number of cross-sections, larger for cross-section width and area and smaller for cross-section depth. The automated method produced results comparable to those obtained by the experts and was the procedure with the highest average correlation with the rest of the methods for the three dimensional parameters. The errors of the automated method relative to the average estimate of the experts were occasionally high (up to 40%), in line with the variability found among experts. These errors showed no strong systematic component and approximately followed a normal distribution, although they were slightly biased towards overestimation for the depth and area parameters. In conclusion, this study shows that there is no single definition of gully limits, even among gully experts, where large variability was found. The bank trapezoid method was found to be an automated, easy-to-use (readily implementable in a basic Excel spreadsheet or in programming scripts), threshold-independent procedure that determines gully limits consistently and similarly to expert-derived estimates. Gully width and area calculations were more prone to errors than gully depth, which was the least sensitive parameter.
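
    One plausible reading of the bank trapezoid construction, sketched below for a synthetic V-shaped cross-section, takes the trapezoid's parallel vertical sides as the heights of the bank extreme and of the candidate point above the channel minimum, with the horizontal distance between them as its width; the bank limit is then the point of maximum area. This is an illustrative reconstruction under that assumption, not the authors' code:

    ```python
    import numpy as np

    def bank_limit(x, z):
        """Index of the gully limit on one bank (maximum right-trapezoid area).

        x, z are ordered from the bank's outer extreme towards the thalweg.
        Parallel sides: (z[0] - z_min) and (z[i] - z_min); width: |x[i] - x[0]|.
        """
        z_min = z.min()
        areas = 0.5 * ((z[0] - z_min) + (z - z_min)) * np.abs(x - x[0])
        return int(np.argmax(areas))

    # Synthetic cross-section: flat field with a V-shaped gully, shoulders at 3.5 and 6.5 m
    x = np.linspace(0.0, 10.0, 101)
    z = np.where(np.abs(x - 5.0) < 1.5, np.abs(x - 5.0) / 1.5 - 1.0, 0.0)

    i_min = int(np.argmin(z))
    left = bank_limit(x[: i_min + 1], z[: i_min + 1])
    right = len(x) - 1 - bank_limit(x[: i_min - 1 : -1], z[: i_min - 1 : -1])
    print(f"limits at x = {x[left]:.1f} m and {x[right]:.1f} m; "
          f"width {x[right] - x[left]:.1f} m, depth {-z.min():.2f} m")
    ```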

  17. Definition of the supraclavicular and infraclavicular nodes: implications for three-dimensional CT-based conformal radiation therapy.

    PubMed

    Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J

    2001-11-01

    To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. CT scans were used to study the location of the SCV and IFV nodal regions by using outlining of readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined by using similar methods. Radiation therapy doses to the SCV and IFV were then estimated by using traditional dose calculations and optimized planning. A repeated measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by using the 90% isodose surface was demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.

  18. Research on a New Method of Estimating the Potential Depth of Slope Failure Using the Airborne Electromagnetic Survey

    NASA Astrophysics Data System (ADS)

    Seto, Shuji; Takahara, Teruyoshi; Kinoshita, Atsuhiko; Mizuno, Hideaki; Kawato, Katsushi; Okumura, Minoru; Kageura, Ryouta

    2017-04-01

    In Japan, parts of Ontake volcano in 1984 and Kurikoma volcano in 2008 collapsed, causing large-scale sediment-related disasters. These disasters were not directly related to volcanic eruptions. We conducted case studies using airborne electromagnetic surveys to investigate slopes likely to produce landslides on such volcanoes. Airborne electromagnetic surveys are an effective exploration tool for investigating extreme environments that people cannot enter and for covering wide areas in a short time. The surveys were conducted using a helicopter carrying the survey instruments; this non-contact method acquires resistivity data by electromagnetic induction. In Japan, surveys were conducted of 15 active volcanoes where volcanic disasters could have serious social consequences. These cases focused on identifying slopes where landslides could occur; however, the potential depth of slope failure was not evaluated. Therefore, in this study, we propose a new method to determine the potential depth of slope failure. First, we categorized collapses into three types, the cap-rock type, the extended-collapse type, and the landslide type, on the basis of past collapse cases; we focused on cap-rock-type slopes and defined the collapse range based on topography and geological properties. Second, we analyzed the resistivity structure of collapsed cases with a differential filter and found that collapse occurred at the depth where resistivity changes abruptly. For other volcanoes, the failure depth can be estimated by extracting the parts where resistivity changes abruptly. In this study, we use three volcanoes as the main cases: Hokkaido Komagatake, Asama volcano, and Ontake volcano.
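
    The differential-filter step, picking the depth at which resistivity changes most abruptly, has a very small core. The sketch below applies a vertical gradient of log-resistivity to a hypothetical one-dimensional sounding (invented layering, not survey data from the study):

    ```python
    import numpy as np

    def failure_depth(depths_m, resistivity_ohm_m):
        """Depth at which log-resistivity changes fastest (candidate slip surface)."""
        log_rho = np.log10(resistivity_ohm_m)
        grad = np.gradient(log_rho, depths_m)        # differential filter
        return depths_m[np.argmax(np.abs(grad))]

    # Hypothetical sounding: conductive altered layer under a resistive cap rock
    z = np.linspace(0.0, 100.0, 201)
    rho = np.where(z < 35.0, 800.0, 50.0) * (1.0 + 0.05 * np.sin(z / 7.0))
    print(f"estimated failure depth: {failure_depth(z, rho):.1f} m")
    ```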

  19. Depth interval estimates from motion parallax and binocular disparity beyond interaction space.

    PubMed

    Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G

    2011-01-01

    Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.

  20. Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods

    NASA Astrophysics Data System (ADS)

    Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.

    2017-12-01

    The Malombe Basin is part of the Malawi Rift, which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70 km from north to south and 35 km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand the displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was acquired across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data are used to understand the topographic expression of the fault scarps. Estimates of the minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3 km for the western border fault; the minimum displacement for the eastern border fault is 740 m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; we therefore expect to constrain the basin depth more accurately by integrating the gravity profiles. Through more accurate basement depth estimates we also gain more accurate displacement estimates for the basin's faults. The improved depth estimates serve as a proxy for the viability of hydrocarbon exploration efforts in the region, and the improved displacement estimates provide a better understanding of extension accommodation within the Malawi Rift.

  1. A least-squares minimisation approach to depth determination from numerical second horizontal self-potential anomalies

    NASA Astrophysics Data System (ADS)

    Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed

    2009-06-01

    This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.
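
    The reduction of depth determination to a root-finding problem f(z) = 0 can be illustrated in the simplest setting. The sketch below uses the full SP anomaly of a polarized sphere (polarization angle 90°) and the ratio of the anomaly at a symmetric offset s to its value over the source, rather than the paper's second horizontal derivative anomalies, so it illustrates the idea only:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def sp_sphere(x, z, K=-100.0):
        """SP anomaly of a polarized sphere at depth z (polarization angle 90 deg)."""
        return K * z / (x**2 + z**2) ** 1.5

    s, z_true = 5.0, 8.0                       # offset (m) and true depth (m)
    ratio_obs = sp_sphere(s, z_true) / sp_sphere(0.0, z_true)

    # Depth recovery: f(z) = 0 where f compares predicted and observed ratios
    f = lambda z: z**3 / (s**2 + z**2) ** 1.5 - ratio_obs
    print(f"estimated depth: {brentq(f, 0.1, 100.0):.2f} m")
    ```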

  2. A mixture model for robust registration in Kinect sensor

    NASA Astrophysics Data System (ADS)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images within a nonparametric framework. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which also estimates the variance of the prior model and is thereby able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  3. Assessing New and Old Methods in Paleomagnetic Paleothermometry: A Test Case at Mt. St. Helens, USA

    NASA Astrophysics Data System (ADS)

    Bowles, J. A.; Gerzich, D.; Jackson, M. J.

    2017-12-01

    Paleomagnetic data can be used to estimate deposit temperatures (Tdep) of pyroclastic density currents (PDCs). The typical method is to thermally demagnetize oriented lithic clasts incorporated into the PDC. If Tdep is less than the maximum Curie temperature (Tc), the clast is partially remagnetized in the PDC, and the unblocking temperature (Tub) at which this remagnetization is removed is an estimate of Tdep. In principle, juvenile clasts can also be used, and Tub-max is taken as the minimum Tdep. This all assumes blocking (Tb) and unblocking temperatures are equivalent and that the blocking spectrum remains constant through time. Recent evidence shows that Tc in many titanomagnetites is a strong function of thermal history due to a crystal-chemical reordering process. We therefore undertake a study designed to test some of these assumptions and to assess the extent to which the method may be biased by a Tb spectrum that shifts to higher T during cooling. We also explore a new magnetic technique that relies only on stratigraphic variations in Tc. Samples are from the May 18, 1980 PDCs at Mt. St. Helens, USA. Direct temperature measurements of the deposits were 297 - 367°C. At sites with oriented lithics, standard methods provide a Tdep range that overlaps with measured temperatures, but is systematically higher by a few 10s of °C. By contrast, pumice clasts all give Tdep_min estimates that greatly exceed lithic estimates and measured temperatures. We attribute this overestimate to two causes: 1) Tc and Tub systematically increase with depth as a result of the reordering process. This results in Tdep_min estimates that vary by 50°C and increase with depth. 2) MSH pumice is multi-domain, where Tub > Tb, resulting in a large overestimate in Tdep. At 5 sites, stratigraphic variations in Tc were conservatively interpreted in terms of Tdep as <300°C or >300°C. More sophisticated modeling of the time-temperature-depth evolution of Tc allows us to place tighter constraints on some deposits, and our preliminary interpretation suggests that PDC pulses became successively hotter throughout the day. This new method allows us to evaluate subtle temporal/spatial variabilities that may not be evident from direct measurements made at the surface. It also allows Tdep estimates to be made on PDCs where no lithic clasts are present.

  4. Microtremors study applying the SPAC method in Colima state, Mexico.

    NASA Astrophysics Data System (ADS)

    Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.

    2007-05-01

    One of the main parts of seismic risk studies is determining the site effect. This can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated, although the predominant period by itself cannot represent the site effect over a wide range of frequencies and does not provide information on the stratigraphy. The SPAC method (Spatial Auto-Correlation Method, Aki 1957), on the other hand, is useful for estimating the stratigraphy of the site. It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. Through computation of the spatial autocorrelation coefficient, the Rayleigh wave dispersion curve can be derived. Finally, the stratigraphic model (thickness, S- and P-wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model, which is modified several times until the theoretical curve fits the observations. This method requires a minimum of three stations at which the microtremors are observed simultaneously. We applied the SPAC method at six sites in Colima state, Mexico: Santa Barbara, Cerro de Ortega, Tecoman, Manzanillo and two in Colima city. In total, 16 arrays were deployed using equilateral triangles with different apertures, with a minimum of 5 m and a maximum of 60 m. For recording microtremors we used short-period (5 seconds) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We could estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Barbara site the exploration depth was about 30 m, for Tecoman 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city with a depth of around 73 m. The S-wave velocities fluctuate between 230 m/s and 420 m/s for the most superficial layer, meaning that, in general, the most superficial layers are quite competent. The superficial layer with the smallest S-wave velocity was observed in Tecoman, while that with the largest S-wave velocity was observed in Cerro de Ortega. Our estimates are consistent with down-hole velocity records obtained in Santa Barbara by previous studies.
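
    At the heart of SPAC is the relation ρ(f, r) = J0(2πfr/c(f)) between the azimuthally averaged correlation coefficient at station separation r and the Rayleigh phase velocity c(f). A minimal inversion of that relation for a single frequency, with assumed numbers, looks like this:

    ```python
    import numpy as np
    from scipy.special import j0
    from scipy.optimize import brentq

    def phase_velocity(freq_hz, spac_coeff, radius_m, c_min=50.0, c_max=3000.0):
        """Invert the SPAC relation rho(f, r) = J0(2*pi*f*r / c) for c(f).

        Valid on the first descending branch of J0 (spac_coeff close to 1).
        """
        def f(c):
            return j0(2.0 * np.pi * freq_hz * radius_m / c) - spac_coeff
        return brentq(f, c_min, c_max)

    # Hypothetical example: 5 m array aperture, observed coefficient 0.7 at 8 Hz
    r, freq, rho_obs = 5.0, 8.0, 0.7
    print(f"Rayleigh phase velocity at {freq} Hz: {phase_velocity(freq, rho_obs, r):.0f} m/s")
    ```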

  5. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    PubMed Central

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-01-01

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570

  6. Estimation of m.w.e (meter water equivalent) depth of the salt mine of Slanic Prahova, Romania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrica, B.; Margineanu, R.; Stoica, S.

    2010-11-24

    A new mobile detector was developed at IFIN-HH, Romania, for measuring the muon flux at the surface and underground. The measurements were performed in the salt mine of Slanic Prahova, Romania. The muon flux was determined for two different galleries of the Slanic mine at different depths. To test the stability of the method, measurements of the muon flux at the surface at different altitudes were also performed. Based on the results, the depths of the two galleries were established at 610 and 790 m.w.e., respectively.

  7. Basalt depths in lunar basins using impact craters as stratigraphic probes: Evaluation of a method using orbital geochemical data

    NASA Technical Reports Server (NTRS)

    Andre, C. G.

    1986-01-01

    A rare look at the chemical composition of subsurface stratigraphy in lunar basins filled with mare basalt is possible at fresh impact craters. Mg/Al maps from orbital X-ray fluorescence measurements of mare areas indicate chemical anomalies associated with materials ejected by large post-mare impacts. A method of constraining the wide-ranging estimates of mare basalt depths using the orbital Mg/Al data is evaluated and the results are compared to those of investigators using different indirect methods. Chemical anomalies at impact craters within the maria indicate five locations where higher Mg/Al basalt compositions may have been excavated from beneath the surface layer. At eight other locations, low Mg/Al anomalies suggest that basin-floor material was ejected. In these two cases, the stratigraphic layers are interpreted to occur at depths less than the calculated maximum depth of excavation. In five other cases, there is no apparent chemical change between the crater and the surrounding mare surface. This suggests homogeneous basalt compositions that extend down to the depths sampled, i.e., no anorthositic material that might represent the basin floor was exposed.

  8. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

    One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not necessarily a trivial thing. For instance, recorded observations of seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which vary widely and need to be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas assume the explosion has a standard depth of burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will do this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimation.
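
    Magnitude-based yield estimation typically inverts a relation of the form mb = a + b·log10(Y). The sketch below uses illustrative placeholder coefficients only; real values depend on the source region, emplacement material, depth of burial and network calibration:

    ```python
    def yield_from_mb(mb, a=4.45, b=0.75):
        """Invert mb = a + b*log10(Y) for yield Y in kilotons (illustrative a, b)."""
        return 10.0 ** ((mb - a) / b)

    for mb in (4.5, 5.1, 5.7):
        print(f"mb {mb:.1f} -> roughly {yield_from_mb(mb):.0f} kt")
    ```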

  9. On-Tree Mango Fruit Size Estimation Using RGB-D Images

    PubMed Central

    Wang, Zhenglin; Verma, Brijesh

    2017-01-01

    In-field mango fruit sizing is useful for estimation of fruit maturation and size distribution, informing the decision to harvest, harvest resourcing (e.g., tray insert sizes), and marketing. In-field machine vision imaging has been used for fruit count, but assessment of fruit size from images also requires estimation of the camera-to-fruit distance. Low-cost examples of three technologies for assessing camera-to-fruit distance were evaluated: an RGB-D (depth) camera, a stereo vision camera and a Time of Flight (ToF) laser rangefinder. The RGB-D camera was recommended on cost and performance, although it functioned poorly in direct sunlight. The RGB-D camera was calibrated, and depth information matched to the RGB image. To detect fruit, cascade detection with a histogram of oriented gradients (HOG) feature was used; then Otsu's method, followed by color thresholding, was applied in the CIE L*a*b* color space to remove background objects (leaves, branches etc.). A one-dimensional (1D) filter was developed to remove the fruit pedicels, and an ellipse fitting method was employed to identify well-separated fruit. Finally, fruit lineal dimensions were calculated using the RGB-D depth information, fruit image size and the thin lens formula. A root mean square error (RMSE) of 4.9 and 4.3 mm was achieved for estimated fruit length and width, respectively, relative to manual measurement, for which repeated human measures were characterized by a standard deviation of 1.2 mm. In conclusion, the RGB-D method for rapid in-field mango fruit size estimation is practical in terms of cost and ease of use, but cannot be used in direct intense sunshine. We believe this work represents the first practical implementation of machine vision fruit sizing in field, with practicality gauged in terms of cost and simplicity of operation. PMID:29182534
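
    The final sizing step, combining the RGB-D distance with the thin lens formula, reduces to a one-liner. The numbers below (pixel count, pixel pitch, focal length, depth) are invented for illustration:

    ```python
    def fruit_dimension_mm(n_pixels, pixel_pitch_mm, distance_mm, focal_length_mm):
        """Lineal fruit dimension via the thin-lens relation.

        object_size = image_size * (object_distance - f) / f, where image_size
        is the fruit's extent on the sensor (pixel count * pixel pitch).
        """
        image_size = n_pixels * pixel_pitch_mm
        return image_size * (distance_mm - focal_length_mm) / focal_length_mm

    # Hypothetical numbers: fruit spans 180 px, 3.1 um pitch, RGB-D depth 1.2 m
    print(f"estimated length: {fruit_dimension_mm(180, 0.0031, 1200.0, 5.6):.1f} mm")
    ```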

  10. Estimating snowpack density from Albedo measurement

    Treesearch

    James L. Smith; Howard G. Halverson

    1979-01-01

    Snow is a major source of water in the Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is a need for a method of estimating the water content of a...

  11. Hand pose estimation in depth image using CNN and random forest

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen

    2018-03-01

    Thanks to the availability of low-cost depth cameras like the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variation in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework which combines a CNN with a Random Forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joints' locations. Second, a Random Forest refines the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we get the rough joints' locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we map its location to each feature map (and image), crop features at each feature map (and image) around its location, and finally put the extracted features into the Random Forest for refinement. Experimentally, we evaluate our proposed method on the ICVL dataset and obtain a mean error of about 11 mm; our method also runs in real time on a desktop.

  12. Sublethal effects of catch-and-release fishing: measuring capture stress, fish impairment, and predation risk using a condition index

    USGS Publications Warehouse

    Campbell, Matthew D.; Patino, Reynaldo; Tolan, J.M.; Strauss, R.E.; Diamond, S.

    2009-01-01

    The sublethal effects of simulated capture of red snapper (Lutjanus campechanus) were analysed using physiological responses, condition indexing, and performance variables. Simulated catch-and-release fishing included combinations of depth of capture and thermocline exposure reflective of environmental conditions experienced in the Gulf of Mexico. Frequency of occurrence of barotrauma and lack of reflex response exhibited considerable individual variation. When combined into a single condition or impairment index, individual variation was reduced, and impairment showed significant increases as depth increased and with the addition of thermocline exposure. Performance variables, such as burst swimming speed (BSS) and simulated predator approach distance (AD), were also significantly different by depth. BSSs and predator ADs decreased with increasing depth, were lowest immediately after release, and were affected for up to 15 min, with longer recovery times required as depth increased. The impairment score developed was positively correlated with cortisol concentration and negatively correlated with both BSS and simulated predator AD. The impairment index proved to be an efficient method to estimate the overall impairment of red snapper in the laboratory simulations of capture and shows promise for use in field conditions, to estimate release mortality and vulnerability to predation.

  13. Algorithms for Learning Preferences for Sets of Objects

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical concepts to estimate quantitative measures of the user's preferences from training examples (preferred subsets) specified by the user. Once preferences have been learned, the system uses those preferences to select preferred subsets from new sets. The method was found to be viable when tested in computational experiments on menus, music playlists, and rover images. Contemplated future development efforts include further tests on more diverse sets and development of a sub-method for (a) estimating the parameter that represents the relative importance of diversity versus depth, and (b) incorporating background knowledge about the nature of quality functions, which are special functions that specify depth preferences for features.

  14. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.

  15. Estimated damage from the Cascadia Subduction Zone tsunami: A model comparison using fragility curves

    NASA Astrophysics Data System (ADS)

    Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.

    2012-12-01

    Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and an estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7, were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced concrete) and age, which were used as performance measures when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method are discussed. Ongoing work includes coupling the results of building damage and vulnerability to an economic input-output model. This model assesses trade between business sectors located inside and outside the inundation zone, and is used to measure the impact on the regional economy. Results highlight business sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard to civil buildings.
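
    Fragility curves of the kind used here are commonly parameterized as lognormal CDFs of a flow intensity measure. A minimal sketch with invented medians and dispersions (not the study's calibrated curves):

    ```python
    from scipy.stats import lognorm

    def damage_probability(flow_depth_m, median_m, beta):
        """Lognormal fragility curve: P(damage | depth) = Phi(ln(d / median) / beta)."""
        return lognorm.cdf(flow_depth_m, s=beta, scale=median_m)

    # Hypothetical curves for wood versus reinforced-concrete construction
    for depth in (0.5, 1.0, 2.0, 4.0):
        p_wood = damage_probability(depth, median_m=1.0, beta=0.6)
        p_rc = damage_probability(depth, median_m=3.0, beta=0.7)
        print(f"depth {depth:>4.1f} m: P(wood) = {p_wood:.2f}, P(RC) = {p_rc:.2f}")
    ```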

  16. Three-Dimensional Reconstruction from Single Image Based on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  17. Leucosome distribution in migmatitic paragneisses and orthogneisses: A record of self-organized melt migration and entrapment in a heterogeneous partially-molten crust

    NASA Astrophysics Data System (ADS)

    Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the southwestern region. The DBMS values are as shallow as 22 km in the southwestern Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
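
    For reference, the conventional centroid workflow that the modified method builds on estimates a top depth Zt from the slope of ln(P^1/2) versus wavenumber, a centroid depth Zc from the slope of ln(P^1/2 / k), and sets Zb = 2·Zc − Zt. The sketch below runs that arithmetic on a synthetic spectrum; the spectral model and band choices are schematic assumptions, so the recovery is only approximate:

    ```python
    import numpy as np

    def slope_depth(k, y):
        """Depth from the negative slope of a linear spectral segment."""
        slope, _ = np.polyfit(k, y, 1)
        return -slope

    # Synthetic radially averaged amplitude spectrum, Zt = 5 km, Zb = 30 km
    k = np.linspace(0.005, 1.0, 400)                  # wavenumber (rad/km)
    zt_true, zb_true = 5.0, 30.0
    amp = np.exp(-k * zt_true) * (1.0 - np.exp(-k * (zb_true - zt_true)))

    lo = k < 0.02                                     # centroid (low-k) band
    hi = (k > 0.2) & (k < 0.6)                        # top (higher-k) band
    zc = slope_depth(k[lo], np.log(amp[lo] / k[lo]))  # centroid depth
    zt = slope_depth(k[hi], np.log(amp[hi]))          # top depth
    print(f"Zt = {zt:.1f} km, Zc = {zc:.1f} km, Zb = 2*Zc - Zt = {2 * zc - zt:.1f} km")
    ```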

  18. Three-Dimensional Reconstruction from Single Image Based on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  19. Three-dimensional geostatistical inversion of flowmeter and pumping test data.

    PubMed

    Li, Wei; Englert, Andreas; Cirpka, Olaf A; Vereecken, Harry

    2008-01-01

    We jointly invert field data of flowmeter and multiple pumping tests in fully screened wells to estimate hydraulic conductivity using a geostatistical method. We use the steady-state drawdowns of pumping tests and the discharge profiles of flowmeter tests as our data in the inference. The discharge profiles need not be converted to absolute hydraulic conductivities. Consequently, we do not need measurements of depth-averaged hydraulic conductivity at well locations. The flowmeter profiles contain information about relative vertical distributions of hydraulic conductivity, while drawdown measurements of pumping tests provide information about horizontal fluctuation of the depth-averaged hydraulic conductivity. We apply the method to data obtained at the Krauthausen test site of the Forschungszentrum Jülich, Germany. The resulting estimate of our joint three-dimensional (3D) geostatistical inversion shows an improved 3D structure in comparison to the inversion of pumping test data only.

  20. Error Mitigation for Short-Depth Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero-noise limit, cancels successive powers of the noise perturbation by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
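
    The first scheme has an especially compact core: given expectation values measured at a few stretched noise levels, Richardson extrapolation evaluates the fitted polynomial at zero noise. The toy noise model below is an assumption for illustration:

    ```python
    import numpy as np

    def richardson_extrapolate(scales, expectations):
        """Zero-noise extrapolation from expectation values at stretched noise.

        Fits a polynomial in the noise scale factors and evaluates it at zero,
        cancelling the leading error terms (Richardson's deferred approach).
        """
        coeffs = np.polyfit(scales, expectations, deg=len(scales) - 1)
        return np.polyval(coeffs, 0.0)

    # Hypothetical: an observable with true value 1.0 measured under noise
    # scaled by c = 1, 1.5, 2 (e.g. by pulse stretching)
    scales = np.array([1.0, 1.5, 2.0])
    noisy = 1.0 - 0.12 * scales + 0.02 * scales**2     # toy noise model
    print(f"mitigated estimate: {richardson_extrapolate(scales, noisy):.3f}")
    ```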

  1. Characterization of highly multiplexed monolithic PET / gamma camera detector modules.

    PubMed

    Pierce, L A; Pedemonte, S; DeWitt, D; MacDonald, L; Hunter, W C J; Van Leemput, K; Miyaoka, R

    2018-03-29

    PET detectors use signal multiplexing to reduce the total number of electronics channels needed to cover a given area. Using measured thin-beam calibration data, we tested a principal-component-based multiplexing scheme for scintillation detectors. The highly multiplexed detector signal is no longer amenable to standard calibration methodologies. In this study we report results of a prototype multiplexing circuit, and present a new method for calibrating the detector module with multiplexed data. A [Formula: see text] mm³ LYSO scintillation crystal was affixed to a position-sensitive photomultiplier tube with [Formula: see text] position-outputs and one channel that is the sum of the other 64. The 65-channel signal was multiplexed in a resistive circuit, with 65:5 or 65:7 multiplexing. A 0.9 mm beam of 511 keV photons was scanned across the face of the crystal in a 1.52 mm grid pattern in order to characterize the detector response. New methods are developed to reject scattered events and perform depth estimation to characterize the detector response of the calibration data. Photon interaction position estimation of the testing data was performed using a Gaussian maximum-likelihood estimator, and the resolution and scatter-rejection capabilities of the detector were analyzed. We found that using a 7-channel multiplexing scheme (65:7 compression ratio) with 1.67 mm depth bins had the best performance, with a beam contour of 1.2 mm FWHM (from the 0.9 mm beam) near the center of the crystal and 1.9 mm FWHM near the edge of the crystal. The positioned events followed the expected Beer-Lambert depth distribution. The proposed calibration and positioning method exhibited a scattered-photon rejection rate that was a 55% improvement over the summed-signal energy-windowing method.
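
    Gaussian maximum-likelihood positioning against a calibrated mean/variance response grid, as used above, can be sketched generically (random stand-ins for the calibration maps, not the paper's detector data):

    ```python
    import numpy as np

    def ml_position(event, mean_maps, var_maps):
        """Index of the most likely interaction position for one event.

        event: (n_ch,) signal vector; mean_maps, var_maps: (n_pos, n_ch)
        calibrated per-position Gaussian response parameters.
        """
        ll = -0.5 * np.sum((event - mean_maps) ** 2 / var_maps + np.log(var_maps), axis=1)
        return int(np.argmax(ll))

    rng = np.random.default_rng(7)
    means = rng.uniform(0.2, 1.0, size=(100, 5))     # 100 grid positions, 5 channels
    varis = np.full((100, 5), 0.03**2)
    truth = 42
    event = means[truth] + rng.normal(0.0, 0.03, size=5)
    print(f"positioned at grid index {ml_position(event, means, varis)} (true: {truth})")
    ```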

  2. Estimating soil matric potential in Owens Valley, California

    USGS Publications Warehouse

    Sorenson, Stephen K.; Miller, R.F.; Welch, M.R.; Groeneveld, D.P.; Branson, F.A.

    1988-01-01

    Much of the floor of the Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence is dependent partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content, matric potential characteristics of the soils. Two methods were used to estimate soil matric potential in test sites in Owens Valley. The first was the filter-paper method, which uses water content of filter papers equilibrated to water content of soil samples taken with a hand auger. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base 10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1 m depths derived by using the hand auger and filter paper method and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the soil matric potential value derived by using the filter paper method could be obtained 90 to 95% of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures. (Lantz-PTT)
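
    The soil-water model referred to above is a log-linear fit: pF (the base-10 logarithm of matric potential) as a linear function of gravimetric water content. A sketch with invented calibration pairs, not the study's measurements:

    ```python
    import numpy as np

    # Invented filter-paper calibration pairs for one soil
    w = np.array([0.05, 0.08, 0.12, 0.16, 0.20, 0.25])    # gravimetric water content
    pf = np.array([4.6, 4.1, 3.5, 2.9, 2.4, 1.7])         # log10(matric potential), pF

    b, a = np.polyfit(w, pf, 1)                            # pF = a + b * w
    w_probe = 0.14                                         # e.g. from a neutron probe
    print(f"pF = {a:.2f} + ({b:.2f}) * w; at w = {w_probe}: pF = {a + b * w_probe:.2f}")
    ```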

  3. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of #3 microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to the displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) and Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
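
    Setting aside the joint multi-sensor smoothing, fitting a single attenuated omega-square spectrum is straightforward to sketch. The model below, Ω(f) = Ω0·e^(−πft*)/(1 + (f/fc)²) with t* = T/Q, is the standard form; all numbers are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def omega_square(f, omega0, fc, t_star):
        """Attenuated omega-square displacement spectrum; t* = travel time / Q."""
        return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

    # Synthetic spectrum: fc = 40 Hz, t* = 0.01 s, lognormal scatter
    f = np.linspace(1.0, 200.0, 300)
    rng = np.random.default_rng(5)
    obs = omega_square(f, 1.0e-6, 40.0, 0.01) * np.exp(0.05 * rng.standard_normal(f.size))

    (omega0, fc, t_star), _ = curve_fit(omega_square, f, obs, p0=[1.0e-6, 20.0, 0.005])
    beta = 3500.0                                    # assumed S-wave speed, m/s
    radius = 2.34 * beta / (2.0 * np.pi * fc)        # Brune (1970) source radius
    print(f"fc = {fc:.1f} Hz, t* = {t_star:.4f} s, source radius = {radius:.0f} m")
    ```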

  4. An improved method for predicting the evolution of the characteristic parameters of an information system

    NASA Astrophysics Data System (ADS)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that determines, from given values of entropy and of the error levels of the first and second kind, the allowable time horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system's development are expressed as increments of its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the depth of the prediction in time are estimated. The resulting values of the characteristics will be optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the depth of prediction, it is expedient to use the principle of maximum entropy.

  5. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data

    PubMed Central

    May, Stefan

    2018-01-01

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients. PMID:29695098

  6. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data.

    PubMed

    Pfitzner, Christian; May, Stefan; Nüchter, Andreas

    2018-04-24

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients.

  7. Temperature regime and water/hydroxyl behavior in the crater Boguslawsky on the Moon

    NASA Astrophysics Data System (ADS)

    Wöhler, Christian; Grumpe, Arne; Berezhnoy, Alexey A.; Feoktistova, Ekaterina A.; Evdokimova, Nadezhda A.; Kapoor, Karan; Shevchenko, Vladislav V.

    2017-03-01

    In this work we examine the lunar crater Boguslawsky as a typical region of the illuminated southern lunar highlands with regard to its temperature regime and the behavior of the depth of the water/hydroxyl-related spectral absorption band near 3 μm wavelength. For estimating the surface temperature, we compare two different methods, the first of which is based on raytracing and the simulation of heat diffusion in the upper regolith layer, while the second relies on the thermal equilibrium assumption and uses Moon Mineralogy Mapper (M³) spectral reflectance data for estimating the wavelength-dependent thermal emissivity. A method for taking into account the surface roughness in the estimation of the surface temperature is proposed. Both methods yield consistent results that coincide within a few K. By constructing a map of the maximal surface temperatures and comparing with the volatility temperatures of Hg, S, Na, Mg, and Ca, we determine regions in which these volatile species might form stable deposits. Based on M³ data of the crater Boguslawsky acquired at different times of the lunar day, it is found that the average OH absorption depth is higher in the morning than at midday. In the morning a dependence of the OH absorption depth on the local surface temperature is observed, which is no longer apparent at midday. This suggests that water/OH accumulates on the surface during the lunar night and largely disappears during the first half of the lunar day. We furthermore model the time dependence of the OH fraction remaining on the surface after having been exposed to the temporally integrated solar flux. In the morning, the OH absorption depth is not correlated with the remaining fraction of OH-containing species, indicating that the removal of water and/or OH-bearing species is mainly due to thermal evaporation after sunrise. In contrast, at midday the OH absorption depth increases with increasing remaining fraction of OH-containing species, suggesting photolysis by solar photons as the main mechanism for removal of the remaining OH-containing species later in the lunar day.
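
    The second temperature-estimation method above rests on a thermal equilibrium balance. A minimal sketch, assuming constant albedo and emissivity values that are illustrative rather than the paper's M³-derived ones:

```python
# Equilibrium surface temperature: absorbed solar flux = emitted flux,
# (1 - A) * S * cos(i) = eps * sigma * T^4.  Values are illustrative.
import numpy as np

SOLAR_CONSTANT = 1361.0   # W m^-2 at 1 AU
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(albedo, emissivity, incidence_deg):
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT * np.cos(np.radians(incidence_deg))
    return (max(absorbed, 0.0) / (emissivity * SIGMA)) ** 0.25

# High-latitude crater floor: large solar incidence angle, low temperature
print("%.0f K" % equilibrium_temperature(0.12, 0.95, incidence_deg=75.0))
```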

  8. [Research on Oil Sands Spectral Characteristics and Oil Content by Remote Sensing Estimation].

    PubMed

    You, Jin-feng; Xing, Li-xin; Pan, Jun; Shan, Xuan-long; Liang, Li-heng; Fan, Rui-xue

    2015-04-01

    Visible and near-infrared spectroscopy is a proven technology widely used in the identification and exploration of hydrocarbon energy sources, because its high spectral resolution resolves the diagnostic absorption features of hydrocarbon groups. In the reflectance spectra of oil sands samples, the most prominent hydrocarbon absorption bands lie at 1,740-1,780, 2,300-2,340 and 2,340-2,360 nm. These spectral ranges are dominated by various C-H overlapping overtones and combination bands. Absorption in the region from 1,700 to 1,730 nm is relatively weak or absent in the spectra of oil sands samples with low bitumen content; as oil content increases, an obvious hydrocarbon absorption appears in this range. The bitumen content is the critical parameter for oil sands reserves estimation. The absorption depth was used to depict the response intensity of the absorption bands controlled by first-order overtones and combinations of the various C-H stretching and bending fundamentals. According to the Pearson and partial correlations between oil content and the absorption depths dominated by hydrocarbon groups in the 1,740-1,780, 2,300-2,340 and 2,340-2,360 nm wavelength ranges, an association was established between the intensity of the spectral response and bitumen content, and unary linear regression (ULR) and partial least squares regression (PLSR) methods were then employed to model the relationship between the absorption depths attributed to the various C-H bonds and bitumen content. Two calibration equations were obtained: the ULR method modeled the relationship between the absorption depth near the 2,350 nm region and bitumen content, and the PLSR method modeled the relationship between the absorption depths in the 1,758, 2,310 and 2,350 nm regions and oil content. The calibration models showed good predictive ability and high robustness, and they can provide a scientific basis for the rapid estimation of the oil content of oil sands.
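
    The absorption (band) depth used above is a standard continuum-removal metric. A minimal sketch of computing it and of the ULR step; the wavelengths, shoulder positions, and sample values are illustrative assumptions, not the study's data:

```python
# Band depth 1 - R_band/R_continuum, with a linear continuum drawn
# between two shoulder wavelengths; then a unary linear regression (ULR)
# of oil content on the band depth.  All numbers are illustrative.
import numpy as np

def absorption_depth(wl_nm, refl, left, center, right):
    r_left, r_center, r_right = np.interp([left, center, right], wl_nm, refl)
    continuum = r_left + (r_right - r_left) * (center - left) / (right - left)
    return 1.0 - r_center / continuum

wl = np.array([2300.0, 2325.0, 2350.0, 2375.0, 2400.0])
refl = np.array([0.42, 0.40, 0.33, 0.39, 0.41])
print("band depth: %.3f" % absorption_depth(wl, refl, 2300.0, 2350.0, 2400.0))

depths = np.array([0.02, 0.05, 0.09, 0.14])   # band depths near 2,350 nm
oil = np.array([2.1, 5.3, 8.8, 13.9])         # bitumen content, wt%
slope, intercept = np.polyfit(depths, oil, 1)
print("predicted oil content at depth 0.07: %.1f wt%%" % (slope * 0.07 + intercept))
```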

  9. Calling depths of baleen whales from single sensor data: development of an autocorrelation method using multipath localization.

    PubMed

    Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M

    2013-09-01

    Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
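
    A minimal sketch of the autocorrelation step described above, under assumed (synthetic) signal parameters: a frequency-swept call plus its surface-reflected copy yields a secondary autocorrelation peak at the direct-reflected delay, which the multipath geometry then converts to depth (conversion omitted here):

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 2000.0                                  # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
call = chirp(t, f0=100.0, t1=1.0, f1=300.0)  # synthetic swept call

true_delay_s = 0.050                         # direct-reflected arrival delay
d = int(true_delay_s * fs)
received = call.copy()
received[d:] += 0.6 * call[:-d]              # overlapping reflection

ac = correlate(received, received, mode="full")[len(received) - 1:]
ac[:20] = 0.0                                # suppress the zero-lag main lobe
print("estimated delay: %.3f s" % (np.argmax(ac) / fs))
```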

  10. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    NASA Astrophysics Data System (ADS)

    Vatutin, Eduard

    2017-12-01

    The article analyzes the effectiveness of heuristic methods with limited depth-first search techniques on the test problem of finding the shortest path in a graph. It briefly describes the group of methods, based on limiting the number of branches of the combinatorial search tree and the depth of the analyzed subtrees, that are used to solve the problem. A methodology is described for comparing experimental data to estimate solution quality, based on computational experiments run on the BOINC platform with samples of graphs of pseudo-random structure and selected numbers of vertices and arcs. The experimental results identify the areas in which the selected subset of heuristic methods is preferable, depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective on the selected problem and significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.
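
    A minimal sketch of a depth- and branch-limited depth-first search of the kind this class of methods uses; the pruning rules (cheapest-edges-first, cycle check) are illustrative assumptions, not the authors' exact algorithm:

```python
# Depth-first search that expands at most `branch_limit` outgoing edges
# per node and descends at most `depth_limit` levels; returns the best
# goal path found within those limits, or (inf, None).
def limited_dfs(graph, node, goal, depth_limit, branch_limit, cost=0.0, path=None):
    path = (path or []) + [node]
    if node == goal:
        return cost, path
    if depth_limit == 0:
        return float("inf"), None
    best = (float("inf"), None)
    # branch limit: keep only the cheapest few outgoing edges
    for nxt, w in sorted(graph.get(node, []), key=lambda e: e[1])[:branch_limit]:
        if nxt in path:                      # avoid cycles
            continue
        cand = limited_dfs(graph, nxt, goal, depth_limit - 1, branch_limit,
                           cost + w, path)
        best = min(best, cand, key=lambda c: c[0])
    return best

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(limited_dfs(g, "a", "d", depth_limit=4, branch_limit=2))  # (3.0, [...])
```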

  11. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice cloud cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R² = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.

  12. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    NASA Astrophysics Data System (ADS)

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.

    2017-06-01

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice cloud cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R² = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
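
    A minimal sketch of a lookup-table retrieval of the kind described above: radiances are precomputed on a grid of optical depth and particle size, and the best-fitting grid point is selected. The forward model here is a toy stand-in, not a radiative-transfer code, and all values are assumptions:

```python
import numpy as np

def forward_model(tau, diam_um):
    """Toy 3-band downwelling radiance model (arbitrary units)."""
    return np.array([1.0 - np.exp(-tau), tau / (1.0 + diam_um / 30.0), 0.1 * tau])

taus = np.linspace(0.05, 2.6, 52)
diams = np.linspace(5.0, 60.0, 12)
lut = np.array([[forward_model(t, d) for d in diams] for t in taus])

# "Measured" radiances: forward model plus noise
measured = forward_model(1.2, 24.0) + np.random.default_rng(1).normal(scale=0.01, size=3)
cost = ((lut - measured) ** 2).sum(axis=-1)          # sum of squared residuals
i, j = np.unravel_index(np.argmin(cost), cost.shape)
print("tau=%.2f, diameter=%.0f um, class TIC%d"
      % (taus[i], diams[j], 1 if diams[j] <= 30.0 else 2))
```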

  13. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    DOE PAGES

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; ...

    2017-06-09

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice cloud cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R² = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.

  14. A Comprehensive Snow Density Model for Integrating Lidar-Derived Snow Depth Data into Spatial Snow Modeling

    NASA Astrophysics Data System (ADS)

    Marks, D. G.; Kormos, P.; Johnson, M.; Bormann, K. J.; Hedrick, A. R.; Havens, S.; Robertson, M.; Painter, T. H.

    2017-12-01

    Lidar-derived snow depths, when combined with modeled or estimated snow density, can provide reliable estimates of the distribution of SWE over large mountain areas. Application of this approach is transforming western snow hydrology. We present a comprehensive approach toward modeling bulk snow density that is reliable over a vast range of weather and snow conditions. The method is applied and evaluated over mountainous regions of California, Idaho, Oregon and Colorado in the western US. Simulated and measured snow density are compared at fourteen validation sites across the western US where measurements of snow mass (SWE) and depth are co-located. Fitting statistics for ten sites from three mountain catchments (two in Idaho, one in California) show an average Nash-Sutcliffe model efficiency coefficient of 0.83 and a mean bias of 4 kg m⁻³. Results illustrate issues associated with monitoring snow depth and SWE and show the effectiveness of the model, with a small mean bias across a range of snow and climate conditions in the west.
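
    The conversion this approach rests on is the identity SWE = depth × bulk density; a one-line check under assumed example values:

```python
# SWE in mm of water = depth (m) * bulk density (kg m^-3), since 1 kg of
# water spread over 1 m^2 is a 1 mm layer.  Values are illustrative.
def swe_mm(depth_m, density_kg_m3):
    return depth_m * density_kg_m3

print(swe_mm(2.4, 350.0), "mm")   # 2.4 m of snow at 350 kg m^-3 -> 840 mm SWE
```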

  15. High-speed autofocusing of a cell using diffraction pattern

    NASA Astrophysics Data System (ADS)

    Oku, Hiromasa; Ishikawa, Masatoshi; Theodorus; Hashimoto, Koichi

    2006-05-01

    This paper proposes a new autofocusing method for observing cells under a transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.

  16. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
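
    A hedged sketch of the Euler-deconvolution step named above: within a data window, Euler's homogeneity equation (x−x0)·∂T/∂x + (y−y0)·∂T/∂y + (z−z0)·∂T/∂z = N(B−T) is solved by least squares for the source position (x0, y0, z0) and background B, with the structural index N fixed. The demo field is a synthetic degree-one homogeneous source, not the Patagonia Mountains data:

```python
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, N):
    # Rearranged: x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
    A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + N * T
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B                     # z0 is the depth estimate

# Synthetic test: T = 1/r is homogeneous with structural index N = 1
src = np.array([10.0, 20.0, 5.0])            # true source, 5 km deep
gx, gy = np.meshgrid(np.linspace(0, 30, 8), np.linspace(0, 40, 8))
x, y, z = gx.ravel(), gy.ravel(), np.zeros(gx.size)
dx, dy, dz = x - src[0], y - src[1], z - src[2]
r = np.sqrt(dx**2 + dy**2 + dz**2)
T, dTdx, dTdy, dTdz = 1.0 / r, -dx / r**3, -dy / r**3, -dz / r**3
print(np.round(euler_window(x, y, z, T, dTdx, dTdy, dTdz, N=1), 3))
```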

  17. A new approach for estimating the Jupiter and Saturn gravity fields using Juno and Cassini measurements, trajectory estimation analysis, and a dynamical wind model optimization

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Durante, Daniele; Iess, Luciano; Kaspi, Yohai

    2017-04-01

    The ongoing Juno spacecraft measurements are improving our knowledge of Jupiter's gravity field. Similarly, the Cassini Grand Finale will improve the gravity estimate of Saturn. The analysis of the Juno and Cassini Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity fields of Jupiter and Saturn, additional information needs to be incorporated into the analysis, especially with regard to the planets' wind structures. In this work we propose a new iterative approach for the estimation of the Jupiter and Saturn gravity fields, using simulated measurements, a trajectory estimation model, and an adjoint-based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model is used to obtain the gravitational moments. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an optimization method, the likely penetration depth of the winds is computed and its uncertainty is evaluated. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an estimate of their uncertainties, to be used as a priori information for a new calculation of the gravity field. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that by using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the added information from the dynamical model, which includes the wind structure and its depth. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the gravity moments estimated from the Juno and Cassini radio science experiments.

  18. G-CNV: A GPU-Based Tool for Preparing Data to Detect CNVs with Read-Depth Methods.

    PubMed

    Manconi, Andrea; Manca, Emanuele; Moscatelli, Marco; Gnocchi, Matteo; Orro, Alessandro; Armano, Giuliano; Milanesi, Luciano

    2015-01-01

    Copy number variations (CNVs) are the most prevalent types of structural variations (SVs) in the human genome and are involved in a wide range of common human diseases. Different computational methods have been devised to detect this type of SVs and to study how they are implicated in human diseases. Recently, computational methods based on high-throughput sequencing (HTS) are increasingly used. The majority of these methods focus on mapping short-read sequences generated from a donor against a reference genome to detect signatures distinctive of CNVs. In particular, read-depth based methods detect CNVs by analyzing genomic regions with significantly different read-depth from the other ones. The pipeline analysis of these methods consists of four main stages: (i) data preparation, (ii) data normalization, (iii) CNV regions identification, and (iv) copy number estimation. However, available tools do not support most of the operations required at the first two stages of this pipeline. Typically, they start the analysis by building the read-depth signal from pre-processed alignments. Therefore, third-party tools must be used to perform most of the preliminary operations required to build the read-depth signal. These data-intensive operations can be efficiently parallelized on graphics processing units (GPUs). In this article, we present G-CNV, a GPU-based tool devised to perform the common operations required at the first two stages of the analysis pipeline. G-CNV is able to filter low-quality read sequences, to mask low-quality nucleotides, to remove adapter sequences, to remove duplicated read sequences, to map the short-reads, to resolve multiple mapping ambiguities, to build the read-depth signal, and to normalize it. G-CNV can be efficiently used as a third-party tool able to prepare data for the subsequent read-depth signal generation and analysis. Moreover, it can also be integrated in CNV detection tools to generate read-depth signals.
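
    A minimal sketch of the read-depth (RD) signal that the first two pipeline stages above prepare: mapped read positions are binned along the genome and counts are normalized to the genome-wide mean. Input here is a plain array of read start positions; a real pipeline would parse alignments instead, and all values are synthetic:

```python
import numpy as np

def read_depth_signal(read_starts, genome_length, bin_size=1000):
    edges = np.arange(0, genome_length + bin_size, bin_size)
    counts, _ = np.histogram(read_starts, bins=edges)
    return counts / counts.mean()            # normalized RD signal

rng = np.random.default_rng(0)
starts = rng.integers(0, 1_000_000, size=200_000)            # background reads
starts = np.concatenate([starts, rng.integers(400_000, 420_000, 8_000)])
rd = read_depth_signal(starts, 1_000_000)
print("candidate duplication bins:", np.where(rd > 1.5)[0])  # ~bins 400-419
```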

  19. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
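
    A hedged sketch of the spectral-decomposition idea described above: decompose the Green's function matrix by SVD and reconstruct the slip from only the first k modes, k being the truncation parameter to be chosen. The Green's function and data here are random placeholders, not the GEONET configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(60, 30))            # GPS displacements per unit slip
true_slip = np.zeros(30)
true_slip[10:15] = 1.0
d = G @ true_slip + rng.normal(scale=0.05, size=60)   # "observed" GPS data

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = 12                                    # number of modes retained
slip = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])          # truncated inverse
print(np.round(slip[8:17], 2))            # peaks over the true slip patch
```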

  20. A pose estimation method for unmanned ground vehicles in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Tamjidi, Amirhossein; Ye, Cang

    2012-06-01

    This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to the conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford campus LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset and the estimation error is ~1.9% of the path length.

  1. Characterizing the microcirculation of atopic dermatitis using angiographic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Byers, R. A.; Maiti, R.; Danby, S. G.; Pang, E. J.; Mitchell, B.; Carré, M. J.; Lewis, R.; Cork, M. J.; Matcher, S. J.

    2017-02-01

    Background and Aim: With inflammatory skin conditions such as atopic dermatitis (AD), epidermal thickness is mediated by both pathological hyperplasia and atrophy such as that resulting from corticosteroid treatment. Such changes are likely to influence the depth and shape of the underlying microcirculation. Optical coherence tomography (OCT) provides a non-invasive view into the tissue; however, structural measures of epidermal thickness are made challenging by the lack of a delineated dermal-epidermal junction in AD patients. Instead, angiographic extensions to OCT may allow for direct measurement of vascular depth, potentially presenting a more robust method of estimating the degree of epidermal thickening. Methods and results: To investigate microcirculatory changes within AD patients, volumes of angiographic OCT data were collected from 5 healthy volunteers and compared to those of 5 AD patients. Test sites included the cubital and popliteal fossa, which are commonly affected by AD. Measurements of the capillary loop and superficial arteriolar plexus (SAP) depth were acquired and used to estimate the lower and upper bounds of the undulating basement membrane of the dermal-epidermal junction. Furthermore, quantitative parameters such as vessel density and diameter were derived from each dataset and compared between groups. Capillary loop depth increased slightly for AD patients at the popliteal fossa, and the SAP was found to be measurably deeper in AD patients at both sites, likely due to localized epidermal hyperplasia.

  2. In-Depth Analysis of the JACK Model.

    DOT National Transportation Integrated Search

    2009-04-30

    Recently, as part of a comprehensive analysis of budget and funding options, a TxDOT special task force has examined the agency's current financial forecasting methods and has developed a model designed to estimate future State Highway Fund rev...

  3. Depth inpainting by tensor voting.

    PubMed

    Kulkarni, Mandar; Rajagopalan, Ambasamudram N

    2013-06-01

    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.

  4. Fracture characterization and fracture-permeability estimation at the underground research laboratory in southeastern Manitoba, Canada

    USGS Publications Warehouse

    Paillet, Frederick L.

    1988-01-01

    Various conventional geophysical well logs were obtained in conjunction with acoustic tube-wave amplitude and experimental heat-pulse flowmeter measurements in two deep boreholes in granitic rocks on the Canadian shield in southeastern Manitoba. The objective of this study is the development of measurement techniques and data processing methods for characterization of rock volumes that might be suitable for hosting a nuclear waste repository. One borehole, WRA1, intersected several major fracture zones, and was suitable for testing quantitative permeability estimation methods. The other borehole, URL13, appeared to intersect almost no permeable fractures; it was suitable for testing methods for the characterization of rocks of very small permeability and uniform thermo-mechanical properties in a potential repository horizon. Epithermal neutron, acoustic transit time, and single-point resistance logs provided useful, qualitative indications of fractures in the extensively fractured borehole, WRA1. A single-point log indicates both weathering and the degree of opening of a fracture-borehole intersection. All logs indicate the large intervals of mechanically and geochemically uniform, unfractured granite below depths of 300 m in the relatively unfractured borehole, URL13. Some indications of minor fracturing were identified in that borehole, with one possible fracture at a depth of about 914 m producing a major acoustic waveform anomaly. Comparison of acoustic tube-wave attenuation with models of tube-wave attenuation in infinite fractures of given aperture provides permeability estimates ranging from equivalent single-fracture apertures of less than 0.01 mm to apertures of > 0.5 mm. One possible fracture anomaly in borehole URL13 at a depth of about 914 m corresponds with a thin mafic dike on the core, where unusually large acoustic contrast may have produced the observed waveform anomaly. No indications of naturally occurring flow existed in borehole URL13; however, flowmeter measurements indicated flow at < 0.05 L/min from the upper fracture zones in borehole WRA1 to deeper fractures at depths below 800 m. (Author's abstract)

  5. Structural Analysis of Ogygis Rupes Lobate Scarp on Mars.

    NASA Astrophysics Data System (ADS)

    Herrero-Gil, A.; Ruiz, J.; Romeo, I.; Egea-González, I.

    2016-12-01

    Ogygis Rupes is a 200 km long lobate scarp, striking N30°E, with approximately 2 km of maximum structural relief. It is located in Aonia Terra, in the southern hemisphere of Mars near the northeast margin of the Argyre impact basin. Similar to other large lobate scarps on Mercury or Mars, it shows a roughly arcuate to linear form and an asymmetric cross section with a steeply rising scarp face and a gently declining back scarp. This asymmetry suggests that Ogygis Rupes is the topographic expression of an ESE-vergent thrust fault. Using Mars Orbiter Laser Altimeter data and the available Mars imagery, we have measured the horizontal shortening on impact craters cross-cut by this lobate scarp to obtain a minimum value for the horizontal offset of the underlying fault. Two complementary methods were used to estimate fault geometry parameters such as fault displacement, dip angle and depth of faulting: (i) analyzing topographic profiles together with the horizontal shortening estimates from cross-cut craters to create balanced cross sections on the basis of thrust fault-propagation folding [1]; (ii) using a forward mechanical dislocation method [2], which predicts fault geometry by comparing model outputs with the real topography. The significant size of the fault underlying this lobate scarp suggests that its detachment is located at a main rheological change, for which we have obtained a preliminary depth value of around 30 kilometers by the methods listed above. Estimates of the depth of faulting in similar lobate scarps [3] have been associated with the depth of the brittle-ductile transition. [1] Suppe (1983), Am. J. Sci., 283, 648-721; Seeber and Sorlien (2000), Geol. Soc. Am. Bull., 112, 1067-1079. [2] Toda et al. (1998) JGR, 103, 24543-24565. [3] e.g. Schultz and Watters (2001) Geophys. Res. Lett., 28, 4659-4662; Ruiz et al. (2008) EPSL, 270, 1-12; Egea-Gonzalez et al. (2012) PSS, 60, 193-198; Mueller et al. (2014) EPSL, 408, 100-109.

  6. Depth of interaction decoding of a continuous crystal detector module.

    PubMed

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-04-21

    We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.

  7. Assessment of surface runoff depth changes in Sărăţel River basin, Romania using GIS techniques

    NASA Astrophysics Data System (ADS)

    Romulus, Costache; Iulia, Fontanine; Ema, Corodescu

    2014-09-01

    S\\varǎţel River basin, which is located in Curvature Subcarpahian area, has been facing an obvious increase in frequency of hydrological risk phenomena, associated with torrential events, during the last years. This trend is highly related to the increase in frequency of the extreme climatic phenomena and to the land use changes. The present study is aimed to highlight the spatial and quantitative changes occurred in surface runoff depth in S\\varǎţel catchment, between 1990-2006. This purpose was reached by estimating the surface runoff depth assignable to the average annual rainfall, by means of SCS-CN method, which was integrated into the GIS environment through the ArcCN-Runoff extension, for ArcGIS 10.1. In order to compute the surface runoff depth, by CN method, the land cover and the hydrological soil classes were introduced as vector (polygon data), while the curve number and the average annual rainfall were introduced as tables. After spatially modeling the surface runoff depth for the two years, the 1990 raster dataset was subtracted from the 2006 raster dataset, in order to highlight the changes in surface runoff depth.

  8. Clinical usefulness of endoscopic ultrasonography for the evaluation of ulcerative colitis-associated tumors

    PubMed Central

    Kobayashi, Kiyonori; Kawagishi, Kana; Ooka, Shouhei; Yokoyama, Kaoru; Sada, Miwa; Koizumi, Wasaburo

    2015-01-01

    AIM: To evaluate the clinical usefulness of endoscopic ultrasonography (EUS) for the diagnosis of the invasion depth of ulcerative colitis-associated tumors. METHODS: The study group comprised 13 patients with 16 ulcerative colitis (UC)-associated tumors for which the depth of invasion was preoperatively estimated by EUS. The lesions were then resected endoscopically or by surgical colectomy and were examined histopathologically. The mean age of the subjects was 48.2 ± 17.1 years, and the mean duration of UC was 15.8 ± 8.3 years. Two lesions were treated by endoscopic resection and the other 14 lesions by surgical colectomy. The depth of invasion of UC-associated tumors was estimated by EUS using an ultrasonic probe and was evaluated on the basis of the deepest layer with narrowing or rupture of the colonic wall. RESULTS: The diagnosis of UC-associated tumors by EUS was carcinoma for 13 lesions and dysplasia for 3 lesions. The invasion depth of the carcinomas was intramucosal for 8 lesions, submucosal for 2, the muscularis propria for 2, and subserosal for 1. Eleven (69%) of the 16 lesions arose in the rectum. The macroscopic appearance was the laterally spreading tumor-non-granular type for 4 lesions, sessile type for 4, laterally spreading tumor-granular type for 3, semi-pedunculated type (Isp) for 2, type 1 for 2, and type 3 for 1. The depth of invasion was correctly estimated by EUS for 15 lesions (94%) but was misdiagnosed as intramucosal for 1 carcinoma with high-grade submucosal invasion. The 2 lesions treated by endoscopic resection were intramucosal carcinoma and dysplasia, and both were diagnosed as intramucosal lesions by EUS. CONCLUSION: EUS provides a good estimation of the invasion depth of UC-associated tumors and may thus facilitate the selection of treatment. PMID:25759538

  9. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Baumgartner, Robert

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  10. Multiscale analysis of potential fields by a ridge consistency criterion: the reconstruction of the Bishop basement

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Cascone, L.

    2012-01-01

    We use a multiscale approach as a semi-automated tool for interpreting potential fields. The depth to the source and the structural index are estimated in two steps: first, the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes); and second, the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimates on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, in order to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of upward continuation, and so is a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.

  11. Precipitable water vapor and 212 GHz atmospheric optical depth correlation at El Leoncito site

    NASA Astrophysics Data System (ADS)

    Cassiano, Marta M.; Cornejo Espinoza, Deysi; Raulin, Jean-Pierre; Giménez de Castro, Carlos G.

    2018-03-01

    Time series of precipitable water vapor (PWV) and 212 GHz atmospheric optical depth were obtained in CASLEO (Complejo Astronómico El Leoncito), at El Leoncito site, Argentinean Andes, for the period of 2011-2013. The 212 GHz atmospheric optical depth data were derived from measurements by the Solar Submillimeter Telescope (SST) and the PWV data were obtained by the AERONET CASLEO station. The correlation between PWV and 212 GHz optical depth was analyzed for the whole period, when both parameters were simultaneously available. A very significant correlation was observed. Similar correlation was found when data were analyzed year by year. The results indicate that the correlation of PWV versus 212 GHz optical depth could be used as an indirect estimation method for PWV, when direct measurements are not available.
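
    A minimal sketch of the indirect estimation suggested above: regress PWV on the 212 GHz optical depth over the overlap period, then predict PWV from optical depth alone. All values are synthetic placeholders, not CASLEO data:

```python
import numpy as np

tau_212 = np.array([0.10, 0.15, 0.22, 0.30, 0.41])   # 212 GHz optical depth
pwv_mm = np.array([1.1, 1.8, 2.7, 3.8, 5.2])         # coincident PWV, mm
slope, intercept = np.polyfit(tau_212, pwv_mm, 1)

tau_new = 0.25                                        # SST-derived value only
print("estimated PWV: %.1f mm" % (slope * tau_new + intercept))
```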

  12. A quantile count model of water depth constraints on Cape Sable seaside sparrows

    USGS Publications Warehouse

    Cade, B.S.; Dong, Q.

    2008-01-01

    1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were ≥ 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts ≥ 1, conditions where the negative effect of water depth was readily apparent and was fitted better by the quantile count model.
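
    A hedged sketch of a quantile count fit in the spirit of the model described above, using the jittering device that makes quantiles of a discrete response continuous; the data are synthetic and the specification is a simplified linear one, not the authors' multiplicative model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
depth_cm = rng.uniform(0.0, 30.0, 400)               # water depth covariate
counts = rng.poisson(np.exp(1.0 - 0.05 * depth_cm))  # sparrow-like counts

z = counts + rng.uniform(size=counts.size)    # jitter: quantiles now continuous
X = sm.add_constant(depth_cm)
res = sm.QuantReg(z, X).fit(q=0.9)            # upper 10th percentile of counts
print(res.params)                             # negative slope vs water depth
```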

  13. Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.

  14. Estimation of potential bridge scour at bridges on state routes in South Dakota, 2003-07

    USGS Publications Warehouse

    Thompson, Ryan F.; Fosness, Ryan L.

    2008-01-01

    Flowing water can erode (scour) soils and cause structural failure of a bridge by exposing or undermining bridge foundations (abutments and piers). A rapid scour-estimation technique, known as the level-1.5 method and developed by the U.S. Geological Survey, was used to evaluate potential scour at bridges in South Dakota in a study conducted in cooperation with the South Dakota Department of Transportation. This method was used during 2003-07 to estimate scour for the 100-year and 500-year floods at 734 selected bridges managed by the South Dakota Department of Transportation on State routes in South Dakota. Scour depths and other parameters estimated from the level-1.5 analyses are presented in tabular form. Estimates of potential contraction scour at the 734 bridges ranged from 0 to 33.9 feet for the 100-year flood and from 0 to 35.8 feet for the 500-year flood. Abutment scour ranged from 0 to 36.9 feet for the 100-year flood and from 0 to 45.9 feet for the 500-year flood. Pier scour ranged from 0 to 30.8 feet for the 100-year flood and from 0 to 30.7 feet for the 500-year flood. The scour depths estimated by using the level-1.5 method can be used by the South Dakota Department of Transportation and others to identify bridges that may be susceptible to scour. Scour at 19 selected bridges also was estimated by using the level-2 method. Estimates of contraction, abutment, and pier scour calculated by using the level-1.5 and level-2 methods are presented in tabular and graphical formats. Compared to level-2 scour estimates, the level-1.5 method generally overestimated scour as designed, or in a few cases slightly underestimated scour. Results of the level-2 analyses were used to develop regression equations for change in head and average velocity through the bridge opening. These regression equations derived from South Dakota data are compared to similar regression equations derived from Montana and Colorado data. Future level-1.5 scour investigations in South Dakota may benefit from the use of these South Dakota-specific regression equations for estimating change in stream head and average velocity at the bridge.

  15. Non-contact monitoring during laser surgery by measuring the incision depth with air-coupled transducers

    NASA Astrophysics Data System (ADS)

    Oyaga Landa, Francisco Javier; Deán-Ben, Xosé Luís.; Montero de Espinosa, Francisco; Razansky, Daniel

    2017-03-01

    Lack of haptic feedback during laser surgery hampers controlling the incision depth, leading to a high risk of undesired tissue damage. Here we present a new feedback sensing method that accomplishes non-contact real-time monitoring of laser ablation procedures by detecting shock waves emanating from the ablation spot with air-coupled transducers. Experiments in soft and hard tissue samples attained high reproducibility in real-time depth estimation of the laser-induced cuts. The advantages derived from the non-contact nature of the suggested monitoring approach are expected to greatly promote the general applicability of laser-based surgeries.

  16. Real-time depth camera tracking with geometrically stable weight algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming

    2017-03-01

    We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized on the GPU and incorporated into a current real-time depth camera tracking system seamlessly. Second, we compare the state-of-the-art weighting algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler Shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D database benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.

  17. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
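
    A hedged Python sketch of the workflow described above (the paper itself provides R code): fit several candidate probability density functions to raw habitat-use data by maximum likelihood and select among them with an information criterion. The data and candidate set here are illustrative:

```python
import numpy as np
from scipy import stats

depths_m = np.array([0.4, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.9])  # example

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depths_m, floc=0)               # ML fit, location fixed
    loglik = np.sum(dist.logpdf(depths_m, *params))
    k = len(params) - 1                               # free parameters
    print("%-8s AIC = %.1f" % (name, 2 * k - 2 * loglik))
# the lowest-AIC distribution, normalized, becomes the HSC curve
```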

  18. Estimating Oceanic Primary Production Using Vertical Irradiance and Chlorophyll Profiles from Ocean Gliders in the North Atlantic.

    PubMed

    Hemsley, Victoria S; Smyth, Timothy J; Martin, Adrian P; Frajka-Williams, Eleanor; Thompson, Andrew F; Damerell, Gillian; Painter, Stuart C

    2015-10-06

    An autonomous underwater vehicle (Seaglider) has been used to estimate marine primary production (PP) using a combination of irradiance and fluorescence vertical profiles. This method provides estimates for depth-resolved and temporally evolving PP on fine spatial scales in the absence of ship-based calibrations. We describe techniques to correct for known issues associated with long autonomous deployments such as sensor calibration drift and fluorescence quenching. Comparisons were made between the Seaglider, stable isotope (¹³C), and satellite estimates of PP. The Seaglider-based PP estimates were comparable to both satellite estimates and stable isotope measurements.

  19. Geophysical surveying in the Sacramento Delta for earthquake hazard assessment and measurement of peat thickness

    NASA Astrophysics Data System (ADS)

    Craig, M. S.; Kundariya, N.; Hayashi, K.; Srinivas, A.; Burnham, M.; Oikawa, P.

    2017-12-01

    Near surface geophysical surveys were conducted in the Sacramento-San Joaquin Delta for earthquake hazard assessment and to provide estimates of peat thickness for use in carbon models. Delta islands have experienced 3-8 meters of subsidence during the past century due to oxidation and compaction of peat. Projected sea level rise over the next century will contribute to an ongoing landward shift of the freshwater-saltwater interface, and increase the risk of flooding due to levee failure or overtopping. Seismic shear wave velocity (VS) was measured in the upper 30 meters to determine Uniform Building Code (UBC)/National Earthquake Hazard Reduction Program (NEHRP) site class. Both seismic and ground penetrating radar (GPR) methods were employed to estimate peat thickness. Seismic surface wave surveys were conducted at eight sites on three islands and GPR surveys were conducted at two of the sites. Combined with sites surveyed in 2015, the new work brings the total number of sites surveyed in the Delta to twenty. Soil boreholes were made at several locations using a hand auger, and peat thickness ranged from 2.1 to 5.5 meters. Seismic surveys were conducted using the multichannel analysis of surface waves (MASW) method and the microtremor array method (MAM). On Bouldin Island, VS of the surficial peat layer was 32 m/s at a site with pure peat and 63 m/s at a site with peat of higher clay and silt content. Velocities at these sites reached a similar value, about 125 m/s, at a depth of 10 m. GPR surveys were performed at two sites on Sherman Island using 100 MHz antennas, and indicated the base of the peat layer at a depth of about 4 meters, consistent with nearby auger holes. The results of this work include VS depth profiles and UBC/NEHRP site classifications. Seismic and GPR methods may be used in a complementary fashion to estimate peat thickness. The seismic surface wave method is relatively robust and more effective than GPR in many areas with high clay content or where surface sediments have been disturbed by human activities. GPR does, however, provide significantly higher resolution and better depth control in areas with suitable recording conditions.

  20. Shear velocity estimates on the inner shelf off Grays Harbor, Washington, USA

    USGS Publications Warehouse

    Sherwood, C.R.; Lacy, J.R.; Voulgaris, G.

    2006-01-01

    Shear velocity was estimated from current measurements near the bottom off Grays Harbor, Washington between May 4 and June 6, 2001 under mostly wave-dominated conditions. A downward-looking pulse-coherent acoustic Doppler profiler (PCADP) and two acoustic-Doppler velocimeters (field version; ADVFs) were deployed on a tripod at 9-m water depth. Measurements from these instruments were used to estimate shear velocity with (1) a modified eddy-correlation (EC) technique, (2) the log-profile (LP) method, and (3) a dissipation-rate method. Although values produced by the three methods agreed reasonably well (within their broad ranges of uncertainty), there were important systematic differences. Estimates from the EC method were generally lowest, followed by those from the inertial-dissipation method. The LP method produced the highest values and the greatest scatter. We show that these results are consistent with boundary-layer theory when sediment-induced stratification is present. The EC method provides the most fundamental estimate of kinematic stress near the bottom, and stratification causes the LP method to overestimate bottom stress. These results remind us that the methods are not equivalent and that comparison among sites and with models should be made carefully. © 2006 Elsevier Ltd. All rights reserved.
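
    A minimal sketch of the log-profile (LP) estimate compared above: fit the law-of-the-wall u(z) = (u*/κ)·ln(z/z0) to velocities measured at several heights. The profile values are synthetic, not the Grays Harbor data:

```python
import numpy as np

KAPPA = 0.41                                  # von Karman constant
z_m = np.array([0.2, 0.4, 0.8, 1.6])          # heights above bed, m
u_ms = np.array([0.28, 0.33, 0.38, 0.43])     # mean speeds, m/s

slope, intercept = np.polyfit(np.log(z_m), u_ms, 1)
u_star = KAPPA * slope                        # shear velocity, m/s
z0 = np.exp(-intercept / slope)               # roughness length, m
print("u* = %.3f m/s, z0 = %.2e m" % (u_star, z0))
```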

  1. A Texture-Polarization Method for Estimating Convective/Stratiform Precipitation Area Coverage from Passive Microwave Radiometer Data

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Hong, Ye; Kummerow, Christian D.; Turk, Joseph; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Observational and modeling studies have described the relationships between convective/stratiform rain proportion and the vertical distributions of vertical motion, latent heating, and moistening in mesoscale convective systems. Therefore, remote sensing techniques which can quantify the relative areal proportion of convective and stratiform rainfall can provide useful information regarding the dynamic and thermodynamic processes in these systems. In the present study, two methods for deducing the convective/stratiform areal extent of precipitation from satellite passive microwave radiometer measurements are combined to yield an improved method. If sufficient microwave scattering by ice-phase precipitating hydrometeors is detected, the method relies mainly on the degree of polarization in oblique-view, 85.5 GHz radiances to estimate the area fraction of convective rain within the radiometer footprint. In situations where ice scattering is minimal, the method draws mostly on texture information in radiometer imagery at lower microwave frequencies to estimate the convective area fraction. Based upon observations of ten convective systems over ocean and nine systems over land, instantaneous 0.5 degree resolution estimates of convective area fraction from the Tropical Rainfall Measuring Mission Microwave Imager (TRMM TMI) are compared to nearly coincident estimates from the TRMM Precipitation Radar (TRMM PR). The TMI convective area fraction estimates are slightly low-biased with respect to the PR, with TMI-PR correlations of 0.78 and 0.84 over ocean and land backgrounds, respectively. TMI monthly-average convective area percentages in the tropics and subtropics from February 1998 exhibit the greatest values along the ITCZ and in continental regions of the summer (southern) hemisphere. Although convective area percentages from the TMI are systematically lower than those from the PR, monthly rain patterns derived from the TMI and PR rain algorithms are very similar. TMI rain depths are significantly higher than corresponding rain depths from the PR in the ITCZ, but are similar in magnitude elsewhere.

  2. Interaction between aerosol and the planetary boundary layer depth at sites in the US and China

    NASA Astrophysics Data System (ADS)

    Sawyer, V. R.

    2015-12-01

    The depth of the planetary boundary layer (PBL) defines a changing volume into which pollutants from the surface can disperse, which affects weather, surface air quality and radiative forcing in the lower troposphere. Model simulations have also shown that aerosol within the PBL heats the layer at the expense of the surface, changing the stability profile and therefore also the development of the PBL itself: aerosol radiative forcing within the PBL suppresses surface convection and causes shallower PBLs. However, the effect has been difficult to detect in observations. The most intensive radiosonde measurements have a temporal resolution too coarse to detect the full diurnal variability of the PBL, but remote sensing such as lidar can fill in the gaps. Using a method that combines two common PBL detection algorithms (wavelet covariance and iterative curve-fitting), PBL depth retrievals from micropulse lidar (MPL) at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site are compared to MPL-derived PBL depths from a multiyear lidar deployment at the Hefei Radiation Observatory (HeRO). With aerosol optical depth (AOD) measurements from both sites, it can be shown that a weak inverse relationship exists between AOD and daytime PBL depth. This relationship is stronger at the more polluted HeRO site than at SGP. Figure: Mean daily AOD vs. mean daily PBL depth, with the Nadaraya-Watson estimator overlaid on the kernel density estimate. Left, SGP; right, HeRO.
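
    A minimal sketch of the Nadaraya-Watson estimator named in the figure caption above, as a Gaussian-kernel weighted mean of PBL depth conditional on AOD; the data and bandwidth are synthetic assumptions:

```python
import numpy as np

def nadaraya_watson(x_query, x, y, bandwidth):
    w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
aod = rng.uniform(0.05, 1.0, 300)
pbl_m = 1800.0 - 600.0 * aod + rng.normal(scale=150.0, size=300)  # inverse trend
grid = np.linspace(0.1, 0.9, 9)
print(np.round(nadaraya_watson(grid, aod, pbl_m, bandwidth=0.1)))
```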

  3. Measurement of the Muon Production Depths at the Pierre Auger Observatory

    DOE PAGES

    Collica, Laura

    2016-09-08

    The muon content of extensive air showers is an observable sensitive to the primary composition and to the hadronic interaction properties. The Pierre Auger Observatory uses water-Cherenkov detectors to measure particle densities at the ground and therefore is sensitive to the muon content of air showers. We present here a method which allows us to estimate the muon production depths by exploiting the measurement of the muon arrival times at the ground recorded with the Surface Detector of the Pierre Auger Observatory. The analysis is performed in a large range of zenith angles, thanks to the capability of estimating and subtracting the electromagnetic component, and for energies between 10^19.2 and 10^20 eV.

  4. A Nonlinear Inversion Approach to Map the Magnetic Basement: A Case Study from Central India Using Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Bansal, A. R.; Anand, S. P.; Rao, V. K.; Singh, U. K.

    2016-12-01

    The central India region has complex geology covering various geological units, e.g., the Precambrian Bastar Craton (including the Proterozoic Chhattisgarh Basin, granitic intrusions, etc.), the Eastern Ghat Mobile Belt, the Gondwana Godavari and Mahanadi Grabens, and the Late Cretaceous Deccan Traps. Central India is well covered by reconnaissance-scale aeromagnetic data. We analyzed these data for mapping the basement by dividing the region into 143 overlapping blocks of 100×100 km and applying a least-squares nonlinear inversion method for a fractal distribution of sources. The scaling exponents and depth values are optimized using a grid search method. We interpreted the estimated depths of anomalous sources as the magnetic basement and shallow anomalous magnetic sources. The shallow magnetic anomalies are found to vary from 1 to 3 km, whereas magnetic basement depths vary from 2 to 7 km. The shallowest basement depth of 2 km corresponds to the Kanker granites, part of the Bastar Craton, whereas the deepest basement depth of 7 km is associated with the Godavari Graben and the southeastern part of the Eastern Ghat Mobile Belt near the Parvatipuram Bobbili fault. The variations of magnetic basement depth, shallow-source depth, and scaling exponent in the region indicate complex tectonics, heterogeneity, and intrusive bodies at different depths resulting from different tectonic processes. The detailed basement depth of central India is presented in this study.

  5. Representativeness of the ground observational sites and up-scaling of the point soil moisture measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jinlei; Wen, Jun; Tian, Hui

    2016-02-01

    Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using the soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement (R ⩾ 0.99, RMSD ⩽ 0.02 m3/m3), the NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination of sites at 5 cm depth is NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established hereafter. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) Linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, a period in which a large number of observed soil moisture data are missing.
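
    As an illustration of the time stability analysis (TSA) step, the following minimal sketch ranks sites by the combined mean and standard deviation of their relative difference from the network average; the scoring rule and synthetic data are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def time_stability_ranking(theta):
        """theta: (n_times, n_sites) soil moisture array.
        Rank sites by time stability (most representative first)."""
        regional = theta.mean(axis=1, keepdims=True)   # network mean at each time
        rel_diff = (theta - regional) / regional       # relative difference per site
        mrd = rel_diff.mean(axis=0)                    # mean relative difference
        sdrd = rel_diff.std(axis=0)                    # its temporal std
        score = np.sqrt(mrd ** 2 + sdrd ** 2)          # smaller = more time-stable
        return np.argsort(score)

    rng = np.random.default_rng(0)
    theta = 0.25 + 0.05 * rng.random((365, 20))        # synthetic daily data, 20 sites
    print(time_stability_ranking(theta)[:3])           # three most representative sites
    ```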

  6. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
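
    A minimal sketch of the OBRA idea described above: regress depth against X = ln(Ri/Rj) for every band pair and keep the pair with the highest R²; the array shapes, synthetic data, and plain least-squares fit are illustrative assumptions.

    ```python
    import numpy as np
    from itertools import combinations

    def obra(reflectance, depth):
        """Regress depth on X = ln(R_i / R_j) for all band pairs and keep the
        pair maximizing R^2. reflectance: (n_points, n_bands); depth: (n_points,)."""
        best = None
        for i, j in combinations(range(reflectance.shape[1]), 2):
            x = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            resid = depth - (slope * x + intercept)
            r2 = 1 - (resid ** 2).sum() / ((depth - depth.mean()) ** 2).sum()
            if best is None or r2 > best[0]:
                best = (r2, (i, j), slope, intercept)
        return best  # (R^2, optimal band pair, slope, intercept)

    rng = np.random.default_rng(1)
    R = rng.uniform(0.05, 0.5, size=(200, 4))     # synthetic 4-band reflectance
    d = 2.0 * np.log(R[:, 0] / R[:, 2]) + 1.0     # depth tied to bands 0 and 2
    print(obra(R, d))                             # recovers pair (0, 2), R^2 = 1
    ```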

  7. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.

  8. Environmental DNA sampling is more sensitive than a traditional survey technique for detecting an aquatic invader.

    PubMed

    Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A

    2015-10-01

    Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.

  9. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth-intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
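
    The abstract does not spell out the estimator; one plausible minimal sketch, assuming an exponential decay model I(z) = I0·exp(-αz) fitted by iteratively reweighted least squares with Huber-type weights that ignore pixels deviating from the model, is:

    ```python
    import numpy as np

    def robust_decay_fit(z, intensity, n_iter=20, c=1.345):
        """Fit I(z) = I0 * exp(-alpha * z) by IRLS on log-intensity, with
        Huber-type weights down-weighting pixels off the decay model."""
        y = np.log(intensity)
        A = np.column_stack([np.ones_like(z), z])
        w = np.ones_like(y)
        for _ in range(n_iter):
            W = np.diag(w)
            beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
            r = y - A @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
            u = np.abs(r) / (c * s)
            w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
        return np.exp(beta[0]), -beta[1]                # I0, alpha

    z = np.linspace(0.0, 30.0, 31)     # section depth (arbitrary units)
    I = 100.0 * np.exp(-0.05 * z)
    I[10] *= 3.0                       # a bright outlier structure
    print(robust_decay_fit(z, I))      # close to (100, 0.05) despite the outlier
    ```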

  10. Parameter estimation of brain tumors using intraoperative thermal imaging based on artificial tactile sensing in conjunction with artificial neural network

    NASA Astrophysics Data System (ADS)

    Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.

    2016-02-01

    Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnostic technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method in the diagnosis and localization of brain tumors are investigated. In order to analyze the validity and reliability of the proposed method, two simulations were performed: (i) an in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom in order to simulate heat generation by a tumor in brain tissue; and (ii) a case report of a patient with parafalcine meningioma was used to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data, and tumor temperature and depth of location were estimated. For the experimental tests, a novel assisted-surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor's power and depth. The results affirm that ITI based on ATS is a non-invasive method that can be useful for detecting, localizing and characterizing brain tumors.

  11. Estimating soil matric potential in Owens Valley, California

    USGS Publications Warehouse

    Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.

    1989-01-01

    Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content-matric potential characteristics of the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first, the filter-paper method, uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. The slope and intercept of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals derived by using the hand auger and filter-paper method and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived by the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
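
    The modeling approach reduces to fitting pF (the base-10 logarithm of matric potential) as a linear function of gravimetric water content. A toy sketch with invented calibration pairs:

    ```python
    import numpy as np

    # Invented calibration pairs: gravimetric water content w (g/g) vs.
    # pF = log10 of matric potential (absolute value, cm of water).
    w  = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
    pF = np.array([4.6, 4.0, 3.4, 2.9, 2.3])

    b, a = np.polyfit(w, pF, 1)       # pF modeled as linear in w
    def matric_potential_pF(w_new):
        return a + b * np.asarray(w_new)

    print(matric_potential_pF(0.12))  # estimated pF at 12 % water content
    ```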

  12. Evaluation of thermobarometry for spinel lherzolite fragments in alkali basalts

    NASA Astrophysics Data System (ADS)

    Ozawa, Kazuhito; Youbi, Nasrrddine; Boumehdi, Moulay Ahmed; McKenzie, Dan; Nagahara, Hiroko

    2017-04-01

    Geothermobarometry of solid fragments in kimberlite and alkali basalts, generally called "xenoliths", provides information on the thermal and chemical structure of the lithospheric and asthenospheric mantle, on the basis of which various chemical, thermal, and rheological models of the lithosphere have been constructed (e.g., Griffin et al., 2003; McKenzie et al., 2005; Ave Lallemant et al., 1980). Geothermobarometry for spinel-bearing peridotite fragments, which are frequently sampled from Phanerozoic provinces in various tectonic environments (Nixon and Davies, 1987), presents essential difficulties, and it is usually believed that appropriate barometers do not exist for them (O'Reilly et al., 1997; Medaris et al., 1999). Ozawa et al. (2016; EGU) proposed a method of geothermobarometry for spinel lherzolite fragments. They applied the method to mantle fragments in alkali basalts from the Bou Ibalhatene maars in the Middle Atlas in Morocco (Raffone et al., 2009; El Azzouzi et al., 2010; Witting et al., 2010; El Messbahi et al., 2015), obtaining a 0.5 GPa pressure difference (1.5-2.0 GPa) for a 100°C variation in temperature (950-1050°C). However, it is imperative to verify the results on the basis of completely independent data. There are three types of independent information: (1) the time scale of solid-fragment extraction, which may be provided by the kinetics of reactions induced by heating and/or decompression during entrapment in the host magma and transport to the Earth's surface (Smith, 1999); (2) the depth of host-basalt formation, which may be provided by petrological and geochemical studies of the host basalts; and (3) the lithosphere-asthenosphere boundary depth, which may be estimated from geophysical observations. Of these, (3) has been shown to be consistent with the results of Ozawa et al. (2016). Here we show that the estimated thermal structure just before fragment extraction is fully supported by information of types (1) and (2). Spera (1984) reviewed various methods for estimating the ascent rate of mantle fragments in kimberlite and alkali basalt: one based on the fluid dynamics of transport of entrapped fragments, using the maximum fragment size and magma viscosity to give a minimum estimate (Spera, 1980), and another coupling the depth of fragment residence before entrapment in a magma with the time scale of heating by the magma. The depth of entrapment, however, is the least-known parameter for spinel lherzolite. Because magmas loaded with solid fragments ascend nearly adiabatically, all fragments undergo the same heating and decompression history, differing only in entrapment depth and thus heating duration, from which the depth of their residence just before extraction may be estimated if the ascent rate is known. Therefore, the extent of chemical and textural modification induced by heating and decompression may provide an independent test of the pressure estimates. We have used several reactions for this purpose: (1) Mg-Fe exchange between spinel and olivine (Ozawa, 1983; 1984), (2) Ca zoning in olivine (Takahashi, 1980), (3) partial dissolution of clinopyroxene, (4) partial dissolution of spinel, and (5) formation of melt frozen as glass, which is related to (3) and (4). The depth of melt generation is constrained to be deeper than 70 km by modeling the trace element compositions of the host magmas using the methods of McKenzie and O'Nions (1991) and data from El Azzouzi et al. (2010). The host magmas can be produced by melting the convecting upper mantle without any required input from the continental lithosphere. This is consistent with the positive gravity anomalies in NW Africa, indicating shallow upwelling in this region that allows decompression melting owing to the thinner lithosphere beneath the Middle Atlas.

  13. Uncertainty in cloud optical depth estimates made from satellite radiance measurements

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip

    1995-01-01

    The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques are briefly reviewed for the estimation of optical depth from measurements of radiance, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true for both single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.

  14. Combining energy and Laplacian regularization to accurately retrieve the depth of brain activity of diffuse optical tomographic data

    NASA Astrophysics Data System (ADS)

    Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele

    2016-03-01

    Diffuse optical tomography (DOT) provides data about brain function using surface recordings. Despite recent advancements, an unbiased method for estimating the depth of absorption changes and for providing an accurate three-dimensional (3-D) reconstruction remains elusive. DOT involves solving an ill-posed inverse problem, requiring additional criteria for finding unique solutions. The most commonly used criterion is energy minimization (energy constraint). However, as measurements are taken from only one side of the medium (the scalp) and sensitivity is greater at shallow depths, the energy constraint leads to solutions that tend to be small and superficial. To correct for this bias, we combine the energy constraint with another criterion, minimization of spatial derivatives (Laplacian constraint, also used in low resolution electromagnetic tomography, LORETA). Used in isolation, the Laplacian constraint leads to solutions that tend to be large and deep. Using simulated, phantom, and actual brain activation data, we show that combining these two criteria results in accurate (error <2 mm) absorption depth estimates, while maintaining a two-point spatial resolution of <24 mm up to a depth of 30 mm. This indicates that accurate 3-D reconstruction of brain activity up to 30 mm from the scalp can be obtained with DOT.
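
    In linear-inverse form, combining the two criteria amounts to Tikhonov regularization with both an identity (energy) and a Laplacian (smoothness) penalty. A minimal numpy sketch on a 1-D toy medium; the sensitivity matrix and regularization weights are invented for illustration:

    ```python
    import numpy as np

    def reconstruct(A, b, L, lam_energy, lam_laplacian):
        """min ||A x - b||^2 + lam_energy ||x||^2 + lam_laplacian ||L x||^2,
        solved in closed form via the normal equations."""
        n = A.shape[1]
        lhs = A.T @ A + lam_energy * np.eye(n) + lam_laplacian * (L.T @ L)
        return np.linalg.solve(lhs, A.T @ b)

    n = 50                                                  # 1-D toy medium, 50 voxels
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # second-difference Laplacian
    rng = np.random.default_rng(2)
    A = np.exp(-3.0 * rng.random((10, n)))                  # toy surface-weighted sensitivities
    x_true = np.zeros(n); x_true[30:35] = 1.0               # a "deep" absorber
    x_hat = reconstruct(A, A @ x_true, L, 1e-3, 1e-1)
    ```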

  15. How Choice of Depth Horizon Influences the Estimated Spatial Patterns and Global Magnitude of Ocean Carbon Export Flux

    NASA Astrophysics Data System (ADS)

    Palevsky, Hilary I.; Doney, Scott C.

    2018-05-01

    Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
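
    The model's own flux profiles cannot be reproduced from the abstract, but the sensitivity to the depth-horizon choice can be illustrated with the classic Martin power-law attenuation curve; all numbers below are illustrative:

    ```python
    npp = 10.0    # net primary production, mol C m-2 yr-1 (illustrative)
    f100 = 2.0    # POC flux at 100 m (illustrative)
    b = 0.858     # Martin et al. (1987) attenuation exponent

    def poc_flux(z):
        """Power-law POC flux profile referenced to 100 m."""
        return f100 * (z / 100.0) ** (-b)

    for z in (50, 75, 100, 300):   # candidate export depth horizons, m
        print(z, round(poc_flux(z), 2), round(poc_flux(z) / npp, 3))  # flux, e-ratio
    ```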

  16. Updates to Enhanced Geothermal System Resource Potential Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 °C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  17. Update to Enhanced Geothermal System Resource Potential Estimate: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad

    2016-10-01

    The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 °C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.

  18. Screening-level estimates of mass discharge uncertainty from point measurement methods

    EPA Science Inventory

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  19. A method for mapping apparent stress and energy radiation applied to the 1994 Northridge earthquake fault zone

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2000-01-01

    Using the Northridge earthquake as an example, we demonstrate a new technique able to resolve apparent stress within subfaults of a larger fault plane. From the model of Wald et al. (1996), we estimated the apparent stress for each subfault using τa = (G/β)⟨Ḋ⟩/2, where G is the modulus of rigidity, β is the shear wave speed, and ⟨Ḋ⟩ is the average slip rate. The image of apparent stress mapped over the Northridge fault plane supports the idea that the stresses causing fault slip are inhomogeneous but limited by the strength of the crust. Indeed, over the depth range 5 to 17 km, maximum values of apparent stress for a given depth interval agree with τa(max) = 0.06 S(z), where S is the laboratory estimate of crustal strength as a function of depth z. The seismic energy from each subfault was estimated from the product τa D A, where A is the subfault area and D its slip. Over the fault zone, we found that the radiated energy is quite variable spatially, with more than 50% of the total coming from just 15% of the subfaults.
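
    A worked numeric example of the two formulas above, with illustrative subfault values (not the paper's):

    ```python
    G = 3.0e10        # modulus of rigidity, Pa
    beta = 3500.0     # shear wave speed, m/s
    slip_rate = 0.5   # average subfault slip rate, m/s (illustrative)
    D = 1.2           # subfault slip, m (illustrative)
    A = 2.0e6         # subfault area, m^2 (illustrative)

    tau_a = (G / beta) * slip_rate / 2.0   # apparent stress, Pa
    energy = tau_a * D * A                 # radiated energy from this subfault, J
    print(tau_a / 1e6, "MPa;", energy, "J")
    ```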

  20. Comparison of stable boundary layer depth estimation from sodar and profile mast.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Elsa; Anderson, Philip

    2015-04-01

    The depth of the atmospheric turbulent mixing layer next to the earth's surface, hz, is a key parameter in the analysis and modeling of the interaction of the atmosphere with the surface. The transfer of momentum, heat, moisture and trace gases is to a large extent governed by this depth, which to a first approximation acts as a finite reservoir for these quantities. Correct estimates of the evolution of hz would allow accurate prognosis of the near-surface accumulation of these variables, that is, wind speed, temperature, humidity and tracer concentration. Measuring hz, however, is not simple, especially where stable stratification acts to reduce internal mixing, and indeed it is not clear whether hz is similar for momentum, heat and tracers. Two methods are compared here to assess their similarity: firstly, acoustic back-scatter is used as an indicator of turbulent strength, its upper limit implying a change to laminar flow and hence the top of the boundary layer; secondly, turbulence kinetic energy profiles, TKE(z), are extrapolated to estimate the height z at which TKE(z) = 0, again implying laminar flow. Both techniques have the benefit of being able to run continually (via sodar and turbulence mast, respectively), with the prospect of continual, autonomous data analysis generating time series of hz. This report examines monostatic sodar echo and sonic-anemometer-derived turbulence profile data from Halley Station on the Brunt Ice Shelf, Antarctica, during the austral winter of 2003. We report that the two techniques frequently show significant disagreement in estimated depth and still require manual intervention, but further progress is possible.
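
    The second technique reduces to a root-finding exercise; a minimal sketch with invented mast levels and TKE values:

    ```python
    import numpy as np

    # Invented sonic-anemometer levels (m) and TKE values (m^2 s^-2).
    z   = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    tke = np.array([0.30, 0.26, 0.21, 0.15, 0.08, 0.02])

    # Linear extrapolation of TKE(z) to zero gives one estimate of hz.
    slope, intercept = np.polyfit(z, tke, 1)
    hz = -intercept / slope
    print(f"estimated hz ~ {hz:.1f} m")
    ```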

  1. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    PubMed

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.

  2. Continuous measurements of flow rate in a shallow gravel-bed river by a new acoustic system

    NASA Astrophysics Data System (ADS)

    Kawanisi, K.; Razaz, M.; Ishikawa, K.; Yano, J.; Soltaniasl, M.

    2012-05-01

    The continuous measurement of river discharge over long periods of time is crucial in water resource studies. However, accurate estimation of river discharge is a difficult and labor-intensive procedure; thus, a robust and efficient method of measurement is required. Continuous measurements of flow rate have been carried out in a wide, shallow gravel-bed river (water depth ≈ 0.6 m under low-flow conditions, width ≈ 115 m) using the Fluvial Acoustic Tomography System (FATS), which has 25 kHz broadband transducers with horizontally omnidirectional and vertically hemispherical beam patterns. Reciprocal sound transmissions were performed between two acoustic stations located diagonally on opposite sides of the river. The horizontal distance between the transducers was 301.96 m. FATS enables measurement of the depth- and range-averaged sound speed and flow velocity along the ray path. In contrast to traditional point/transect measurements of discharge, FATS covers the entire cross section of the river in a single measurement taking a fraction of a second. The flow rates measured by FATS were compared to those estimated by moving-boat Acoustic Doppler Current Profiler (ADCP) and rating curve (RC) methods. FATS estimates were in good agreement with ADCP estimates over a range of 20 to 65 m3 s-1; the RMS of the residual between the two measurements was 2.41 m3 s-1. The flow rate from the RC method, on the other hand, agreed fairly well with FATS estimates only for discharges greater than about 40 m3 s-1; this inconsistency arises from biased RC estimates at low flows. Thus, the flow rates derived from FATS can be considered reliable.
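
    The underlying principle of reciprocal transmission is that the travel times with and against the flow, t1 and t2, over a path of length L yield both the path-averaged sound speed, c = (L/2)(1/t1 + 1/t2), and the along-path velocity component, u = (L/2)(1/t1 - 1/t2). A toy example with invented travel times:

    ```python
    L = 301.96          # transducer separation along the ray path, m
    t_down = 0.204420   # travel time with the flow, s (invented)
    t_up   = 0.204480   # travel time against the flow, s (invented)

    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)   # path-averaged sound speed, m/s
    u = 0.5 * L * (1.0 / t_down - 1.0 / t_up)   # path-averaged along-path velocity, m/s
    print(round(c, 1), "m/s;", round(u, 3), "m/s")
    ```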

  3. Exploring the Application of Optical Remote Sensing as a Method to Estimate the Depth of Backwater Nursery Habitats of the Colorado Pikeminnow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Yuki; LaGory, Kirk E.

    2016-02-01

    Low-velocity channel-margin habitats serve as important nursery habitats for the endangered Colorado pikeminnow (Ptychocheilus lucius) in the middle Green River between Jensen and Ouray, Utah. These habitats, known as backwaters, are associated with emergent sand bars, and are shaped and reformed annually by peak flows. A recent synthesis of information on backwater characteristics and the factors that influence inter-annual variability in those backwaters (Grippo et al. 2015) evaluated detailed survey information collected annually since 2003 on a relatively small sample of backwaters, as well as reach-wide evaluations of backwater surface area from aerial and satellite imagery. An approach is needed to bridge the gap between these detailed surveys, which estimate surface area, volume, and depth, and the reach-wide assessment of surface area, to enable an assessment of the amount of habitat that meets the minimum depth requirements for suitable habitat.

  4. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    NASA Astrophysics Data System (ADS)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observations are statistically stationary, and a parametric method considering the parameters of a probability distribution is applied. A parametric method requires a sufficiently large sample of reliable data; in Korea, however, snowfall records need to be supplemented, because the number of days with snowfall observations and the mean maximum daily snowfall depth are decreasing under climate change. In this study, we conducted frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth from frequency analysis decreases at most stations, and the rates of change at most stations are consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods allow frequency analysis of snowfall depth from insufficient observed samples, an approach that can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
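
    A minimal sketch of the bootstrap part of this approach: resample the annual-maximum series with replacement and read off an empirical quantile and its confidence band; the station data are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Invented annual maximum daily snowfall depths (cm) at one station.
    snow = np.array([12.0, 8.5, 20.1, 15.3, 9.8, 31.0, 11.2, 18.7, 25.4, 14.0])

    def bootstrap_quantile(x, p, n_boot=10_000):
        """Non-parametric point estimate and 90 % CI for the p-quantile."""
        boot = np.array([np.quantile(rng.choice(x, size=x.size), p)
                         for _ in range(n_boot)])
        return np.quantile(x, p), np.quantile(boot, [0.05, 0.95])

    print(bootstrap_quantile(snow, 0.98))   # ~50-year event for annual maxima
    ```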

  5. Population estimate of Chinese mystery snail (Bellamya chinensis) in a Nebraska reservoir

    USGS Publications Warehouse

    Chaine, Noelle M.; Allen, Craig R.; Fricke, Kent A.; Haak, Danielle M.; Hellman, Michelle L.; Kill, Robert A.; Nemec, Kristine T.; Pope, Kevin L.; Smeenk, Nicholas A.; Stephen, Bruce J.; Uden, Daniel R.; Unstad, Kody M.; VanderHam, Ashley E.

    2012-01-01

    The Chinese mystery snail (Bellamya chinensis) is an aquatic invasive species in North America. Little is known regarding this species' impacts on freshwater ecosystems. It is believed that population densities can be high, yet no population estimates have been reported. We utilized a mark-recapture approach to generate a population estimate for the Chinese mystery snail in Wild Plum Lake, a 6.47-ha reservoir in southeast Nebraska. Using bias-adjusted Lincoln-Petersen estimation, we calculated that there were approximately 664 adult snails within a 127 m2 transect (5.2 snails/m2). If this density is consistent throughout the littoral zone (<3 m in depth) of the reservoir, then the total adult population in this impoundment is estimated to be 253,570 snails, and the total Chinese mystery snail wet biomass is estimated to be 3,119 kg (643 kg/ha). If this density is confined to the depth sampled in this study (1.46 m), then the adult population is estimated to be 169,400 snails, and the wet biomass is estimated to be 2,084 kg (643 kg/ha). Additional research is warranted to further test the utility of mark-recapture methods for aquatic snails and to better understand Chinese mystery snail distributions within reservoirs.
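
    The bias-adjusted (Chapman) Lincoln-Petersen estimator used here is simple enough to state as code; the mark-recapture counts below are invented, chosen only so the output lands near the reported ~664 snails.

    ```python
    def chapman_estimate(marked, caught, recaptured):
        """Bias-adjusted (Chapman) Lincoln-Petersen population estimate."""
        return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

    # Invented counts for illustration only.
    print(round(chapman_estimate(marked=300, caught=250, recaptured=112)))
    ```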

  6. Derivation and Validation of Supraglacial Lake Volumes on the Greenland Ice Sheet from High-Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane

    2016-01-01

    Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km^2) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth retrieval models, which are based either on absolute reflectance (single-channel model) or on a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated by using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error below 1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent from in situ measurements.

  7. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high-resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using ground-truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training-set size reduction on the overall performance is also investigated.

  8. Discharge estimation from H-ADCP measurements in a tidal river subject to sidewall effects and a mobile bed

    NASA Astrophysics Data System (ADS)

    Sassi, M. G.; Hoitink, A. J. F.; Vermeulen, B.; Hidayat

    2011-06-01

    Horizontal acoustic Doppler current profilers (H-ADCPs) can be employed to estimate river discharge based on water level measurements and flow velocity array data across a river transect. A new method is presented that accounts for the dip in velocity near the water surface, which is caused by sidewall effects that decrease with the width to depth ratio of a channel. A boundary layer model is introduced to convert single-depth velocity data from the H-ADCP to specific discharge. The parameters of the model include the local roughness length and a dip correction factor, which accounts for the sidewall effects. A regression model is employed to translate specific discharge to total discharge. The method was tested in the River Mahakam, representing a large river of complex bathymetry, where part of the flow is intrinsically three-dimensional and discharge rates exceed 8000 m3 s-1. Results from five moving boat ADCP campaigns covering separate semidiurnal tidal cycles are presented, three of which are used for calibration purposes, whereas the remaining two served for validation of the method. The dip correction factor showed a significant correlation with distance to the wall and bears a strong relation to secondary currents. The sidewall effects appeared to remain relatively constant throughout the tidal cycles under study. Bed roughness length is estimated at periods of maximum velocity, showing more variation at subtidal than at intratidal time scales. Intratidal variations were particularly obvious during bidirectional flow conditions, which occurred only during conditions of low river discharge. The new method was shown to outperform the widely used index velocity method by systematically reducing the relative error in the discharge estimates.

  9. Aquifer Recharge Estimation In Unsaturated Porous Rock Using Darcian And Geophysical Methods.

    NASA Astrophysics Data System (ADS)

    Nimmo, J. R.; De Carlo, L.; Masciale, R.; Turturro, A. C.; Perkins, K. S.; Caputo, M. C.

    2016-12-01

    Within the unsaturated zone, a constant downward gravity-driven flux of water commonly exists at depths ranging from a few meters to tens of meters, depending on climate, medium, and vegetation. In this case a steady-state application of Darcy's law can provide recharge rate estimates. We have applied an integrated approach that combines field geophysical measurements with laboratory hydraulic property measurements on core samples to produce accurate estimates of steady-state aquifer recharge or, in cases where episodic recharge also occurs, the steady component of recharge. The method requires (1) measurement of the water content existing in the deep unsaturated zone at the location of a core sample retrieved for lab measurements, and (2) measurement of the core sample's unsaturated hydraulic conductivity over a range of water content that includes the value measured in situ. Both types of measurements must be done with high accuracy. Darcy's law applied with the measured unsaturated hydraulic conductivity and gravitational driving force provides recharge estimates. Aquifer recharge was estimated using Darcian and geophysical methods at a deep porous rock (calcarenite) experimental site in Canosa, southern Italy. Electrical Resistivity Tomography (ERT) and Vertical Electrical Sounding (VES) profiles were collected from the land surface to the water table to provide data for Darcian recharge estimation. Volumetric water content was estimated from resistivity profiles using a laboratory-derived calibration function based on Archie's law for rock samples from the experimental site, where the electrical conductivity of the rock was related to porosity and water saturation. Multiple-depth core samples were evaluated using the Quasi-Steady Centrifuge (QSC) method to obtain hydraulic conductivity (K), matric potential (ψ), and water content (θ) estimates within this profile. Laboratory-determined unsaturated hydraulic conductivity ranged from 3.90 x 10-9 to 1.02 x 10-5 m/s over a volumetric water content range from 0.1938 to 0.4311 m3/m3. Using these measured properties, the water content estimated from geophysical measurements has been used to identify the unsaturated hydraulic conductivity indicative of the steady component of the aquifer recharge rate at Canosa.
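
    Conceptually, the Darcian step is a table lookup: under a unit (gravity-only) hydraulic gradient, recharge equals the unsaturated hydraulic conductivity at the field water content. A sketch with K(θ) pairs spanning the ranges quoted above (the pairing itself is invented):

    ```python
    import numpy as np

    # Lab K(theta) pairs spanning the ranges quoted above (pairing invented).
    theta_lab = np.array([0.19, 0.25, 0.31, 0.37, 0.43])             # m3/m3
    K_lab     = np.array([3.9e-9, 5.0e-8, 6.0e-7, 2.0e-6, 1.02e-5])  # m/s

    def steady_recharge(theta_field):
        """Unit-gradient Darcy flux: recharge q = K(theta_field),
        interpolating log10 K between the lab measurements."""
        return 10 ** np.interp(theta_field, theta_lab, np.log10(K_lab))

    theta_from_ert = 0.28   # water content inferred from the resistivity profile
    print(steady_recharge(theta_from_ert), "m/s")
    ```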

  10. Soil temperature synchronisation improves estimation of daily variation of ecosystem respiration in Sphagnum peatlands

    NASA Astrophysics Data System (ADS)

    D'Angelo, Benoît; Gogo, Sébastien; Le Moing, Franck; Jégou, Fabrice; Guimbaud, Christophe; Laggoun, Fatima

    2015-04-01

    Ecosystem respiration (ER) is a key process in the global C cycle and thus plays an important role in climate regulation. Peatlands contain a third of the world's soil C despite their relatively low global area (3% of land area). Although these ecosystems potentially represent a significant source of C under global change, they are still not accounted for accordingly in global climate models. ER variations therefore have to be accounted for, especially by estimating their dependence on temperature. The relationship between ER and temperature often relies on only one soil temperature depth, generally taken in the first 10 centimetres. Previous studies showed that the temperature dependence of ER depends on the depth at which the temperature is recorded; the depth selected for temperature measurement is thus a predominant issue. A way to deal with this is to analyse the time delay between ER and temperature. The aim of this work is to assess whether using synchronised data in models leads to a better estimation of daily ER variation than using non-synchronised data. ER measurements were undertaken in 2013 in 4 Sphagnum peatlands across France: La Guette (N 47°19'44", E 2°17'04", 154 m) in July, Landemarais (N 48°26'30", E -1°10'54", 145 m) in August, Frasne (N 46°49'35", E 6°10'20", 836 m) in September, and Bernadouze (N 42°48'09", E 1°25'24", 1500 m) in October. A closed-chamber method was used to measure ER hourly during 72 hours in each of the 4 replicates installed at each site. Average ER ranged from 1.75 μmol m-2 s-1 to 6.13 μmol m-2 s-1. A weather station was used to record meteorological data and soil temperature profiles (5, 10, 20 and 30 cm). Synchronised data were determined for each depth by selecting the time delay leading to the best correlation between ER and soil temperature. The data were used to simulate ER according to commonly used equations: linear, exponential with Q10, Arrhenius, and Lloyd and Taylor. Model comparison was performed using RMSE (goodness-of-fit) and AIC (goodness-of-fit and model complexity) as indicators of relative model quality. Both indicators showed wide variation between sites. However, for each site the differences between synchronised and non-synchronised data were larger than the differences between model equations. According to the AIC, models using synchronised data produced better ER estimations than models using non-synchronised data at all depths. RMSE supports this result at all sites for the superficial peat layer. In some locations, mainly Frasne, synchronised data at 5 cm depth provided better estimation than air temperature, i.e. 25.0 vs. 26.4 for RMSE and 337.1 vs. 379.8 for AIC, respectively. The equation of the most appropriate model varies between sites, but the differences between them are small. At a daily scale, data synchronisation in Sphagnum peatlands improves ER estimation regardless of the model used. Moreover, to estimate ER flux, the use of synchronised data at 5 cm depth seems the most adequate method.
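
    A minimal sketch of the synchronisation step (pick the lag maximizing the ER-temperature correlation) followed by a Q10 fit on the synchronised series; the function names and the log-linear Q10 fit are assumptions, not the authors' code.

    ```python
    import numpy as np

    def best_lag(er, temp, max_lag=24):
        """Lag (in samples; positive = temperature delayed relative to ER)
        maximizing the ER-temperature correlation.
        Assumes len(er) == len(temp) and both much longer than max_lag."""
        best = (0, -np.inf)
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = er[:len(er) - lag], temp[lag:]
            else:
                a, b = er[-lag:], temp[:len(temp) + lag]
            r = np.corrcoef(a, b)[0, 1]
            if r > best[1]:
                best = (lag, r)
        return best

    def fit_q10(er, temp_sync, t_ref=10.0):
        """Fit ER = R_ref * Q10**((T - T_ref)/10) by linear regression on log ER."""
        slope, intercept = np.polyfit((temp_sync - t_ref) / 10.0, np.log(er), 1)
        return np.exp(intercept), np.exp(slope)   # R_ref, Q10
    ```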

  11. Delineating depth to bedrock beneath shallow unconfined aquifers: a gravity transect across the Palmer River Basin.

    PubMed

    Bohidar, R N; Sullivan, J P; Hermance, J F

    2001-01-01

    In view of the increasing demand on ground water supplies in the northeastern United States, it is imperative to develop appropriate methods to geophysically characterize the most widely used sources of ground water in the region: shallow unconfined aquifers consisting of well-sorted, stratified glacial deposits laid down in bedrock valleys and channels. The gravity method, despite its proven value in delineating buried bedrock valleys elsewhere, is seldom used by geophysical contractors in this region. To demonstrate the method's effectiveness for evaluating such aquifers, a pilot study was undertaken in the Palmer River Basin in southeastern Massachusetts. Because bedrock is so shallow beneath this aquifer (maximum depth is 30 m), the depth-integrated mass deficiency of the overlying unconsolidated material was small, so that the observed gravity anomaly was on the order of 1 milligal (mGal) or less. Thus data uncertainties were significant. Moreover, unlike previous gravity studies elsewhere, we had no a priori information on the density of the sediment. Under such circumstances, it is essential to include model constraints and weighted least-squares in the inversion procedure. Among the model constraints were water table configuration, bedrock outcrops, and depth to bedrock from five water wells. Our procedure allowed us to delineate depth to bedrock along a 3.5 km profile with a confidence interval of 1.8 m at a nominal depth of 17 m. Moreover, we obtained a porosity estimate in the range of 39% to 44%. Thus the gravity method, with appropriate refinements, is an effective tool for the reconnaissance of shallow unconfined aquifers.
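
    A first-order feel for the numbers: approximating the sediment fill as an infinite Bouguer slab, a 1 mGal anomaly with an assumed density contrast of about 770 kg/m3 implies roughly 30 m of fill, consistent with the maximum bedrock depth quoted above.

    ```python
    import math

    G_grav = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    dg = 1.0e-5          # observed anomaly: 1 mGal, in m s^-2
    drho = 770.0         # assumed bedrock-sediment density contrast, kg m^-3

    t = dg / (2.0 * math.pi * G_grav * drho)   # infinite-slab fill thickness
    print(f"{t:.0f} m")                        # ~31 m
    ```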

  12. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to dealing with local wavenumbers of high order, as it is able to overcome the known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling law is in fact independent of the structural index. So, differently from the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  13. Fusion of electromagnetic trackers to improve needle deflection estimation: simulation study.

    PubMed

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2013-10-01

    We present a needle deflection estimation method to anticipate needle bending during insertion into deformable tissue. Using limited additional sensory information, our approach reduces the estimation error caused by uncertainties inherent in the conventional needle deflection estimation methods. We use Kalman filters to combine a kinematic needle deflection model with the position measurements of the base and the tip of the needle taken by electromagnetic (EM) trackers. One EM tracker is installed on the needle base and estimates the needle tip position indirectly using the kinematic needle deflection model. Another EM tracker is installed on the needle tip and estimates the needle tip position through direct, but noisy measurements. Kalman filters are then employed to fuse these two estimates in real time and provide a reliable estimate of the needle tip position, with reduced variance in the estimation error. We implemented this method to compensate for needle deflection during simulated needle insertions and performed sensitivity analysis for various conditions. At an insertion depth of 150 mm, we observed needle tip estimation error reductions in the range of 28% (from 1.8 to 1.3 mm) to 74% (from 4.8 to 1.2 mm), which demonstrates the effectiveness of our method, offering a clinically practical solution.
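
    At its core, the fusion step is a Kalman-style update that weights the model-propagated estimate against the direct tip measurement by their error variances. A 1-D static sketch with invented numbers:

    ```python
    def fuse(x_model, var_model, x_meas, var_meas):
        """Kalman-style static fusion of the model-propagated tip estimate
        (base tracker + kinematic deflection model) with the direct, noisy
        tip-tracker measurement."""
        K = var_model / (var_model + var_meas)   # Kalman gain
        x = x_model + K * (x_meas - x_model)     # fused tip position
        var = (1.0 - K) * var_model              # reduced error variance
        return x, var

    # Invented 1-D deflection estimates at 150 mm insertion depth (mm, mm^2):
    print(fuse(x_model=4.8, var_model=1.0, x_meas=5.6, var_meas=0.5))
    ```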

  14. The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation

    NASA Astrophysics Data System (ADS)

    Durner, Wolfgang; Iden, Sascha C.; von Unold, Georg

    2017-01-01

    The particle-size distribution (PSD) of a soil expresses the mass fractions of various sizes of mineral particles which constitute the soil material. It is a fundamental soil property, closely related to most physical and chemical soil properties and it affects almost any soil function. The experimental determination of soil texture, i.e., the relative amounts of sand, silt, and clay-sized particles, is done in the laboratory by a combination of sieving (sand) and gravitational sedimentation (silt and clay). In the latter, Stokes' law is applied to derive the particle size from the settling velocity in an aqueous suspension. Traditionally, there are two methodologies for particle-size analysis from sedimentation experiments: the pipette method and the hydrometer method. Both techniques rely on measuring the temporal change of the particle concentration or density of the suspension at a certain depth within the suspension. In this paper, we propose a new method which is based on the pressure in the suspension at a selected depth, which is an integral measure of all particles in suspension above the measuring depth. We derive a mathematical model which predicts the pressure decrease due to settling of particles as function of the PSD. The PSD of the analyzed sample is identified by fitting the simulated time series of pressure to the observed one by inverse modeling using global optimization. The new method yields the PSD in very high resolution and its experimental realization completely avoids any disturbance by the measuring process. A sensitivity analysis of different soil textures demonstrates that the method yields unbiased estimates of the PSD with very small estimation variance and an absolute error in the clay and silt fraction of less than 0.5%.
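
    Two relations underpin the method: Stokes' law for the settling velocity, and the fact that at elapsed time t only particles slower than h/t still contribute to the suspension pressure at measuring depth h. A minimal sketch with illustrative fluid and depth parameters:

    ```python
    import math

    eta = 1.0e-3      # water viscosity, Pa s (~20 °C)
    rho_s = 2650.0    # particle density, kg m^-3 (quartz)
    rho_f = 1000.0    # fluid density, kg m^-3
    g = 9.81          # m s^-2
    h = 0.07          # pressure-sensor measuring depth, m (illustrative)

    def stokes_velocity(d):
        """Stokes settling velocity (m/s) for particle diameter d (m)."""
        return (rho_s - rho_f) * g * d ** 2 / (18.0 * eta)

    def largest_diameter_in_suspension(t):
        """After time t (s), particles with v > h/t have settled past the
        sensor; return the largest diameter still contributing to pressure."""
        return math.sqrt(18.0 * eta * h / ((rho_s - rho_f) * g * t))

    d = largest_diameter_in_suspension(3600.0)
    print(d * 1e6, "um; settles at", stokes_velocity(d), "m/s")   # v == h/t
    ```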

  16. S-Wave Velocity Structure of the Taiwan Chelungpu Fault Drilling Project (TCDP) Site Using Microtremor Array Measurements

    NASA Astrophysics Data System (ADS)

    Wu, Cheng-Feng; Huang, Huey-Chu

    2015-10-01

    The Taiwan Chelungpu Fault Drilling Project (TCDP) drilled a 2-km-deep hole 2.4 km east of the surface rupture of the 1999 Chi-Chi earthquake (Mw 7.6), near the town of Dakeng. Geophysical well logs at the TCDP site were run over depths ranging from 500 to 1,900 m to obtain the physical properties of the fault zones and adjacent damage zones. These data provide good reference material for examining the validity of velocity structures derived from microtremor array measurements; therefore, we conducted measurements for a total of four arrays at two sites near the TCDP drilling sites. The phase velocities at frequencies of 0.2-5 Hz are calculated using the frequency-wavenumber (f-k) spectrum method. Then the S-wave velocity structures are estimated by employing surface-wave inversion techniques. The S-wave velocity from the differential inversion technique gradually increases from 1.52 to 2.22 km/s at depths between 585 and 1,710 m. This result is similar to those from the velocity logs, which range from 1.4 km/s at a depth of 597 m to 2.98 km/s at a depth of 1,705 m. The stochastic inversion results, in turn, are similar to those from seismic reflection methods and the lithostratigraphy of the TCDP-A borehole. These results show that microtremor array measurement provides a good tool for estimating deep S-wave velocity structure.

  17. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    PubMed

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

    Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations of D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty in D and smaller uncertainty in K. However, considering the uncertainties associated with sampling and chemical analysis and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model predictions and measurements. Sensitivity analysis indicated that the effective diffusion distance, the contact time of materials with primary sources, and the depth of the measured concentrations are critical for determining D, and the PCB concentration in primary sources is critical for K.
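    The abstract does not spell out the C-depth formulation, so the sketch below only illustrates the underlying physics: for diffusion from a primary source of constant concentration C0 into a semi-infinite material, the profile is C(x) = K*C0*erfc(x/(2*sqrt(D*T))). Fitting that profile to depth-resolved measurements is shown here with ordinary nonlinear regression, which is exactly the non-uniqueness-prone step the C-depth method is designed to avoid; all names and numbers are hypothetical, and the "measured" values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

C0 = 50.0                    # PCB concentration in the primary source
T = 40 * 365.25 * 86400.0    # contact time with the source: ~40 years (s)

def profile(x, D, K):
    """Concentration vs. depth for diffusion from a constant source into a
    semi-infinite material: C(x) = K*C0*erfc(x / (2*sqrt(D*T)))."""
    return K * C0 * erfc(x / (2.0 * np.sqrt(D * T)))

# Synthetic "measured" concentrations at sampled depths (m) in concrete,
# generated from D = 3e-14 m2/s, K = 0.5
x_obs = np.array([0.001, 0.003, 0.006, 0.010, 0.015])
c_obs = np.array([22.7, 18.3, 12.3, 6.3, 2.1])

(D_fit, K_fit), _ = curve_fit(profile, x_obs, c_obs,
                              p0=(1e-13, 0.5),
                              bounds=([1e-16, 0.0], [1e-10, 10.0]))
print(f"D = {D_fit:.2e} m^2/s, K = {K_fit:.2f}")
```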

  18. Minimum depth of investigation for grounded-wire TEM due to self-transients

    NASA Astrophysics Data System (ADS)

    Zhou, Nannan; Xue, Guoqiang

    2018-05-01

    The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying. However, it is commonly observed that the TEM signal is contaminated by the self-transient process occurring at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal cannot be used for reliable data processing and interpretation. Therefore, to achieve a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behaviour of the equivalent circuit of the TEM system and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is used to establish the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides a guide for the design of survey parameters when grounded-wire TEM is applied to shallow detection. Finally, the approach is verified through application to a coal field in China.
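    The paper's equation for the self-induction voltage is not reproduced in the abstract. As a stand-in, the sketch below combines two standard ingredients: an exponential self-transient with time constant tau = L/R for the transmitter circuit, and the usual TEM diffusion-depth rule d = sqrt(2*t*rho/mu0) to translate the earliest usable time into a minimum depth of investigation. Component values are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (H/m)

def self_transient_cutoff(L, R, decades=5):
    """Time for the source current's exponential self-transient, with time
    constant tau = L/R, to decay by `decades` orders of magnitude."""
    return decades * np.log(10.0) * L / R

def diffusion_depth(t, rho):
    """TEM 'smoke ring' diffusion depth d = sqrt(2*t*rho/mu0) (m)."""
    return np.sqrt(2.0 * t * rho / MU0)

# Illustrative values: 15 uH wire inductance, 30 ohm damping resistance,
# 100 ohm-m half-space
t_min = self_transient_cutoff(L=15e-6, R=30.0)
print(f"earliest usable time: {t_min*1e6:.1f} us, "
      f"minimum depth: {diffusion_depth(t_min, 100.0):.0f} m")
```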

  19. Depth estimation of laser glass drilling based on optical differential measurements of acoustic response

    NASA Astrophysics Data System (ADS)

    Gorodesky, Niv; Ozana, Nisan; Berg, Yuval; Dolev, Omer; Danan, Yossef; Kotler, Zvi; Zalevsky, Zeev

    2016-09-01

    We present the first steps toward a device suitable for the characterization of complex 3D micro-structures. The method is based on an optical approach that allows extraction and separation of high-frequency ultrasonic waves induced in the analyzed samples. Rapid, non-destructive characterization of 3D micro-structures is limited by the geometrical features and optical properties of the sample. We suggest a method based on temporal tracking of the secondary speckle patterns generated when illuminating a sample with a laser probe while applying a known periodic vibration using an ultrasound transmitter. In this paper we investigate laser-drilled through-glass vias. The large aspect ratios of the vias pose a challenge for traditional microscopy techniques in analyzing the depth and taper profiles of the vias. The correlation of the vibration amplitudes to the via depths is demonstrated experimentally.

  20. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  1. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear-wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
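    A minimal sketch of the back projection idea, under a deliberately crude uniform-velocity travel-time model: shift each station's waveform by the travel time from a candidate source point and stack; the point that maximizes the stacked energy images the radiator. A real implementation would use a 1D Earth travel-time table and proper slant stacking.

```python
import numpy as np

def back_project(traces, dt, station_xy, grid_xy, velocity):
    """Stack station waveforms time-shifted by travel times from each
    candidate source point; return the stacked energy per grid point.

    traces     : (n_sta, n_samp) array of aligned P waveforms
    station_xy : (n_sta, 2) station coordinates (km)
    grid_xy    : (n_pts, 2) candidate source locations (km)
    velocity   : uniform apparent velocity (km/s), a stand-in for a
                 proper travel-time table
    """
    n_sta, n_samp = traces.shape
    energy = np.zeros(len(grid_xy))
    for k, src in enumerate(grid_xy):
        tt = np.linalg.norm(station_xy - src, axis=1) / velocity
        shifts = np.round(tt / dt).astype(int)
        stack = np.zeros(n_samp)
        for i in range(n_sta):          # align each trace and sum
            stack += np.roll(traces[i], -shifts[i])
        energy[k] = np.sum(stack**2)
    return energy

# Toy test: 4 stations, impulse arrivals consistent with a source at (0, 0)
sta = np.array([[100.0, 0.0], [0.0, 100.0], [-100.0, 0.0], [0.0, -100.0]])
grid = np.array([[0.0, 0.0], [20.0, 0.0]])
dt, v = 0.05, 8.0
traces = np.zeros((4, 600))
for i, t in enumerate(np.linalg.norm(sta, axis=1) / v):
    traces[i, int(round(t / dt))] = 1.0
print("most energetic grid point:", grid[np.argmax(back_project(traces, dt, sta, grid, v))])
```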

  2. Difference of brightness temperatures between 19.35 GHz and 37.0 GHz in CHANG'E-1 MRM: implications for the burial of shallow bedrock at lunar low latitude

    NASA Astrophysics Data System (ADS)

    Yu, Wen; Li, Xiongyao; Wei, Guangfei; Wang, Shijie

    2016-03-01

    Indications of buried lunar bedrock may help us to understand the tectonic evolution of the Moon and provide clues to the formation of the lunar regolith. So far, information on the distribution and burial depth of lunar bedrock is far from sufficient. Owing to its good penetration ability, microwave radiation can be a potential tool to ameliorate this problem. Here, a novel method to estimate the burial depth of lunar bedrock is presented using microwave data from the Chang'E-1 (CE-1) lunar satellite. The method is based on the spatial variation of the difference in brightness temperatures between 19.35 GHz and 37.0 GHz (ΔTB). Large differences are found in some regions, such as the southwest edge of Oceanus Procellarum, the area between Mare Tranquillitatis and Mare Nectaris, and the highland east of Mare Smythii. Interestingly, a large change of elevation is found in the corresponding regions, which might imply a shallow burial depth of lunar bedrock. To verify this deduction, a theoretical model is derived to calculate ΔTB. Results show that ΔTB varies from 12.7 K to 15 K when the burial depth of bedrock changes from 1 m to 0.5 m in the equatorial region. Based on the available data at low lunar latitudes (30°N-30°S), it is thus inferred that the southwest edge of Oceanus Procellarum, the area between Mare Tranquillitatis and Mare Nectaris, the highland east of Mare Smythii, and the edges of Pasteur and Chaplygin are areas with shallow bedrock, with burial depths estimated between 0.5 m and 1 m.

  3. Analysis of space radiation exposure levels at different shielding configurations by ray-tracing dose estimation method

    NASA Astrophysics Data System (ADS)

    Kartashov, Dmitry; Shurshakov, Vyacheslav

    2018-03-01

    A ray-tracing method to calculate the radiation exposure levels of astronauts for different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves. The depth-dose curves can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions, the surface of each object is decomposed into a set of disjoint adjacent triangles that fully cover it. Such a description can be applied to objects of any complex shape. The method is applied to the modeling conditions of the MATROSHKA-R space experiment, which was carried out onboard the ISS from 2004 to 2016. Dose measurements were made in the ISS compartments with anthropomorphic and spherical phantoms, and with the protective curtain facility that provides additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms, and for additional shielding installed in the compartment, are calculated. The measured and calculated data agree to within about 15%; thus the calculation method has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing dose calculation method can be recommended for estimating the dose distribution in an astronaut's body in different space station compartments and for estimating the efficiency of additional shielding, especially when the exact compartment shielding geometry and the radiation environment for the planned mission are not known.
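    The geometric core of the method can be sketched as follows: rays are cast from the dose point over the sphere of directions, every triangle a ray crosses adds the areal density of its shell to the shielding along that ray, and the depth-dose curve is interpolated at the accumulated shielding. This is a strong simplification of the described shielding probability functions, and all data structures and values below are assumptions for illustration.

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri):
    """Moller-Trumbore ray/triangle test; True if the ray crosses tri."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < 1e-12:
        return False
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > 0.0       # hit in front of the origin

def dose_at_point(point, shells, depth, dose, n_dirs=1000, rng=None):
    """Average dose over isotropically sampled directions.

    shells : list of (triangles, areal_density) pairs; triangles is an
             (n, 3, 3) array. Every triangle a ray crosses adds its
             shell's areal density (g/cm2) to the shielding of that ray.
    depth, dose : tabulated depth-dose curve (areal density -> dose rate)
    """
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for d in dirs:
        shield = sum(density
                     for tris, density in shells
                     for tri in tris
                     if ray_hits_triangle(point, d, tri))
        total += np.interp(shield, depth, dose)
    return total / n_dirs

# Toy usage: one wall triangle of 20 g/cm2 subtending part of the sky
tris = np.array([[[5.0, -50.0, -50.0], [5.0, 50.0, -50.0], [5.0, 0.0, 70.0]]])
depth = np.array([0.0, 10.0, 20.0, 40.0])
dose = np.array([1.0, 0.5, 0.3, 0.15])    # dose rate vs shielding (arb. units)
print(dose_at_point(np.zeros(3), [(tris, 20.0)], depth, dose))
```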

  4. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    DTIC Science & Technology

    2016-05-12

    [Fragmentary record.] ... Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes ... Indexing terms: mathematical statistics; time series; Markov chains; random ...

  5. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information of a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors without common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed from sets of ground control points in 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests were carried out in order to validate the capabilities of the proposed method. PMID:26404315
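    A sketch of how such a lookup table might be applied, assuming each Hlut entry stores a homography estimated at a known working distance: the two entries bracketing a pixel's measured depth are blended, and the blended homography maps the ToF pixel into RGB coordinates. The linear blending of homography matrices is a heuristic stand-in for whatever interpolation the authors use; all names are hypothetical.

```python
import numpy as np

def apply_h(H, xy):
    """Apply a 3x3 homography to a 2D point."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return v[:2] / v[2]

def map_tof_pixel(xy, depth, hlut_depths, hlut_H):
    """Map a ToF pixel to RGB coordinates with a depth-dependent
    homography lookup table.

    hlut_depths : sorted (n,) working distances of the table entries
    hlut_H      : (n, 3, 3) homographies estimated at those distances
    The two entries bracketing `depth` are blended linearly; outside
    the table range the nearest entry is used.
    """
    i = np.searchsorted(hlut_depths, depth)
    if i == 0:
        return apply_h(hlut_H[0], xy)
    if i == len(hlut_depths):
        return apply_h(hlut_H[-1], xy)
    w = (depth - hlut_depths[i - 1]) / (hlut_depths[i] - hlut_depths[i - 1])
    H = (1.0 - w) * hlut_H[i - 1] + w * hlut_H[i]   # simple linear blend
    return apply_h(H, xy)
```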

  6. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating the defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
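    The abstract does not give the multi-power weighting, so the sketch below shows only the plain affine projection algorithm (APA) core on which MAPA builds: at each step the weight vector is updated using the K most recent input vectors jointly rather than one at a time. In the paper's setting the filter is first trained ("regulated") on known profiles and then applied to estimate unknown ones; the parameters here are illustrative.

```python
import numpy as np

def apa_filter(x, d, order=8, proj=4, mu=0.5, delta=1e-4):
    """Affine projection algorithm (APA) adaptive filter.

    x, d  : input signal and desired response (1D arrays)
    order : number of filter taps M
    proj  : projection order K (number of most recent input vectors)
    Returns the filter output y and the final weight vector w.
    """
    n = len(x)
    w = np.zeros(order)
    y = np.zeros(n)
    for k in range(order + proj - 1, n):
        # Matrix of the K most recent input vectors, one per column
        X = np.column_stack([x[k - j - order + 1:k - j + 1][::-1]
                             for j in range(proj)])
        e = d[k - proj + 1:k + 1][::-1] - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj), e)
        y[k] = w @ x[k - order + 1:k + 1][::-1]
    return y, w

# Toy system identification: recover a known 3-tap FIR response
rng = np.random.default_rng(3)
x = rng.normal(size=2000)
d = np.convolve(x, [0.5, -0.3, 0.2])[:len(x)]
y, w = apa_filter(x, d)
print(np.round(w[:3], 2))   # approximately [0.5, -0.3, 0.2]
```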

  8. Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016

    NASA Astrophysics Data System (ADS)

    Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry

    2016-04-01

    Seismic event depth determination is critical for the event screening process at the International Data Center, CTBTO. A thorough determination of the event depth can mostly be conducted only through additional special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have their depth constrained to the surface, making the depth screening criterion inapplicable, and it may result in a heavier workload to manually distinguish between subsurface and deeper crustal events. Since the shape of the first few seconds of the signal of very shallow events is very sensitive to the depth phases, cross-correlation between observed and theoretical seismograms can provide a basis for event depth estimation, and thus an extension of the screening process. We applied this approach mostly to events at teleseismic and partly at regional distances. The approach was found to be efficient for the seismic event screening process, with certain caveats related mostly to poorly defined source and receiver crustal models, which can shift the depth estimate. An adjustable teleseismic attenuation model (t*) was used for the synthetics, since this characteristic is not known for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNEs) with presumably known depths, and the recent DPRK nuclear tests. The teleseismic synthetics are based on the stationary-phase approximation with the hudson96 program, and the regional modelling was done with the generalized ray technique of Vlastislav Cerveny, modified to account for complex source topography. The software prototype is designed to be used for Expert Technical Analysis at the IDC. With this, the design effectively reuses the NDC-in-a-Box code and can be comfortably used by NDC users. The package uses Geotool as a front-end for data retrieval and pre-processing. After the event database is compiled, control is passed to the driver software, which runs the external processing and plotting toolboxes, controls the final stage, and produces the final result. The modules are mostly Python-coded, with C-coded synthetics (Raysynth3D, regional synthetics with complex source topography) and FORTRAN-coded synthetics from the CPS330 software package by Robert Herrmann of Saint Louis University. An extension of this single-station depth determination method is under development and uses joint information from all stations participating in processing. It is based on simultaneous depth and moment tensor determination for both short- and long-period seismic phases. A novel approach recently developed for microseismic event location, utilizing only phase waveform information, was migrated to the global scale. It should provide faster computation, as it does not require intensive synthetic modelling, and might benefit the processing of noisy signals. A consistent depth estimate for all recent nuclear tests was produced for the vast number of IMS stations (primary and auxiliary) used in processing.

  9. Soil Moisture Content Estimation using GPR Reflection Travel Time

    NASA Astrophysics Data System (ADS)

    Lunt, I. A.; Hubbard, S. S.; Rubin, Y.

    2003-12-01

    Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during four data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennas. GPR reflections were associated with a thin, low-permeability clay layer located between 0.8 and 1.3 m below the ground surface that was calibrated with borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at borehole locations were used to calculate an average dielectric constant for the soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 2 percent. We also investigated the estimation of VWC using reflections associated with an advancing wetting front, and found that estimates of average VWC above the front could be obtained with similar accuracy. These results suggest that the two-way travel time to a GPR reflection associated with a geological surface or wetting front can be used under natural conditions to obtain estimates of average water content when borehole control is available. The GPR reflection method therefore has potential for monitoring soil water content over large areas and under variable hydrological conditions.
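    The two conversion steps are compact enough to write out. The sketch below uses the standard two-way travel-time relation for the bulk dielectric constant and, since the site-specific petrophysical relationship is not given in the abstract, substitutes Topp's (1980) general equation for the dielectric-to-VWC step; the pick values are invented.

```python
C_LIGHT = 0.3   # speed of light in vacuum (m/ns)

def dielectric_from_twt(twt_ns, depth_m):
    """Bulk relative permittivity of the soil above a reflector, from the
    two-way travel time (ns) and the reflector depth known at a borehole:
    eps = (c * t / (2 * d))**2."""
    return (C_LIGHT * twt_ns / (2.0 * depth_m)) ** 2

def vwc_topp(eps):
    """Volumetric water content from permittivity via Topp et al. (1980);
    the study used a site-specific relationship instead."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

eps = dielectric_from_twt(twt_ns=22.0, depth_m=1.1)   # illustrative pick
print(f"eps = {eps:.1f}, VWC = {vwc_topp(eps):.3f}")
```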

  10. Estimating steady-state evaporation rates from bare soils under conditions of high water table

    USGS Publications Warehouse

    Ripple, C.D.; Rubin, J.; Van Hylckama, T. E. A.

    1970-01-01

    A procedure that combines meteorological and soil equations of water transfer makes it possible to estimate approximately the steady-state evaporation from bare soils under conditions of high water table. Field data required include soil-water retention curves, water table depth, and a record of air temperature, air humidity, and wind velocity at one elevation. The procedure takes into account the relevant atmospheric factors and the soil's capability to conduct water in liquid and vapor forms. It neglects the effects of thermal transfer (except in the vapor case) and of salt accumulation. Homogeneous as well as layered soils can be treated. Results obtained with the method demonstrate how the soil evaporation rates depend on potential evaporation, water table depth, vapor transfer, and certain soil parameters.

  11. Multispectral guided fluorescence diffuse optical tomography using upconverting nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svenmarker, Pontus (Department of Physics, Umeå University; Centre for Microbial Research)

    2014-02-17

    We report on improved image detectability for fluorescence diffuse optical tomography using upconverting nanoparticles doped with rare-earth elements. Core-shell NaYF₄:Yb³⁺/Er³⁺@NaYF₄ upconverting nanoparticles were synthesized through a stoichiometric method. The Yb³⁺/Er³⁺ sensitizer-activator pair yielded two anti-Stokes-shifted fluorescence emission bands at 540 nm and 660 nm, used here to estimate the fluorescence source depth a priori with sub-millimeter precision. A spatially varying regularization incorporated the a priori fluorescence source depth estimate into the tomography reconstruction scheme. Tissue phantom experiments showed both improved resolution and contrast in the reconstructed images as compared to not using any a priori information.

  12. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  13. Development of method for evaluating estimated inundation area by using river flood analysis based on multiple flood scenarios

    NASA Astrophysics Data System (ADS)

    Ono, T.; Takahashi, T.

    2017-12-01

    Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may lead to an underestimation of the area, because the assumed locations of dike breach in river flood analysis are limited to the cases exceeding the high-water level. The objective of this study is to consider the uncertainty of the estimated inundation area arising from the location of dike breach in the river flood analysis. This study proposes multiple flood scenarios which automatically set multiple locations of dike breach in the river flood analysis. The major premise of adopting this method is that the location of dike breach cannot be predicted correctly. The proposed method utilizes the interval of dike breach, i.e., the distance between dike breaches placed next to each other: multiple locations of dike breach are set at every interval. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, and the leap-frog scheme with a staggered grid was used (see the sketch below). The river flood analysis was verified by application to the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computations for the Akutagawa river showed, by comparing the computed maximum inundation depths of dike breaches placed next to each other, that the proposed method prevents underestimation of the estimated inundation area. Further, analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point identified the optimum interval of dike breach, which can evaluate the maximum inundation area using the minimum number of assumed dike breach locations. In brief, this study found the optimum interval of dike breach in the Akutagawa river, which enables the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to mitigating flood disasters by improving the accuracy of estimated inundation areas.
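    For orientation, the following is a minimal 1D linearized sketch of a leap-frog update on a staggered grid, the scheme named above: surface elevation lives at cell centers, velocity at cell faces, and each is advanced from the other's latest values. A real river flood model is 2D, nonlinear, and includes friction, wetting/drying, and breach inflow terms; all values here are illustrative.

```python
import numpy as np

G, H0 = 9.81, 2.0          # gravity (m/s2), reference depth (m)
NX, DX = 200, 10.0         # grid cells, cell size (m)
DT = 0.5 * DX / np.sqrt(G * H0)   # CFL-limited time step

# Staggered grid: eta (surface elevation) at cell centers, u at faces
eta = np.exp(-((np.arange(NX) - 100) * DX / 100.0) ** 2)  # initial hump
u = np.zeros(NX + 1)       # closed boundaries: u[0] = u[-1] = 0

for _ in range(400):
    # Leap-frog on a staggered grid: update u from eta, then eta from u
    u[1:-1] -= DT * G * (eta[1:] - eta[:-1]) / DX
    eta -= DT * H0 * (u[1:] - u[:-1]) / DX

print(f"max surface elevation after routing: {eta.max():.3f} m")
```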

  14. Effects of unsaturated zone on ground-water mounding

    USGS Publications Warehouse

    Sumner, D.M.; Rolston, D.E.; Marino, M.A.

    1999-01-01

    The design of infiltration basins used to dispose of treated wastewater or for aquifer recharge often requires estimation of ground-water mounding beneath the basin. However, the effect that the unsaturated zone has on the water-table response to basin infiltration often has been overlooked in this estimation. A comparison was made between two methods used to estimate ground-water mounding: an analytical approach that is limited to the saturated zone and a numerical approach that incorporates both the saturated and the unsaturated zones. Results indicate that the error introduced by a method that ignores the effects of the unsaturated zone on ground-water mounding increases as the basin-loading period is shortened, as the depth to the water table increases, with increasing subsurface anisotropy, and with the inclusion of fine-textured strata. Additionally, such a method cannot accommodate the dynamic nature of basin infiltration, the finite transmission time of the infiltration front to the water table, or the interception of the basin floor by the capillary fringe.

  15. State and parameter estimation of two land surface models using the ensemble Kalman filter and the particle filter

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Hendricks Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry

    2017-09-01

    Land surface models (LSMs) use a large cohort of parameters and state variables to simulate the water and energy balance at the soil-atmosphere interface. Many of these model parameters cannot be measured directly in the field, and require calibration against measured fluxes of carbon dioxide, sensible and/or latent heat, and/or observations of the thermal and/or moisture state of the soil. Here, we evaluate the usefulness and applicability of four different data assimilation methods for joint parameter and state estimation of the Variable Infiltration Capacity model (VIC-3L) and the Community Land Model (CLM), using a 5-month calibration (assimilation) period (March-July 2012) of areal-averaged SPADE soil moisture measurements at 5, 20, and 50 cm depths at the Rollesbroich experimental test site in the Eifel mountain range in western Germany. We used the EnKF with state augmentation or dual estimation, and the residual resampling PF with either a simple (statistically deficient) or a more sophisticated MCMC-based parameter resampling method. The performance of the calibrated LSMs was investigated using SPADE water content measurements from a 5-month evaluation period (August-December 2012). As expected, all DA methods enhance the ability of the VIC and CLM models to describe spatiotemporal patterns of moisture storage within the vadose zone of the Rollesbroich site, particularly if the maximum baseflow velocity (VIC) or the fractions of sand, clay, and organic matter of each layer (CLM) are estimated jointly with the model states of each soil layer. The differences between the soil moisture simulations of VIC-3L and CLM are much larger than the discrepancies among the four data assimilation methods. The EnKF with state augmentation or dual estimation yields the best performance of VIC-3L and CLM during the calibration and evaluation periods, yet the results are in close agreement with those of the PF using MCMC resampling. Overall, CLM demonstrated the best performance for the Rollesbroich site. The large systematic underestimation of water storage at 50 cm depth by VIC-3L during the first few months of the evaluation period questions, in part, the validity of its fixed water table depth at the bottom of the modeled soil domain.
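    The joint state-parameter estimation referred to above is commonly implemented by state augmentation: the parameter vector is appended to the state vector, and the standard EnKF analysis updates both through their sampled cross-covariances with the observed states. The sketch below is a generic perturbed-observation EnKF step under that construction, not the specific configuration of the paper.

```python
import numpy as np

def enkf_augmented(X, P, y, H, r_var, rng=None):
    """One EnKF analysis step with state augmentation.

    X : (n_state, n_ens) ensemble of model states (e.g. soil moisture)
    P : (n_par, n_ens)   ensemble of uncertain parameters
    y : (n_obs,)         observations (e.g. SPADE soil moisture)
    H : (n_obs, n_state) observation operator on the state part only
    Parameters are updated through their sampled covariance with the
    observed states; in practice that covariance arises because the
    model forecast propagates parameter perturbations into the states.
    """
    rng = np.random.default_rng(rng)
    Z = np.vstack([X, P])                       # augmented ensemble
    n_obs, n_ens = len(y), Z.shape[1]
    Ha = np.hstack([H, np.zeros((n_obs, P.shape[0]))])
    A = Z - Z.mean(axis=1, keepdims=True)       # ensemble anomalies
    S = Ha @ A                                  # observed anomalies
    K = A @ S.T @ np.linalg.inv(S @ S.T + (n_ens - 1) * r_var * np.eye(n_obs))
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r_var), (n_obs, n_ens))
    Z = Z + K @ (Y - Ha @ Z)                    # perturbed-obs update
    return Z[:X.shape[0]], Z[X.shape[0]:]

# Minimal usage: 3 observed soil moisture states, 2 augmented parameters
rng = np.random.default_rng(4)
X = rng.normal(0.30, 0.05, (3, 64))
P = rng.normal(1.0, 0.3, (2, 64))
X, P = enkf_augmented(X, P, np.array([0.25, 0.28, 0.33]), np.eye(3),
                      r_var=0.02**2)
```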

  16. Detection of underground voids in Ohio by use of geophysical methods

    USGS Publications Warehouse

    Munk, Jens; Sheets, R.A.

    1997-01-01

    Geophysical methods are generally classified as electrical, potential field, and seismic methods. Each method type relies on contrasts of physical properties in the subsurface. Forward models based on the physical properties of air- and water-filled voids within common geologic materials indicate that several geophysical methods are technically feasible for the detection of subsurface voids in Ohio, but ease of use and interpretation varies widely among the methods. Ground-penetrating radar is the most rapid and cost-effective method for collecting subsurface data in areas associated with voids under roadways. Electrical resistivity, gravity, or seismic reflection methods have applications for direct delineation of voids, but data-collection and analytical procedures are more time consuming. Electrical resistivity, electromagnetic, or magnetic methods may be useful in locating areas where conductive material, such as rail lines, is present in abandoned underground coal mines. Other electrical methods include spontaneous potential and very low frequency (VLF); these latter two methods are considered unlikely candidates for locating underground voids in Ohio. Results of ground-penetrating radar surveys at three highway sites indicate that subsurface penetration varies widely with geologic material type and amount of cultural interference. Two highway sites were chosen over abandoned underground coal mines in eastern Ohio. A third site in western Ohio was chosen in an area known to be underlain by naturally occurring voids in limestone. Ground-penetrating radar surveys at Interstate 470, in Belmont County, Ohio, indicate subsurface penetration of less than 15 feet over a mined coal seam that was known to vary in depth from 0 to 40 feet. Although no direct observations of voids were made, anomalous areas that may be related to collapse structures above voids were indicated. Cultural interference dominated the radar records at Interstate 70, Guernsey County, Ohio, where coal was mined under the site at a depth of about 50 feet. Interference from overhead powerlines, the field vehicle, and guardrails complicated interpretation of the radar records, where the depth of penetration was estimated to be less than 5 feet. Along State Route 33, in Logan County, Ohio, bedding planes and structures possibly associated with dissolution of limestone were profiled with ground-penetrating radar. Depth of penetration was estimated to be greater than 50 feet.

  17. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate the data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can approximate the inverse Hessian indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. We then validate the approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two comparison gradient methods. It may be significant for imaging complex subsurface structures.
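    The essence of the scheme is to minimize the data misfit ||Lm - d||² with L-BFGS, where applying Lᵀ to the residual is exactly the migration of step (2). The sketch below stands in a random matrix for the demigration operator and uses SciPy's L-BFGS-B with a 10-pair memory, mirroring the m < 10 mentioned above; everything is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_data, n_model = 300, 120
L = rng.normal(size=(n_data, n_model))     # stand-in demigration operator
m_true = np.zeros(n_model)
m_true[[20, 60, 95]] = [1.0, -0.7, 0.5]    # sparse reflectivity spikes
d = L @ m_true + 0.05 * rng.normal(size=n_data)

def misfit(m):
    r = L @ m - d                # data residual (demigrated - observed)
    grad = L.T @ r               # migrating the residual gives the gradient
    return 0.5 * r @ r, grad

res = minimize(misfit, np.zeros(n_model), jac=True,
               method="L-BFGS-B", options={"maxcor": 10})  # keep <= 10 pairs
print("relative model error:",
      np.linalg.norm(res.x - m_true) / np.linalg.norm(m_true))
```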

  18. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    USGS Publications Warehouse

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increases geometrically with depth, which biased the mean-resistivity values toward the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method, which computes a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model and thereby accounts for homogeneous confining units. The minimum-adjusted method is also developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
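    The contrast between the mean method and the minimum-unadjusted method reduces to a one-line difference per resistivity column, as the toy example below illustrates with an invented sandy section over a thin clay-rich confining unit.

```python
import numpy as np

def leakage_potential(res_column):
    """Compare the mean method and the minimum-unadjusted method on one
    vertical column of inverse-model resistivity values (ohm-m).

    The mean method averages the column; the minimum-unadjusted method
    lets the least permeable (lowest-resistivity, most clay-rich) layer
    control the leakage-potential estimate.
    """
    return np.mean(res_column), np.min(res_column)

# Sandy section over a thin conductive (clay-rich) confining unit
column = np.array([120.0, 95.0, 80.0, 12.0, 110.0])
mean_r, min_r = leakage_potential(column)
print(f"mean method: {mean_r:.0f} ohm-m, minimum method: {min_r:.0f} ohm-m")
```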

  19. Characterization of highly multiplexed monolithic PET / gamma camera detector modules

    NASA Astrophysics Data System (ADS)

    Pierce, L. A.; Pedemonte, S.; DeWitt, D.; MacDonald, L.; Hunter, W. C. J.; Van Leemput, K.; Miyaoka, R.

    2018-04-01

    PET detectors use signal multiplexing to reduce the total number of electronics channels needed to cover a given area. Using measured thin-beam calibration data, we tested a principal-component-based multiplexing scheme for scintillation detectors. The highly multiplexed detector signal is no longer amenable to standard calibration methodologies. In this study we report results from a prototype multiplexing circuit and present a new method for calibrating the detector module with multiplexed data. A 50 × 50 × 10 mm³ LYSO scintillation crystal was affixed to a position-sensitive photomultiplier tube with 8 × 8 position outputs and one channel that is the sum of the other 64. The 65-channel signal was multiplexed in a resistive circuit, with 65:5 or 65:7 multiplexing. A 0.9 mm beam of 511 keV photons was scanned across the face of the crystal in a 1.52 mm grid pattern in order to characterize the detector response. New methods are developed to reject scattered events and perform depth estimation to characterize the detector response in the calibration data. Photon interaction position estimation on the testing data was performed using a Gaussian maximum-likelihood estimator, and the resolution and scatter-rejection capabilities of the detector were analyzed. We found that a 7-channel multiplexing scheme (65:7 compression ratio) with 1.67 mm depth bins had the best performance, with a beam contour of 1.2 mm FWHM (from the 0.9 mm beam) near the center of the crystal and 1.9 mm FWHM near the edge of the crystal. The positioned events followed the expected Beer–Lambert depth distribution. The proposed calibration and positioning method exhibited a scattered-photon rejection rate that was a 55% improvement over the summed-signal energy-windowing method.

  20. Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists

    NASA Astrophysics Data System (ADS)

    Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.

    2017-12-01

    Modeling seasonal snow pack in alpine environments involves many challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments with extreme gradients in topography, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalent (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application with which participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over a span of four months in the 2017 water year. The snow depth observations are converted into SWE by an empirically based bulk-density and SWE estimation method. We then assimilate these data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output with the observations. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. Model output uncertainties are presented, and the pre- and post-assimilation divergence in modeled SWE is quantified. Early results reveal that pre-assimilation SWE estimates are consistently greater than post-assimilation estimates, and the magnitude of the divergence increases throughout the evolution of the snow pack. This research has implications beyond the Alaskan context because it increases our ability to constrain snow modeling outputs by making use of snow measurements collected by non-expert citizen scientists.

  1. Method for estimating optimal spectral and energy parameters of laser irradiation in photodynamic therapy of biological tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisenko, S A; Kugeiko, M M

    We have solved the problem of layer-by-layer laser-light dosimetry in biological tissues and of selecting an individual therapeutic dose in laser therapy. A method is proposed for real-time in vivo monitoring of the radiation density in tissue layers, of the concentrations of its endogenous (natural) and exogenous (specially administered) chromophores, and of the in-depth distribution of the spectrum of light action on these chromophores. As the background information, use is made of the spectrum of diffuse light reflected from the patient's tissue, measured by a fibre-optic spectrophotometer. The measured spectrum is quantitatively analysed by the method of approximating functions for fluxes of light multiply scattered in tissue and by a semi-analytical method for calculating the in-depth distribution of the light flux in a multi-layered medium. We have shown the possibility of employing the developed method for monitoring photosensitizer and oxyhaemoglobin concentrations in tissue, the light power absorbed by chromophores in tissue layers at different depths, and laser-induced changes in the tissue morphology (vascular volume content and ratios of various forms of haemoglobin) during photodynamic therapy. (biophotonics)

  2. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  3. Estimation of potential scour at bridges on local government roads in South Dakota, 2009-12

    USGS Publications Warehouse

    Thompson, Ryan F.; Wattier, Chelsea M.; Liggett, Richard R.; Truax, Ryan A.

    2014-01-01

    In 2009, the U.S. Geological Survey and South Dakota Department of Transportation (SDDOT) began a study to estimate potential scour at selected bridges on local government (county, township, and municipal) roads in South Dakota. A rapid scour-estimation method (level-1.5) and a more detailed method (level-2) were used to develop estimates of contraction, abutment, and pier scour. Data from 41 level-2 analyses completed for this study were combined with data from level-2 analyses completed in previous studies to develop new South Dakota-specific regression equations: four regional equations for main-channel velocity at the bridge contraction to account for the widely varying stream conditions within South Dakota, and one equation for head change. Velocity data from streamgages also were used in the regression for average velocity through the bridge contraction. Using these new regression equations, scour analyses were completed using the level-1.5 method on 361 bridges on local government roads. Typically, level-1.5 analyses are completed at flows estimated to have annual exceedance probabilities of 1 percent (100-year flood) and 0.2 percent (500-year flood); however, at some sites the bridge would not pass these flows. A level-1.5 analysis was then completed at the flow expected to produce the maximum scour. Data presented for level-1.5 scour analyses at the 361 bridges include contraction, abutment, and pier scour. Estimates of potential contraction scour ranged from 0 to 32.5 feet for the various flows evaluated. Estimated potential abutment scour ranged from 0 to 40.9 feet for left abutments, and from 0 to 37.7 feet for right abutments. Pier scour values ranged from 2.7 to 31.6 feet. The scour depth estimates provided in this report can be used by the SDDOT to compare with foundation depths at each bridge to determine if abutments or piers are at risk of being undermined by scour at the flows evaluated. Replicate analyses were completed at 24 of the 361 bridges to provide quality-assurance/quality-control measures for the level-1.5 scour estimates. An attempt was made to use the same flows among replicate analyses. Scour estimates do not necessarily have to be in numerical agreement to give the same results. For example, if contraction scour replicate analyses are 18.8 and 30.8 feet, both scour depths can indicate susceptibility to scour for which countermeasures may be needed, even though one number is much greater than the other number. Contraction scour has perhaps the greatest potential for being estimated differently in replicate visits. For contraction scour estimates at the various flows analyzed, differences between results ranged from -7.8 to 5.5 feet, with a median difference of 0.4 foot and an average difference of 0.2 foot. Abutment scour appeared to be nearly as reproducible as contraction scour. For abutment scour estimates at the varying flows analyzed, differences between results ranged from -17.4 to 11 feet, with a median difference of 1.4 feet and an average difference of 1.7 feet. Estimates of pier scour tended to be the most consistently reproduced in replicate visits, with differences between results ranging from -0.3 to 0.5 foot, with a median difference of 0.0 foot and an average difference of 0.0 foot. The U.S. Army Corps of Engineers Hydrologic Engineering Center River Analysis System (HEC-RAS) software package was used to model stream hydraulics at the 41 sites with level-2 analyses.
Level-1.5 analyses also were completed at these sites, and the performance of the level-1.5 method was assessed by comparing results to those from the more rigorous level-2 method. The envelope curve approach used in the level-1.5 method is designed to overestimate scour relative to the estimate from the level-2 scour analysis. In cases where the level-1.5 method estimated less scour than the level-2 method, the amount of underestimation generally was less than 3 feet. The level-1.5 method generally overestimated contraction, abutment, and pier scour relative to the level-2 method, as intended. Although the level-1.5 method is designed to overestimate scour relative to more involved analysis methods, many assumptions, uncertainties, and estimations are involved. If the envelope curves are adjusted such that the level-1.5 method never underestimates scour relative to the level-2 method, an accompanying result may be excessive overestimation.

  4. Estimation of groundwater recharge via deuterium labelling in the semi-arid Cuvelai-Etosha Basin, Namibia.

    PubMed

    Beyer, Matthias; Gaj, Marcel; Hamutoko, Josefina Tulimeveva; Koeniger, Paul; Wanke, Heike; Himmelsbach, Thomas

    2015-01-01

    The stable water isotope deuterium (²H) was applied as an artificial tracer (²H₂O) in order to estimate groundwater recharge through the unsaturated zone and describe soil water movement in a semi-arid region of northern central Namibia. A particular focus of this study was to assess the spatiotemporal persistence of the tracer when applied in the field on a small scale under extreme climatic conditions and to propose a method to obtain estimates of recharge in data-scarce regions. At two natural sites that differ in vegetation cover, soil, and geology, 500 ml of a 70% ²H₂O solution was irrigated onto water-saturated plots. The displacement of the ²H peak was analyzed 1 and 10 days after an artificial rain event of 20 mm as well as after the rainy season. Results show that it is possible to apply the peak displacement method for the estimation of groundwater recharge rates in semi-arid environments via deuterium labelling. Potential recharge for the rainy season 2013/2014 was calculated as 45 mm a⁻¹ at 5.6 m depth and 40 mm a⁻¹ at 0.9 m depth at the two studied sites, respectively. Under saturated conditions, the artificial rain events moved 2.1 and 0.5 m downwards, respectively. The tracer at the deep sand site (site 1) was found after the rainy season at 5.6 m depth, corresponding to a displacement of 3.2 m. This corresponds to an average travel velocity of 2.8 cm d⁻¹ during the rainy season at the first site. At the second location, the tracer peak was discovered at 0.9 m depth; the displacement was found to be only 0.4 m, equalling an average movement of 0.2 cm d⁻¹ through the unsaturated zone due to an underlying calcrete formation. Tracer recovery after one rainy season was found to be as low as 3.6% at site 1 and 1.9% at site 2. With an in situ measuring technique, a three-dimensional distribution of ²H after the rainy season could be measured and visualized. This study comprises the first application of the peak displacement method using a deuterium labelling technique for the estimation of groundwater recharge in semi-arid regions. Deuterium proved to be a suitable tracer for studies within the soil-vegetation-atmosphere interface. The results of this study are relevant for the design of labelling experiments in the unsaturated zone of dry areas using ²H₂O as a tracer and for obtaining estimates of groundwater recharge on a local scale. The presented methodology is particularly beneficial in data-scarce environments, where recharge pathways and mechanisms are poorly understood.
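    The peak displacement method itself is a one-line computation: recharge equals the tracer peak displacement times the mean volumetric water content of the traversed profile. In the sketch below, the water content is back-calculated from the site-1 numbers purely for illustration; it is not a value reported in the abstract.

```python
def recharge_peak_displacement(dz_m, theta_mean):
    """Seasonal groundwater recharge (mm) by the peak displacement method:
    R = tracer peak displacement * mean volumetric water content."""
    return dz_m * theta_mean * 1000.0   # m -> mm

# Site-1-like numbers: 3.2 m displacement over one rainy season; a mean
# water content of ~0.014 m3/m3 (illustrative) reproduces ~45 mm
print(f"{recharge_peak_displacement(3.2, 0.014):.0f} mm per season")
```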

  5. Validation of Pooled Whole-Genome Re-Sequencing in Arabidopsis lyrata.

    PubMed

    Fracassetti, Marco; Griffin, Philippa C; Willi, Yvonne

    2015-01-01

    Sequencing pooled DNA of multiple individuals from a population instead of sequencing individuals separately has become popular due to its cost-effectiveness and simple wet-lab protocol, although some criticism of this approach remains. Here we validated a protocol for pooled whole-genome re-sequencing (Pool-seq) of Arabidopsis lyrata libraries prepared with low amounts of DNA (1.6 ng per individual). The validation was based on comparing single nucleotide polymorphism (SNP) frequencies obtained by pooling with those obtained by individual-based Genotyping By Sequencing (GBS). Furthermore, we investigated the effect of sample number, sequencing depth per individual and variant caller on population SNP frequency estimates. For Pool-seq data, we compared frequency estimates from two SNP callers, VarScan and Snape; the former employs a frequentist SNP calling approach while the latter uses a Bayesian approach. Results revealed concordance correlation coefficients well above 0.8, confirming that Pool-seq is a valid method for acquiring population-level SNP frequency data. Higher accuracy was achieved by pooling more samples (25 compared to 14) and working with higher sequencing depth (4.1× per individual compared to 1.4× per individual), which increased the concordance correlation coefficient to 0.955. The Bayesian-based SNP caller produced somewhat higher concordance correlation coefficients, particularly at low sequencing depth. We recommend pooling at least 25 individuals combined with sequencing at a depth of 100× to produce satisfactory frequency estimates for common SNPs (minor allele frequency above 0.05).
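
    Concordance between pooled and individual-based frequency estimates can be quantified with Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offsets. A minimal sketch (NumPy; the allele-frequency arrays are placeholders, not the study's data):

        import numpy as np

        def lin_ccc(x, y):
            """Lin's concordance correlation coefficient between two
            estimates of the same quantity (here, population SNP frequencies)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()                 # population variances
            cov = ((x - mx) * (y - my)).mean()
            return 2.0 * cov / (vx + vy + (mx - my) ** 2)

        pool_freq = np.array([0.12, 0.40, 0.55, 0.08, 0.91])  # Pool-seq estimates
        gbs_freq  = np.array([0.10, 0.42, 0.50, 0.11, 0.88])  # individual GBS estimates
        print(lin_ccc(pool_freq, gbs_freq))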

  6. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

Wavelet estimation and seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. For both methods, our algorithms introduce a semi-automatic approach that determines the optimum estimation parameters and then selects the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace, given an accurate time-depth relationship. Tests with numerical data yield qualitative conclusions from a detailed comparison of deterministic and statistical wavelet estimation that should be useful for seismic inversion and for the interpretation of field data. The feasibility of the approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.
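
    The selection criterion is easy to state: convolve the well-derived reflectivity with each candidate wavelet and keep the wavelet whose synthetic best correlates with the recorded trace. A minimal sketch of that scan (NumPy; the reflectivity, trace, and candidate wavelets are placeholders, not the authors' data or algorithm):

        import numpy as np

        def tie_correlation(reflectivity, wavelet, trace):
            """Pearson correlation between the recorded trace and the
            synthetic built with the convolutional model."""
            synthetic = np.convolve(reflectivity, wavelet, mode="same")
            return np.corrcoef(synthetic, trace)[0, 1]

        def best_wavelet(reflectivity, candidates, trace):
            """Return the candidate wavelet with the highest trace correlation."""
            scores = [tie_correlation(reflectivity, w, trace) for w in candidates]
            return candidates[int(np.argmax(scores))], max(scores)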

  7. Wavelet analysis of poorly-focused ultrasonic signal of pressure tube inspection in nuclear industry

    NASA Astrophysics Data System (ADS)

    Zhao, Huan; Gachagan, Anthony; Dobie, Gordon; Lardner, Timothy

    2018-04-01

Pressure tube fabrication and installation challenges, combined with natural sagging over time, can produce probe alignment issues during pressure tube inspection of the primary circuit of CANDU reactors. The ability to extract accurate defect depth information from poorly focused ultrasonic signals would reduce the need for additional inspection procedures, leading to significant time and cost savings. Currently, the defect depth measurement protocol simply calculates the time difference between the peaks of the echo signals from the tube surface and from the defect, using a single-element probe focused at the back-wall depth. When alignment issues are present, incorrect focusing produces interference within the returning echo signal. This paper proposes a novel wavelet analysis method that employs the Haar wavelet to decompose the original poorly focused A-scan signal and reconstruct detailed information from a selected high-frequency component range within the bandwidth of the transducer. Compared to the original signal, the wavelet analysis method provides additional characteristic defect information and an improved estimate of defect depth, with errors below 5%.
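
    The decomposition step can be reproduced with any discrete-wavelet library. A minimal sketch using PyWavelets (the decomposition level and the choice of which detail bands to keep are assumptions for illustration, not the paper's settings):

        import numpy as np
        import pywt

        def haar_highband_reconstruction(a_scan, keep_levels=(1, 2), max_level=4):
            """Decompose an A-scan with the Haar wavelet and reconstruct it
            from selected high-frequency detail bands only.
            keep_levels: 1 = finest detail band, 2 = next finest, ..."""
            coeffs = pywt.wavedec(a_scan, "haar", level=max_level)
            # coeffs = [cA_max, cD_max, ..., cD2, cD1]; zero the bands we drop
            kept = [np.zeros_like(c) for c in coeffs]
            for lvl in keep_levels:
                kept[-lvl] = coeffs[-lvl]
            return pywt.waverec(kept, "haar")[: len(a_scan)]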

  8. Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao

    2017-11-01

Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the safe operation of lithium-ion batteries. To improve the estimation accuracy of these state variables, a precise battery model must be established. Because the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online method for estimating lithium-ion battery model parameters based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, and its parameters are estimated online. Experimental results show that the presented method accurately tracks parameter variation across different scenarios.
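
    The first-order RC model underlying the filter has a simple discrete-time form: the polarization voltage across the RC branch relaxes with time constant R1·C1, and the terminal voltage is the open-circuit voltage minus the ohmic and polarization drops. A minimal sketch of that model (parameter values are placeholders; sign convention: positive current = discharge):

        import numpy as np

        def rc_model_step(v1, i, dt, r0, r1, c1, ocv):
            """One step of the first-order RC equivalent circuit.
            v1: polarization voltage over the RC branch (V)
            i: current (A, discharge positive); dt: time step (s)
            Returns (next polarization voltage, current terminal voltage)."""
            a = np.exp(-dt / (r1 * c1))
            v1_next = a * v1 + r1 * (1.0 - a) * i
            v_term = ocv - r0 * i - v1
            return v1_next, v_term

        # Illustrative parameters only
        v1, vt = rc_model_step(v1=0.0, i=2.0, dt=1.0,
                               r0=0.05, r1=0.03, c1=2000.0, ocv=3.7)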

  9. Benchmark calculations with correlated molecular wavefunctions. XIII. Potential energy curves for He2, Ne2 and Ar2 using correlation consistent basis sets through augmented sextuple zeta

    NASA Astrophysics Data System (ADS)

    van Mourik, Tanja

    1999-02-01

The potential energy curves of the rare gas dimers He2, Ne2, and Ar2 have been computed using correlation consistent basis sets ranging from singly augmented aug-cc-pVDZ sets through triply augmented t-aug-cc-pV6Z sets, with the augmented sextuple basis sets being reported herein. Several methods for including electron correlation were investigated, namely Møller-Plesset perturbation theory (MP2, MP3 and MP4) and coupled cluster theory [CCSD and CCSD(T)]. For He2, CCSD(T)/d-aug-cc-pV6Z calculations yield a well depth of 7.35 cm⁻¹ (10.58 K), with an estimated complete basis set (CBS) limit of 7.40 cm⁻¹ (10.65 K). The latter is smaller than the 'exact' well depth (Aziz, R. A., Janzen, A. R., and Moldover, M. R., 1995, Phys. Rev. Lett., 74, 1586) by about 0.2 cm⁻¹ (0.35 K). The Ne2 well depth, computed with the CCSD(T)/d-aug-cc-pV6Z method, is 28.31 cm⁻¹ and the estimated CBS limit is 28.4 cm⁻¹, approximately 1 cm⁻¹ smaller than the empirical potential of Aziz, R. A., and Slaman, M. J., 1989, Chem. Phys., 130, 187. Inclusion of core and core-valence correlation effects has a negligible effect on the Ne2 well depth, decreasing it by only 0.04 cm⁻¹. For Ar2, CCSD(T)/d-aug-cc-pV6Z calculations yield a well depth of 96.2 cm⁻¹. The corresponding HFDID potential of Aziz, R. A., 1993, J. chem. Phys., 99, 4518 predicts a well depth of 99.7 cm⁻¹. Inclusion of core and core-valence effects in Ar2 increases the well depth and decreases the discrepancy by approximately 1 cm⁻¹.

  10. A comparison of hydrographically and optically derived mixed layer depths

    USGS Publications Warehouse

    Zawada, D.G.; Zaneveld, J.R.V.; Boss, E.; Gardner, W.D.; Richardson, M.J.; Mishonov, A.V.

    2005-01-01

Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988-1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found an approximately 70% agreement between the best hydrographical-optical algorithm pairings. Copyright 2005 by the American Geophysical Union.
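
    The gradient-maximum criterion used for both the density and c660 profiles reduces to a one-liner once the profile is on a depth grid. A minimal sketch (the synthetic pycnocline profile is a placeholder):

        import numpy as np

        def mld_from_gradient(depth, profile):
            """Mixed layer depth as the depth of the maximum vertical
            gradient of a profile (density, temperature, or c660)."""
            grad = np.abs(np.gradient(profile, depth))
            return depth[int(np.argmax(grad))]

        depth = np.arange(0.0, 200.0, 2.0)                           # m
        sigma = 24.0 + 1.5 / (1.0 + np.exp(-(depth - 60.0) / 5.0))   # synthetic pycnocline
        print(mld_from_gradient(depth, sigma))                       # -> ~60 m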

  11. [Habitat suitability index model and minimum habitat area estimation of young Procypris rabaudi (Tchang): a simulation experiment in laboratory].

    PubMed

    Feng, Xian-Bin; Zhu, Yong-Jiu; Li, Xi; He, Yong-Feng; Zhao, Jian-Hua; Yang, De-Guo

    2013-01-01

Under simulated micro-habitat conditions in the laboratory, and using experimental ecological methods, this paper evaluated the habitat suitability index (HSI) of young Procypris rabaudi with respect to habitat factors (substrate, light intensity and water depth). Habitat suitability models for the young P. rabaudi were established, and the minimum habitat area of the young P. rabaudi was estimated. The young P. rabaudi preferred habitats with gravel diameter from 10 to 15 cm, light intensity from 0.2 to 1.8 lx, and water depth from 0 to 15 cm (distance from the bottom of the tank). The three suitability index models for substrate, light intensity and water depth were SI_S = 1.7338·e^(-0.997x) (where SI_S is the substrate suitability index and x is the gravel diameter; R² = 0.89, P < 0.01), SI_L = 3.0121·e^(-1.339x) (where SI_L is the light-intensity suitability index and x is the light intensity; R² = 0.93, P < 0.01), and SI_W = 2.4055·e^(-1.245x) (where SI_W is the water-depth suitability index and x is the water depth; R² = 0.97, P < 0.01), respectively. The arithmetic mean model HSI = (SI_S + SI_L + SI_W)/3 was the most suitable for estimating the habitat suitability of young P. rabaudi. A total of seven groups of young P. rabaudi established and maintained a relatively stable habitat area, ranging from 628 to 2015 cm², with an average of 1114 cm².
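
    Combining the three fitted curves into the arithmetic-mean HSI is straightforward. A minimal sketch (the units of x in each curve are not stated in the abstract, so arguments must match the units used in the fits; clipping each SI to [0, 1] is an added assumption, since the fitted exponentials exceed 1 as x approaches 0):

        import numpy as np

        def hsi(x_substrate, x_light, x_depth):
            """Arithmetic-mean habitat suitability from the three fitted
            suitability-index curves reported in the study."""
            si_s = np.clip(1.7338 * np.exp(-0.997 * x_substrate), 0.0, 1.0)
            si_l = np.clip(3.0121 * np.exp(-1.339 * x_light), 0.0, 1.0)
            si_w = np.clip(2.4055 * np.exp(-1.245 * x_depth), 0.0, 1.0)
            return (si_s + si_l + si_w) / 3.0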

  12. Hydrogeological bedrock inferred from electrical resistivity model in Taichung Basin, Taiwan

    NASA Astrophysics Data System (ADS)

    Chiang, C. W.; Chang, P. Y.; Chang, L. C.

    2015-12-01

The four-year project on groundwater hydrogeology and recharge modelling was commissioned by the Central Geological Survey, MOEA, Taiwan (R.O.C.) to evaluate groundwater recharge areas in Taiwan, including the Taipei and Taichung Basins and the Lanyang and Chianan Plains. Groundwater recharge models for the Lanyang Plain and Taipei Basin were completed in the first two years (2013-2014). The third year of the project integrates geophysical, geochemical, and hydrogeological models to estimate the groundwater recharge model for the Taichung Basin region. The Taichung Basin is mainly covered by thick pre-Pleistocene gravel, sandy and muddy sedimentary rocks within a joint alluvial fan, and the depth of the hydrological bedrock remains uncertain. Two electrical resistivity geophysical tools were employed, direct current resistivity and audio-magnetotelluric (AMT) exploration, which together provide depth resolution from the shallow subsurface downwards for evaluating groundwater resources. The study carried out 21 AMT soundings in the southern Taichung Basin in order to delineate the hydrological bedrock in the region. Each AMT station was deployed for about 24 hours and processed with the remote reference technique to reduce cultural noise. Data quality at most stations was acceptable; two stations in the southwestern basin were excluded due to near-field source effects. The best depth resolution of the model extends to 500 meters. The preliminary result shows that the depth to bedrock inferred from the AMT model gradually changes from ~20 m in the south to ~400 m in the central basin, and from ~20 m in the east to ~180 m in the western basin. The investigation shows that the AMT method can be a useful geophysical tool for improving groundwater recharge model estimation in regions without dense well logging.

  13. Hydrogeologic and hydraulic characterization of aquifer and nonaquifer layers in a lateritic terrain (West Bengal, India)

    NASA Astrophysics Data System (ADS)

    Biswal, Sabinaya; Jha, Madan K.; Sharma, Shashi P.

    2018-02-01

The hydrogeologic and hydraulic characteristics of a lateritic terrain in West Bengal, India, were investigated. Test drilling was conducted at ten sites and grain-size distribution curves (GSDCs) were prepared for 275 geologic samples. Performance evaluation of eight grain-size-analysis (GSA) methods was carried out to estimate the hydraulic conductivity (K) of subsurface formations. Finally, the GSA results were validated against pumping-test data. The GSDCs indicated that shallow aquifer layers are coarser than the deeper aquifer layers (uniformity coefficient 0.19-11.4). Stratigraphy analysis revealed that both shallow and deep aquifers of varying thickness exist at depths of 9-40 and 40-79 m, respectively. The mean K estimates by the GSA methods are 3.62-292.86 m/day for shallow aquifer layers and 0.97-209.93 m/day for the deeper aquifer layers, suggesting significant aquifer heterogeneity. Pumping-test data indicated that the deeper aquifers are leaky confined with transmissivity 122.69-693.79 m²/day, storage coefficient 1.01 × 10⁻⁷ to 2.13 × 10⁻⁴ and leakance 2.01 × 10⁻⁷ to 34.56 × 10⁻² day⁻¹. Although the K values yielded by the GSA methods are generally larger than those obtained from the pumping tests, the Slichter, Harleman and US Bureau of Reclamation (USBR) GSA methods yielded reasonable values at most of the sites (1-3 times higher than K estimates by the pumping-test method). In conclusion, more reliable aquifers exist at greater depths and can be tapped for dependable water supply. GSA methods such as Slichter, Harleman and USBR can be used for the preliminary assessment of K in lateritic terrains in the absence of reliable field methods.
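
    GSA methods of this family all estimate K from an effective grain diameter read off the GSDC. As a minimal illustration, a sketch of the classical Hazen approximation (not one of the study's calibrated methods; the coefficient C ≈ 100 for K in cm/s with d10 in cm is a textbook value valid roughly for clean sands):

        def hazen_k_m_per_day(d10_mm, c=100.0):
            """Hazen approximation: K = C * d10^2, with K in cm/s and d10 in cm.
            Input d10 in mm; output converted to m/day."""
            d10_cm = d10_mm / 10.0
            k_cm_s = c * d10_cm ** 2
            return k_cm_s * 864.0   # cm/s -> m/day (0.01 m/cm * 86400 s/day)

        print(hazen_k_m_per_day(0.2))  # d10 = 0.2 mm -> ~35 m/day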

  14. Global root zone storage capacity from satellite-based evaporation data

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert

    2016-04-01

    We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
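
    One way to operationalize the mass-balance idea in the abstract is to track the running water deficit that the root zone must bridge: accumulate evaporation minus precipitation, reset at zero when the balance turns positive, and take the maximum as the required storage. A minimal sketch of that simplified reading (the daily series are placeholders; the published method works with satellite-based evaporation and adds return-period analysis, which is omitted here):

        def root_zone_storage(evap_mm, precip_mm):
            """Maximum accumulated (E - P) deficit a root zone must buffer,
            given matched daily evaporation and precipitation series (mm)."""
            deficit, worst = 0.0, 0.0
            for e, p in zip(evap_mm, precip_mm):
                deficit = max(0.0, deficit + e - p)   # dry spells grow the deficit
                worst = max(worst, deficit)           # storage must cover the worst spell
            return worst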

  15. Extreme precipitation depths for Texas, excluding the Trans-Pecos region

    USGS Publications Warehouse

    Lanning-Rush, Jennifer; Asquith, William H.; Slade, Raymond M.

    1998-01-01

    Storm durations of 1, 2, 3, 4, 5, and 6 days were investigated for this report. The extreme precipitation depth for a particular area is estimated from an “extreme precipitation curve” (an upper limit or envelope curve developed from graphs of extreme precipitation depths for each climatic region). The extreme precipitation curves were determined using precipitation depth-duration information from a subset (24 “extreme” storms) of 213 “notable” storms documented throughout Texas. The extreme precipitation curves can be used to estimate extreme precipitation depth for a particular area. The extreme precipitation depth represents a limiting depth, which can provide useful comparative information for more quantitative analyses.

  16. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

Aerosols impact the clear-sky surface irradiance through scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and the clear-sky surface irradiance have been established to describe the aerosol direct radiative effect on the surface irradiance (ADRE). However, considerable uncertainties remain in ADRE estimates due to incorrect estimation of the aerosol-free irradiance (the clear-sky surface irradiance when τa = 0). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on the clear-sky surface irradiance are thoroughly considered, leading to an effective parameterization of the irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the clear-sky surface irradiance with a mean bias error of 0.32 W m⁻², which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from the measured irradiance, or vice versa, show root-mean-square errors of 0.08 and 10.0 W m⁻², respectively. Therefore, this study establishes a straightforward method to derive the clear-sky surface irradiance from τa, or to estimate τa from irradiance measurements when water vapor measurements are available. PMID:26395310

  17. Shear Wave Velocity, Depth to Bedrock, and Fundamental Resonance Applied to Bedrock Mapping using MASW and H/V Analysis

    NASA Astrophysics Data System (ADS)

    Gonsiewski, J.

    2015-12-01

Mapping bedrock depth is useful for earthquake hazard analysis, subsurface water transport, and other applications. Recently, collaborative experimentation provided an opportunity to explore a mapping method. Near-surface glacial till shear wave velocity (Vs) was studied where data are available from an array of 3-component seismometers. Vs is related to depth to bedrock (h) and fundamental resonance (Fo) by Fo = Vs/(4h). The H/V spectral peak frequency of recordings from a 3-component seismometer yields a fundamental resonance estimate, so where a suitable average Vs is established, the depth to bedrock can be calculated at every seismometer. The 3-component seismometer data were provided by Spectraseis; geophones, seismographs, and an additional 3-component seismometer were provided by Wright State University for this study. For Vs analysis, three MASW surveys were conducted near the seismometer array, and SurfSeis3© was used for processing the MASW data. Overtone images degraded by complicated bedrock structure and great bedrock depth were improved by combining overtones from multiple source offsets within each survey. From the MASW Vs and depth-to-bedrock results, the theoretical fundamental resonance (Fo) was calculated and compared with the H/V peak spectral frequency measured by a seismometer at selected sites and processed with the Geopsy software. Calculated bedrock depths from all geophysical data were compared with bedrock depths measured at nearby water wells and oil and gas wells provided by ODNR. The Vs and depth-to-bedrock results from MASW produced calculated fundamental resonances similar to the H/V approximations at the respective seismometers. Bedrock mapping was performed after verifying the correlation between the theoretical fundamental resonance and the H/V peak frequencies. Contour maps were generated using ArcGIS®. Contour lines interpolated from local wells were compared with the depths calculated from H/V analysis; bedrock depths calculated from the seismometer array correlate with the major trends indicated by the surrounding wells. A final contour map was developed from depth to bedrock measured at all wells and depths calculated from the average Vs and estimated resonance at selected Spectraseis 3-component seismometers.
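
    Inverting the quarter-wavelength relation for depth is a one-line computation once an average Vs is adopted. A minimal sketch (the velocity and peak frequencies below are placeholder values):

        def bedrock_depth_m(vs_avg, f0):
            """Depth to bedrock from the fundamental resonance relation
            Fo = Vs / (4 h), rearranged to h = Vs / (4 Fo)."""
            return vs_avg / (4.0 * f0)

        # Assumed average till Vs of 350 m/s; H/V peak frequencies from two stations
        for f0 in (1.2, 2.9):
            print(bedrock_depth_m(350.0, f0))   # -> ~72.9 m and ~30.2 m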

  18. Evaluation of a depth sensor for weights estimation of growing and finishing pigs

    USDA-ARS?s Scientific Manuscript database

    A method of continuously monitoring animal weight would aid producers by ensuring all pigs are gaining weight and would increase the precision of marketing pigs. Electronically monitoring weight without moving the pigs to the scale would eliminate a source of stress. Therefore, the development of me...

  19. Application of the Spatial Auto-Correlation Method for Shear-Wave Velocity Studies Using Ambient Noise

    NASA Astrophysics Data System (ADS)

    Asten, M. W.; Hayashi, K.

    2018-07-01

Ambient seismic noise or microtremor observations used in spatial auto-correlation (SPAC) array methods consist of a wide frequency range of surface waves from about 0.1 Hz to several tens of Hz. The wavelengths (and hence depth sensitivity) of such surface waves allow determination of the site S-wave velocity model from a depth of 1 or 2 m down to a maximum of several kilometres; it is a passive seismic method using only ambient noise as the energy source. Application usually uses a 2D seismic array with a small number of seismometers (generally between 2 and 15) to estimate the phase velocity dispersion curve and hence the S-wave velocity depth profile for the site. A large number of methods have been proposed and used to estimate the dispersion curve; SPAC is one of the oldest and most commonly used methods due to its versatility and minimal instrumentation requirements. We show that direct fitting of observed and model SPAC spectra generally gives a superior bandwidth of usable data than does the more common approach of inversion after the intermediate step of constructing an observed dispersion curve. Current case histories demonstrate the method with a range of array types including two-station arrays, L-shaped multi-station arrays, triangular and circular arrays. Array sizes from a few metres to several kilometres in diameter have been successfully deployed in sites ranging from downtown urban settings to rural and remote desert sites. A fundamental requirement of the method is the ability to average wave propagation over a range of azimuths; this can be achieved with either or both of the wave sources being widely distributed in azimuth, and the use of a 2D array sampling the wave field over a range of azimuths. Several variants of the method extend its applicability to under-sampled data from sparse arrays, the complexity of multiple-mode propagation of energy, and the problem of precise estimation where array geometry departs from an ideal regular array. We find that sparse nested triangular arrays are generally sufficient, and the use of high-density circular arrays is unlikely to be cost-effective in routine applications. We recommend that passive seismic arrays should be the method of first choice when characterizing average S-wave velocity to a depth of 30 m (Vs30) and deeper, with active seismic methods such as multichannel analysis of surface waves (MASW) being a complementary method for use if and when conditions so require. The use of computer inversion methodology allows estimation of not only the S-wave velocity profile but also parameter uncertainties in terms of layer thickness and velocity. The coupling of SPAC methods with horizontal/vertical particle motion spectral ratio analysis generally allows use of lower frequency data, with consequent resolution of deeper layers than is possible with SPAC alone. Considering its non-invasive methodology, logistical flexibility, simplicity, applicability, and stability, the SPAC method and its various modified extensions will play an increasingly important role in site effect evaluation. The paper summarizes the fundamental theory of the SPAC method, reviews recent developments, and offers recommendations for future blind studies.
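
    For a fundamental-mode surface wave recorded on stations separated by distance r, the azimuthally averaged SPAC coefficient at frequency f is the Bessel function J0(2πfr/c(f)), so a phase velocity can be recovered by fitting that model to the observed coherency. A minimal grid-search sketch for a single frequency (the observed coefficient is a placeholder):

        import numpy as np
        from scipy.special import j0

        def fit_phase_velocity(f, r, spac_obs,
                               c_grid=np.arange(100.0, 3000.0, 5.0)):
            """Phase velocity (m/s) at frequency f (Hz) for station spacing r (m),
            by least-squares match of the model J0(2*pi*f*r/c) to the observed
            azimuthally averaged SPAC coefficient."""
            model = j0(2.0 * np.pi * f * r / c_grid)
            return c_grid[int(np.argmin((model - spac_obs) ** 2))]

        print(fit_phase_velocity(f=2.0, r=50.0, spac_obs=0.45))

    In practice the fit is made across the whole frequency band at once, as the paper advocates, since a single coefficient can match J0 at more than one velocity.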

  1. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
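
    The core of the approach is a Monte Carlo evaluation of the passive sonar equation: draw source level (and in the full method, depth and beam pattern) from assumed distributions, compute SNR = SL − TL − NL, and push the result through the detector's probability-of-detection curve. A minimal sketch with deliberately simplified pieces (spherical-spreading transmission loss and a logistic detector curve stand in for the paper's propagation model and measured detector characterization; all numbers are placeholders):

        import numpy as np

        rng = np.random.default_rng(0)

        def transmission_loss(r_m):
            """Simplified spherical spreading; the study uses propagation modeling."""
            return 20.0 * np.log10(np.maximum(r_m, 1.0))

        def p_detect(snr_db, snr50=8.0, slope=1.5):
            """Placeholder logistic detector curve: 50% detection at snr50 dB."""
            return 1.0 / (1.0 + np.exp(-(snr_db - snr50) / slope))

        def mean_detection_prob(r_m, n=100_000, noise_db=60.0):
            """Average click-detection probability at range r_m, marginalizing
            over an assumed source-level distribution (dB re 1 uPa at 1 m)."""
            sl = rng.normal(200.0, 10.0, n)    # assumed source-level spread
            snr = sl - transmission_loss(r_m) - noise_db
            return p_detect(snr).mean()

        print(mean_detection_prob(2000.0))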

  2. Estimation of the intrinsic absorption and scattering attenuation in Northeastern Venezuela (Southeastern Caribbean) using coda waves

    USGS Publications Warehouse

    Ugalde, A.; Pujades, L.G.; Canas, J.A.; Villasenor, A.

    1998-01-01

Northeastern Venezuela has been studied in terms of coda wave attenuation using seismograms from local earthquakes recorded by a temporary short-period seismic network. The studied area has been separated into two subregions in order to investigate lateral variations in the attenuation parameters. Coda Q⁻¹ (Qc⁻¹) has been obtained using the single-scattering theory. The contributions of intrinsic absorption (Qi⁻¹) and scattering (Qs⁻¹) to total attenuation (Qt⁻¹) have been estimated by means of a multiple lapse time window method, based on the hypothesis of multiple isotropic scattering with a uniform distribution of scatterers. Results show significant spatial variations in attenuation: the estimates for intermediate-depth events and for shallow events present major differences. This may be related to different tectonic characteristics due to the presence of the Lesser Antilles subduction zone, because the intermediate-depth seismic zone may coincide with the southern continuation of the subducting slab under the arc.
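
    Under the single-scattering model, the coda envelope decays as A(t) ∝ t⁻¹·exp(−πft/Qc), so ln(A·t) plotted against lapse time is a straight line whose slope gives Qc. A minimal sketch of that regression (the envelope values below are synthetic):

        import numpy as np

        def coda_q(t_s, envelope, f_hz):
            """Coda Q at centre frequency f_hz from the single-scattering decay
            A(t) = S * t^-1 * exp(-pi*f*t/Qc): slope of ln(A*t) vs t is -pi*f/Qc."""
            slope, _ = np.polyfit(t_s, np.log(envelope * t_s), 1)
            return -np.pi * f_hz / slope

        # Synthetic check: Qc = 200 at 6 Hz should be recovered
        t = np.linspace(20.0, 60.0, 200)
        env = (1.0 / t) * np.exp(-np.pi * 6.0 * t / 200.0)
        print(coda_q(t, env, 6.0))   # -> ~200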

  3. Isotope-hydrological methods (²H, ¹⁸O) for determining groundwater recharge in drylands: potential and limitations

    NASA Astrophysics Data System (ADS)

    Beyer, Matthias; Gaj, Marcel; Königer, Paul; Tulimeveva Hamutoko, Josefina; Wanke, Heike; Wallner, Markus; Himmelsbach, Thomas

    2018-03-01

The estimation of groundwater recharge in water-limited environments is challenging due to climatic conditions, the occurrence of deep unsaturated zones, and specialized vegetation. We critically examined two methods based on stable isotopes of soil water: (i) the interpretation of natural isotope depth-profiles and subsequent approximation of recharge using empirical relationships, and (ii) the use of deuterium-enriched water (²H₂O) as a tracer. Numerous depth-profiles were measured directly in the field in semiarid Namibia using a novel in-situ technique. Additionally, ²H₂O was injected into the soil and its displacement monitored over a complete rainy season. Estimated recharge ranges between 0 and 29 mm/y for three rainy seasons experiencing seasonal rainfall of 660 mm (2013/14), 313 mm (2014/15) and 535 mm (2015/16). The results of this study confirm the suitability of water stable isotope-based approaches for recharge estimation and highlight enormous potential for future studies of water vapor transport and ecohydrological processes.

  4. The importance of hyporheic sediment respiration in several mid-order Michigan rivers: Comparison between methods in estimates of lotic metabolism

    USGS Publications Warehouse

    Uzarski, D.G.; Stricker, C.A.; Burton, T.M.; King, D. K.; Steinman, A.D.

    2004-01-01

Metabolism was measured in four Michigan streams, comparing estimates made using a flow-through chamber designed to include the hyporheic zone to a depth of 20 cm with those from a traditional closed chamber that enclosed sediments to a depth of 5 cm. Mean levels of gross primary productivity and community respiration were consistently greater in the flow-through chamber than in the closed chamber in all streams. Ratios of productivity to respiration (P/R) were consistently greater in the closed chambers than in the flow-through chambers. P/R ratios were consistently <1 in all streams when estimated with flow-through chambers, suggesting heterotrophic conditions. Maintenance of stream ecosystem structure and function therefore depends on subsidies either from the adjacent terrestrial system or from upstream sources. Our results suggest that stream metabolism studies that extrapolate closed-chamber measurements to the whole reach will most likely underestimate gross primary productivity and community respiration.

  5. Mapping the spatial distribution and time evolution of snow water equivalent with passive microwave measurements

    USGS Publications Warehouse

    Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.

    2003-01-01

    This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements via dense media radiative transfer modeling results with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent, at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained based on pattern matching by using the local snow temperature history. This algorithm is applied to produce spatial snow maps of Upper Rio Grande River basin in Colorado. The simulation results are compared with that of the snow accumulation and melt model and a linear regression method. The quantitative comparison with the ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that this algorithm is able to improve the estimation of snow parameters.

  6. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    NASA Astrophysics Data System (ADS)

    Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.

    2012-12-01

Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (e.g. no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of propagation of uncertainties in error measurements and parameter estimates, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
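
    Both new relationships consume the same two observables: the amplitude ratio and phase delay of the diurnal signal between a pair of sensors. These can be extracted by projecting each temperature series onto sine and cosine at the diurnal frequency. A minimal sketch of that extraction step only (regular sampling over an integer number of days is assumed; the series are placeholders):

        import numpy as np

        def diurnal_amp_phase(temp, dt_s, period_s=86400.0):
            """Amplitude and phase (radians) of the diurnal component of a
            temperature series, by projection onto the diurnal harmonic.
            Assumes the record spans an integer number of periods."""
            t = np.arange(len(temp)) * dt_s
            w = 2.0 * np.pi / period_s
            c = 2.0 * np.mean((temp - temp.mean()) * np.cos(w * t))
            s = 2.0 * np.mean((temp - temp.mean()) * np.sin(w * t))
            return np.hypot(c, s), np.arctan2(s, c)

        # Amplitude ratio A_deep/A_shallow and phase delay phi_deep - phi_shallow
        # between two depths are the inputs to the paper's two equations.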

  7. A New Method of Stress Measurement Based upon Elastic Deformation of Core Sample with Stress Relief by Drilling

    NASA Astrophysics Data System (ADS)

    Ito, T.; Funato, A.; Tamagawa, T.; Tezuka, K.; Yabe, Y.; Abe, S.; Ishida, A.; Ogasawara, H.

    2017-12-01

When rock is cored at depth by drilling, anisotropic expansion occurs with the relief of anisotropic rock stresses, resulting in a sinusoidal variation of core diameter with a period of 180 degrees in core roll angle. The circumferential variation of core diameter is given theoretically as a function of the rock stresses. These findings suggest several ways to estimate rock stress from the circumferential variation of core diameter measured after core retrieval. In the simplest case, when only a single core sample is available, the difference between the maximum and minimum components of rock stress in the plane perpendicular to the drilled hole can be estimated from the maximum and minimum core diameters (see the details in Funato and Ito, IJRMMS, 2017). The advantages of this method include (i) much easier measurement operation than other in-situ or laboratory estimation methods, and (ii) applicability in high-stress environments where stress measurements would require packer pressures or pumping systems for hydro-fracturing beyond their tolerance levels. We have successfully tested the method at deep seismogenic zones in South African gold mines, and we are going to apply it to boreholes collared at 3 km depth and intersecting a M5.5 rupture plane several hundred meters below the mine workings in the ICDP project "Drilling into Seismogenic zones of M2.0 - M5.5 earthquakes in deep South African gold mines" (DSeis) (e.g., http://www.icdp-online.org/projects/world/africa/orkney-s-africa/details/). If several core samples with different orientations are available, all three principal components of the 3D rock stress can be estimated. To realize this, several boreholes must be drilled in different directions in a rock mass where the stress field is considered uniform; boreholes are commonly drilled in different directions from a mine gallery. Even in a deep borehole drilled vertically from the ground surface, a downhole rotary sidewall coring tool allows core samples with different orientations to be taken at depths of interest from the borehole sidewall. The theoretical relationship between core expansion and rock stress has been verified through the examination of core samples prepared in laboratory experiments and of retrieved field cores.
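
    The first processing step is to recover the amplitude and orientation of the 180-degree-period diameter variation from caliper measurements around the core. A minimal least-squares sketch (the measured diameters are placeholders; converting the amplitude to a stress difference requires elastic constants and the relation in Funato and Ito, 2017, which is not reproduced here):

        import numpy as np

        def fit_diameter_variation(theta_deg, d_mm):
            """Fit d(theta) = d0 + a*cos(2*theta) + b*sin(2*theta).
            Returns (mean diameter, dmax - dmin, azimuth of max diameter, deg)."""
            th = np.radians(theta_deg)
            G = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
            (d0, a, b), *_ = np.linalg.lstsq(G, d_mm, rcond=None)
            amp = np.hypot(a, b)
            az = 0.5 * np.degrees(np.arctan2(b, a)) % 180.0
            return d0, 2.0 * amp, az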

  8. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used for their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with maximum errors below 2.5%, which demonstrates the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation, and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
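
    An IC curve is just dQ/dV computed from a static charging curve, and the proposed smoothing step maps directly onto a standard Gaussian filter. A minimal sketch (SciPy; the charge/voltage arrays and the filter width are placeholders, not the paper's settings):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def incremental_capacity(q_ah, v_volt, sigma=3.0):
            """dQ/dV from a charging curve, smoothed with a Gaussian filter.
            q_ah, v_volt: charge and terminal voltage sampled during charging."""
            dq_dv = np.gradient(q_ah, v_volt)        # raw IC curve (noisy)
            return gaussian_filter1d(dq_dv, sigma)   # smoothed IC curve

        # Peaks of the smoothed curve are the features of interest (FOIs)
        # whose positions regress linearly against remaining capacity.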

  9. Rapid depth estimation for compact magnetic sources using a semi-automated spectrum-based method

    NASA Astrophysics Data System (ADS)

    Clifton, Roger

    2017-04-01

    This paper describes a spectrum-based algorithmic procedure for rapid reconnaissance for compact bodies at depths of interest using magnetic line data. The established method of obtaining depth to source from power spectra requires an interpreter to subjectively select just a single slope along the power spectrum. However, many slopes along the spectrum are, at least partially, indicative of the depth if the shape of the source is known. In particular, if the target is assumed to be a point dipole, all spectral slopes are determined by the depth, noise permitting. The concept of a `depth spectrum' is introduced, where the power spectrum in a travelling window or gate of data is remapped so that a single dipole in the gate would be represented as a straight line at its depth on the y-axis of the spectrum. In demonstration, the depths of two known ironstones are correctly displayed. When a second body is in the gate, the two anomalies interfere, leaving interference patterns on the depth spectra that are themselves diagnostic. A formula has been derived for the purpose. Because there is no need for manual selection of slopes along the spectrum, the process runs rapidly along flight lines with a continuously varying display, where the interpreter can pick out a persistent depth signal among the more rapidly varying noise. Interaction is nevertheless necessary, because the interpreter often needs to pass across an anomaly of interest several times, separating out interfering bodies, and resolving the slant range to the body from adjacent flight lines. Because a look-up table is used rather than a formula, the elementary structure used for the mapping can be adapted by including an extra dipole, possibly with a different inclination.
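
    The underlying spectral relation is the classical one: for sources at depth h, the power spectrum falls off as exp(−2|k|h) with k in radians per ground unit, so a depth can be read from the slope of ln P versus k. A minimal sketch of that single-slope estimate, the step that the paper's depth-spectrum display generalizes across all slopes (spectrum values are synthetic; the factor changes if k is in cycles per unit):

        import numpy as np

        def spectral_depth(k_rad_per_m, power):
            """Depth to source from ln(P) vs k: P(k) ~ exp(-2*k*h) gives
            slope = -2h, so h = -slope / 2."""
            slope, _ = np.polyfit(k_rad_per_m, np.log(power), 1)
            return -slope / 2.0

        # Synthetic check: sources at 150 m depth
        k = np.linspace(0.001, 0.02, 50)
        p = np.exp(-2.0 * k * 150.0)
        print(spectral_depth(k, p))   # -> ~150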

  10. Estimation of optimal nasotracheal tube depth in adult patients.

    PubMed

    Ji, Sung-Mi

    2017-12-01

The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R² = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.

  11. High temperature 1 MHz capacitance-voltage method for evaluation of border traps in 4H-SiC MOS system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao-Yang; Wang, Sheng-Kai; Bai, Yun; Tang, Yi-Dan; Chen, Xi-Ming; Li, Cheng-Zhan; Liu, Ke-An; Liu, Xin-Yu

    2018-04-01

In this work, border traps located in SiO2 at different depths in the 4H-SiC MOS system are evaluated by a simple and effective method based on capacitance-voltage (C-V) measurements. The method estimates the border traps between two adjacent depths through C-V measurements at various frequencies at room and elevated temperatures. By comparing these two C-V characteristics, the correlation between the time constant of the border traps and temperature is obtained. The border trap density is then determined by integrating the capacitance difference against gate voltage over the regions where border traps dominate. The results reveal that the border trap concentration a few nanometers from the interface increases exponentially towards the interface, in good agreement with previous work. The high-temperature 1 MHz C-V method is thereby shown to be effective for border trap evaluation.

  12. MODTOHAFSD — A GUI based JAVA code for gravity analysis of strike limited sedimentary basins by means of growing bodies with exponential density contrast-depth variation: A space domain approach

    NASA Astrophysics Data System (ADS)

    Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.

    2013-07-01

Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based JAVA code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all of equal width but each with its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates the initial depths of a sedimentary basin and improves them iteratively, based on the differences between the observed and modeled gravity anomalies, within the specified convergence criteria. The code, built on the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs or modifies the regional gravity background interactively, estimates residual gravity anomalies, and performs automatic modeling or inversion of basement topography according to user specification. Besides generating output in both ASCII and graphical forms, the code displays, in animated form, (i) the changes in the depth structure, (ii) the fit between the observed and modeled gravity anomalies, (iii) the change in misfit, and (iv) the variation of density contrast with iteration. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example. The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
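
    The exponential density contrast-depth function admits a convenient closed form for a first-pass depth estimate: for an infinite Bouguer slab of thickness t with Δρ(z) = Δρ₀·e^(−λz), the anomaly is g = 2πG·Δρ₀·(1 − e^(−λt))/λ, which inverts directly for t. A minimal sketch of such a slab-based initial estimate (an assumed simplification for illustration, not the code's prism-based forward model; parameter values are placeholders):

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def slab_depth(g_anom_mgal, drho0_kg_m3, lam_per_m):
            """Initial basement depth from the Bouguer slab with exponential
            density contrast: g = 2*pi*G*drho0*(1 - exp(-lam*t))/lam."""
            g_si = g_anom_mgal * 1e-5                      # mGal -> m/s^2
            arg = 1.0 - g_si * lam_per_m / (2.0 * np.pi * G * drho0_kg_m3)
            return -np.log(arg) / lam_per_m                # metres

        # -20 mGal anomaly, surface contrast -500 kg/m^3, lambda = 0.0005 / m
        print(slab_depth(-20.0, -500.0, 5e-4))             # -> ~1300 m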

  13. Estimated depth to the water table and estimated rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas

    USGS Publications Warehouse

    Noble, J.E.; Bush, P.W.; Kasmarek, M.C.; Barbie, D.L.

    1996-01-01

In 1989, the U.S. Geological Survey, in cooperation with the Harris-Galveston Coastal Subsidence District, began a field study to determine the depth to the water table and to estimate the rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas. The study area comprises about 2,000 square miles of outcrops of the Chicot and Evangeline aquifers in northwest Harris County, Montgomery County, and southern Walker County. Because of the scarcity of measurable water-table wells, depth to the water table below land surface was estimated using a surface geophysical technique, seismic refraction. The water table in the study area generally ranges from about 10 to 30 feet below land surface and typically is deeper in areas of relatively high land-surface altitude than in areas of relatively low land-surface altitude. The water table has demonstrated no long-term trends since ground-water development began, with the probable exception of the water table in the Katy area, where it is more than 75 feet deep, probably due to ground-water pumpage from deeper zones. An estimated rate of recharge in the aquifer outcrops was computed using the interface method, in which environmental tritium is a ground-water tracer. The estimated average total recharge rate in the study area is 6 inches per year. This rate is an upper bound on the average recharge rate during the 37 years 1953-90 because it is based on the deepest penetration (about 80 feet) of postnuclear-testing tritium concentrations. The rate, which represents one of several components of a complex regional hydrologic budget, is considered reasonable but is not definitive because of uncertainty regarding the assumptions and parameters used in its computation.
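
    Seismic refraction yields depth to the water table from a two-layer traveltime interpretation: with unsaturated velocity v1, saturated velocity v2, and intercept time ti of the refracted arrival, the standard intercept-time formula gives the refractor depth. A minimal sketch (the velocities and intercept time are placeholder values, not the study's data):

        import numpy as np

        def refractor_depth(ti_s, v1, v2):
            """Two-layer intercept-time formula:
            z = (ti / 2) * v1 * v2 / sqrt(v2^2 - v1^2)."""
            return 0.5 * ti_s * v1 * v2 / np.sqrt(v2 ** 2 - v1 ** 2)

        # Dry sediments ~400 m/s over saturated sediments ~1600 m/s, ti = 25 ms
        print(refractor_depth(0.025, 400.0, 1600.0))   # -> ~5.2 m (~17 ft)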

  14. Global distribution of plant-extractable water capacity of soil

    USGS Publications Warehouse

    Dunne, K.A.; Willmott, C.J.

    1996-01-01

Plant-extractable water capacity of soil is the amount of water that can be extracted from the soil to fulfill evapotranspiration demands. It is often assumed to be spatially invariant in large-scale computations of the soil-water balance. Empirical evidence, however, suggests that this assumption is incorrect. In this paper, we estimate the global distribution of the plant-extractable water capacity of soil. A representative soil profile, characterized by horizon (layer) particle size data and thickness, was created for each soil unit mapped by FAO (Food and Agriculture Organization of the United Nations)/Unesco. Soil organic matter was estimated empirically from climate data. Plant rooting depths and ground coverages were obtained from a vegetation characteristic data set. At each 0.5° × 0.5° grid cell where vegetation is present, unit available water capacity (cm water per cm soil) was estimated from the sand, clay, and organic content of each profile horizon, and integrated over horizon thickness. Summation of the integrated values over the lesser of profile depth and root depth produced an estimate of the plant-extractable water capacity of soil. The global average of the estimated plant-extractable water capacities of soil is 8.6 cm (Greenland, Antarctica and bare soil areas excluded). Estimates are less than 5, 10 and 15 cm over approximately 30, 60, and 89 per cent of the area, respectively. Estimates reflect the combined effects of soil texture, soil organic content, and plant root depth or profile depth. The most influential and uncertain parameter is the depth over which the plant-extractable water capacity of soil is computed, which is usually limited by root depth. Soil texture exerts a lesser, but still substantial, influence. Organic content, except where concentrations are very high, has relatively little effect.

  15. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    NASA Astrophysics Data System (ADS)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

It is crucial to develop a direct dating method for fault gouges for the assessment of recent fault activity in site evaluations for nuclear power plants. Such a method would be useful in regions without Late Pleistocene overlying sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges that have experienced frictional heating sufficient for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress. Therefore, we should determine the depth at which age resetting becomes reliable, as fault gouges from the ground surface are likely to be dated older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age resetting conditions and depths using several methods, including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages from the Late Pleistocene onwards. The preliminary results by the ESR method show approx. 1.1 Ma (ref. 1) at the ground surface and 0.15-0.28 Ma (ref. 2) at 388 m depth, respectively, indicating that samples from greater depths preserve a younger age. In contrast, the OSL method dated approx. 2,200 yr (ref. 1) at the ground surface. Although further consideration is still needed given the large margin of error, this result indicates that the age-resetting depth for OSL is relatively shallow due to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigations dating fault gouges from various depths up to approx. 1,500 m to verify these direct dating methods. 1) Kyoto University, 2017, FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling borehole survey at the Nojima fault), Technical Report (in Japanese). 2) T. Fukuchi, 2001, Assessment of fault activity by ESR dating of fault gouge; an example of the 500 m core samples drilled into the Nojima Earthquake Fault in Japan, Quaternary Science Reviews, 20, 1005-1008.

  16. Soil moisture content estimation using ground-penetrating radar reflection data

    NASA Astrophysics Data System (ADS)

    Lunt, I. A.; Hubbard, S. S.; Rubin, Y.

    2005-06-01

Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during three data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennas. GPR reflections were associated with a thin, low permeability clay layer located 0.8-1.3 m below the ground surface that was identified from borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow, and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at the borehole locations were used to calculate an average dielectric constant for soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 0.018 m³ m⁻³. These results suggested that the two-way travel time to a GPR reflection associated with a geological surface could be used under natural conditions to obtain estimates of average water content when borehole control is available and the reflection strength is sufficient. The GPR reflection method therefore has potential for monitoring soil water content over large areas and under variable hydrological conditions.
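
    The conversion chain is: two-way travel time plus reflector depth gives an average velocity and hence dielectric constant, and a petrophysical relation then maps dielectric constant to volumetric water content. A minimal sketch using the widely cited Topp et al. (1980) relation as a stand-in for the paper's site-specific calibration (travel time and depth values are placeholders):

        C_LIGHT = 0.2998  # speed of light in vacuum, m/ns

        def vwc_from_gpr(twt_ns, depth_m):
            """Depth-averaged volumetric water content above a GPR reflector.
            twt_ns: two-way travel time (ns); depth_m: reflector depth (m)."""
            v = 2.0 * depth_m / twt_ns          # average velocity (m/ns)
            kappa = (C_LIGHT / v) ** 2          # average dielectric constant
            # Topp et al. (1980) empirical relation (illustrative stand-in)
            return (-5.3e-2 + 2.92e-2 * kappa
                    - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3)

        print(vwc_from_gpr(twt_ns=25.0, depth_m=1.0))   # -> ~0.26 m3/m3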

  17. Integrating aeromagnetic and Landsat™ 8 data into subsurface structural mapping of Precambrian basement complex

    NASA Astrophysics Data System (ADS)

    Kayode, John Stephen; Nawawi, M. N. M.; Abdullah, Khiruddin B.; Khalil, Amin E.

    2017-01-01

    Aeromagnetic data and remotely sensed imagery were integrated to map subsurface geological structures in part of the south-western basement complex of Nigeria using PCI Geomatica 2013 software. The data, obtained from the Nigerian Geological Survey Agency, were corrected by removing the International Geomagnetic Reference Field and enhanced by regional-residual separation of the total magnetic field anomalies. The principal objective of this study is, therefore, to introduce a rapid and efficient method of subsurface structural depth estimation and structural index evaluation by incorporating the Euler deconvolution technique into PCI Geomatica 2013 to prospect for subsurface geological structures. The shape and depth of burial helped to define these structures from the regional aeromagnetic map. The method automatically delineated structures for structural indices (SI) between 0.5 and 3.0 at a maximum depth of 1.1 km, which clearly showed the best depth estimates for all the structural indices. The results delineate two major magnetic belts in the area: the first shows an elongated ridge-like structure trending mostly north-northeast to south-southwest, while the other anomalies trend primarily in the northeast, northwest, and northeast-southwest parts of the study area and could be attributed to basement-complex granitic intrusions related to the tectonic history of the area. Most of the second set of structures showed various linear features distinct from the first. A significant offset was delineated at the core segment of the study area, suggesting a major subsurface geological feature that controls mineralisation in this area.
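
    The depth-estimation step named here, Euler deconvolution, reduces to a small least-squares problem in each moving data window. The sketch below shows a minimal single-window solver for Euler's homogeneity equation, assuming the anomaly gradients have already been computed (e.g., by FFT) and that observations lie on a level plane; it is not the PCI Geomatica implementation.

    ```python
    import numpy as np

    def euler_window(x, y, T, Tx, Ty, Tz, si):
        """Solve Euler's homogeneity equation in one data window for a
        source at (x0, y0, z0) and regional level B, given structural
        index si. x, y: observation coordinates (observation plane at
        z = 0, z positive down, so z0 is depth); T: total-field anomaly;
        Tx, Ty, Tz: its spatial gradients. Least-squares form:
            x0*Tx + y0*Ty + z0*Tz + si*B = x*Tx + y*Ty + si*T."""
        A = np.column_stack([Tx, Ty, Tz, si * np.ones_like(T)])
        b = x * Tx + y * Ty + si * T
        (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
        return x0, y0, z0, B   # z0 = depth below the observation plane
    ```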

  18. Seismic velocity structure and microearthquake source properties at The Geysers, California, geothermal area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Connell, D.R.

    1986-12-01

    The method of progressive hypocenter-velocity inversion has been extended to incorporate S-wave arrival time data and to estimate S-wave velocities in addition to P-wave velocities. Adding S-wave data to the progressive inversion does not completely eliminate hypocenter-velocity tradeoffs, but it reduces them substantially. Results of a P- and S-wave progressive hypocenter-velocity inversion at The Geysers show that the top of the steam reservoir is clearly defined by a large decrease of Vp/Vs at the contact between the condensation zone and the production zone. The depth interval of maximum steam production coincides with the minimum observed Vp/Vs, and Vp/Vs increases below the shallow primary production zone, suggesting that the reservoir rock becomes more fluid saturated. The moment tensor inversion method was applied to three microearthquakes at The Geysers. Estimated principal stress orientations were comparable to those estimated using P-wave first motions as constraints. Well-constrained principal stress orientations were obtained for one event for which the 17 P-wave first motions could not distinguish between normal-slip and strike-slip mechanisms. The moment tensor estimates of principal stress orientations were obtained using far fewer stations than required for first-motion focal mechanism solutions. The three focal mechanisms obtained here support the hypothesis that focal mechanisms are a function of depth at The Geysers. Progressive inversion as developed here and the moment tensor inversion method provide a complete approach for determining earthquake locations, P- and S-wave velocity structure, and earthquake source mechanisms.
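
    The abstract's central quantity, Vp/Vs, can be illustrated with a far simpler tool than progressive inversion: the classic Wadati diagram, sketched below with hypothetical arrival times. It is offered only as a first-cut estimator, not as the joint hypocenter-velocity method the report develops.

    ```python
    import numpy as np

    def vp_vs_wadati(tp, ts):
        """First-cut Vp/Vs from paired P and S arrival times at several
        stations for one event. Since (ts - tp) = (Vp/Vs - 1)*(tp - t0),
        a straight-line fit of S-P delay against P arrival time has
        slope Vp/Vs - 1."""
        slope, _ = np.polyfit(tp, ts - tp, 1)
        return 1.0 + slope

    # hypothetical arrival times (s) at five stations
    tp = np.array([3.10, 4.25, 5.60, 6.40, 7.15])
    ts = np.array([5.55, 7.55, 9.90, 11.30, 12.60])
    print(f"Vp/Vs = {vp_vs_wadati(tp, ts):.2f}")
    ```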

  19. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALVs) and is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, exploiting the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs, and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method detects curbs with strong robustness at real-time speed for both static and dynamic scenes.
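
    A generic version of the normal-estimation step is sketched below: back-project the dense depth image to 3D using the camera intrinsics, then take the cross product of the image-row and image-column tangent vectors. The paper's filter-based estimator differs in detail; the intrinsics and the finite-difference scheme here are assumptions.

    ```python
    import numpy as np

    def normals_from_depth(depth, fx, fy, cx, cy):
        """Per-pixel surface normals from a dense depth image via central
        differences -- a generic stand-in for the paper's filter-based
        normal estimation. fx, fy, cx, cy are camera intrinsics."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # back-project each pixel to 3D camera coordinates
        X = (u - cx) * depth / fx
        Y = (v - cy) * depth / fy
        P = np.dstack([X, Y, depth])
        # tangent vectors along image rows/columns, then their cross product
        dPdu = np.gradient(P, axis=1)
        dPdv = np.gradient(P, axis=0)
        n = np.cross(dPdu, dPdv)
        n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
        return n  # (h, w, 3) unit normals
    ```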

  20. Evaluation of multiple tracer methods to estimate low groundwater flow velocities.

    PubMed

    Reimus, Paul W; Arnold, Bill W

    2017-04-01

    Four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or "shut-in" periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered the most robust and direct method, so it was used to provide a "ground truth" velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural-gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower-risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. The advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them, are discussed. Published by Elsevier B.V.

  1. Volume of Valley Networks on Mars and Its Hydrologic Implications

    NASA Astrophysics Data System (ADS)

    Luo, W.; Cang, X.; Howard, A. D.; Heo, J.

    2015-12-01

    Valley networks on Mars are river-like features that offer the best evidence for water activity in its geologic past. Previous studies have extracted valley network lines automatically from digital elevation model (DEM) data and manually from remotely sensed images. The volume of material removed by valley networks is an important parameter that could help us infer the amount of water needed to carve the valleys. A progressive black top hat (PBTH) transformation algorithm has been adapted from image processing to extract valley volume and successfully applied to a simulated landform and to Ma'adim Valles, Mars. However, the volume of valley network excavation on Mars had not been estimated on a global scale. In this study, the PBTH method was applied to the whole of Mars to estimate this important parameter. The process was automated with Python in ArcGIS. Polygons delineating the valley-associated depressions were generated using a multi-flow-direction growth method, which started with selected high-point seeds on a depth grid (essentially an inverted valley) created by the PBTH transformation and grew outward following multiple flow directions on the depth grid. Two published versions of valley network lines were integrated to automatically select the depression polygons that represent valleys. Crater depressions that are connected with valleys, and were thus selected in the previous step, were removed using information from a crater database. Because of the large distortion associated with global datasets in projected maps, the volume of each cell within a valley was calculated as the depth of the cell multiplied by the spherical area of the cell. The volumes of all valley cells were then summed to produce the estimate of global valley excavation volume. Our initial estimate was ~2.4×10^14 m3. Assuming a sediment density of 2900 kg/m3, a porosity of 0.35, and a sediment load of 1.5 kg/m3, the global volume of water needed to carve the valleys was estimated to be ~7.1×10^17 m3. Because of the coarse resolution of MOLA data, this is a conservative lower bound. Compared with the hypothesized northern ocean volume of 2.3×10^16 m3 estimated by Carr and Head (2003), our estimate of water volume supports an active hydrologic cycle on early Mars. Further hydrologic analysis will improve the accuracy of the estimate.
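
    The distortion-free volume computation described above can be sketched as follows: each cell's excavated volume is its PBTH depth times the exact area of the lat-lon cell on a sphere. The cell size and depth in the example are hypothetical.

    ```python
    import numpy as np

    R_MARS = 3.3895e6  # mean Mars radius, m

    def cell_volume(depth_m, lat_deg, dlat_deg, dlon_deg, radius=R_MARS):
        """Volume of one raster cell: excavation depth times the area of
        the lat-lon cell on a sphere, avoiding map-projection distortion.
        A = R^2 * dlon * (sin(lat_top) - sin(lat_bottom))."""
        lat1 = np.radians(lat_deg - dlat_deg / 2.0)
        lat2 = np.radians(lat_deg + dlat_deg / 2.0)
        area = radius**2 * np.radians(dlon_deg) * (np.sin(lat2) - np.sin(lat1))
        return depth_m * area

    # hypothetical 1/128-degree cell, 120 m deep, at 20 degrees south
    print(f"{cell_volume(120.0, -20.0, 1/128, 1/128):.3e} m3")
    ```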

  2. Estimation of the depth to the fresh-water/salt-water interface from vertical head gradients in wells in coastal and island aquifers

    NASA Astrophysics Data System (ADS)

    Izuka, Scot K.; Gingerich, Stephen B.

    An accurate estimate of the depth to the theoretical interface between fresh water and salt water is critical to estimates of well yields in coastal and island aquifers. The Ghyben-Herzberg relation, which is commonly used to estimate interface depth, can greatly underestimate or overestimate the fresh-water thickness, because it assumes no vertical head gradients and no vertical flow. Estimation of the interface depth needs to consider the vertical head gradients and aquifer anisotropy that may be present. This paper presents a method to calculate vertical head gradients using water-level measurements made during drilling of a partially penetrating well; the gradient is then used to estimate interface depth. Application of the method to a numerically simulated fresh-water/salt-water system shows that the method is most accurate when the gradient is measured in a deeply penetrating well. Even using a shallow well, the method more accurately estimates the interface position than does the Ghyben-Herzberg relation where substantial vertical head gradients exist. Application of the method to field data shows that drilling, water-level data collection methods, and aquifer inhomogeneities can cause difficulties, but the effects of these difficulties can be minimized.
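
    For reference, the Ghyben-Herzberg baseline that the paper improves on is a one-line hydrostatic calculation, sketched below; the paper's correction using vertical head gradients and anisotropy is not reproduced here.

    ```python
    RHO_F = 1000.0   # freshwater density, kg/m3
    RHO_S = 1025.0   # seawater density, kg/m3

    def ghyben_herzberg_depth(head_m):
        """Ghyben-Herzberg estimate of the interface depth below sea level
        from the freshwater head above sea level (hydrostatic, no vertical
        flow):  z = rho_f / (rho_s - rho_f) * h, i.e. about 40*h."""
        return RHO_F / (RHO_S - RHO_F) * head_m

    print(ghyben_herzberg_depth(1.2))  # 1.2 m of head -> interface ~48 m deep
    ```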

  3. Estimation of groundwater consumption by phreatophytes using diurnal water table fluctuations: A saturated‐unsaturated flow assessment

    USGS Publications Warehouse

    Loheide, Steven P.; Butler, James J.; Gorelick, Steven M.

    2005-01-01

    Groundwater consumption by phreatophytes is a difficult‐to‐measure but important component of the water budget in many arid and semiarid environments. Over the past 70 years the consumptive use of groundwater by phreatophytes has been estimated using a method that analyzes diurnal trends in hydrographs from wells that are screened across the water table (White, 1932). The reliability of estimates obtained with this approach has never been rigorously evaluated using saturated‐unsaturated flow simulation. We present such an evaluation for common flow geometries and a range of hydraulic properties. Results indicate that the major source of error in the White method is the uncertainty in the estimate of specific yield. Evapotranspirative consumption of groundwater will often be significantly overpredicted with the White method if the effects of drainage time and the depth to the water table on specific yield are ignored. We utilize the concept of readily available specific yield as the basis for estimation of the specific yield value appropriate for use with the White method. Guidelines are defined for estimating readily available specific yield based on sediment texture. Use of these guidelines with the White method should enable the evapotranspirative consumption of groundwater to be more accurately quantified.
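
    The White (1932) method referenced above amounts to a one-line formula once the hydrograph quantities are read off. A minimal sketch, with hypothetical values and the sign convention stated in the comments:

    ```python
    def white_et(sy, r_m_per_hr, s_m):
        """White (1932) estimate of daily groundwater evapotranspiration
        from diurnal water-table fluctuations:  ETg = Sy * (24*r + s),
        where r is the water-table recovery rate measured overnight (when
        ET is negligible) and s is the net change in water-table elevation
        over the 24-h day (negative for a net decline). Per the paper, Sy
        should be the 'readily available' specific yield."""
        return sy * (24.0 * r_m_per_hr + s_m)

    # hypothetical day: 2 mm/hr overnight recovery, net decline of 10 mm
    print(f"ETg = {white_et(0.12, 0.002, -0.010) * 1000:.1f} mm/day")
    ```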

  4. Pier and contraction scour prediction in cohesive soils at selected bridges in Illinois

    USGS Publications Warehouse

    Straub, Timothy D.; Over, Thomas M.

    2010-01-01

    This report presents the results of testing the Scour Rate In Cohesive Soils-Erosion Function Apparatus (SRICOS-EFA) method for estimating scour depth in cohesive soils at 15 bridges in Illinois. The SRICOS-EFA method for complex pier and contraction scour in cohesive soils has two primary components. The first is the calculation of the maximum contraction and pier scour (Zmax). The second is an integrated approach that considers a time factor, soil properties, and the continued interaction between contraction and pier scour (SRICOS runs). The SRICOS-EFA results were compared to scour predictions for non-cohesive soils based on Hydraulic Engineering Circular No. 18 (HEC-18). On average, the HEC-18 method predicted greater scour depths than the SRICOS-EFA method. A reduction factor was determined for each HEC-18 result to make it match the maximum of three types of SRICOS run results. The unconfined compressive strength (Qu) of the soil was then matched with the reduction factor, and the results were ranked in order of increasing Qu. Reduction factors were then grouped by Qu and applied to each bridge site and soil. These results, and comparison with the SRICOS Zmax calculation, show that fewer than half of the reduction-factor method values were the lowest estimate of scour, whereas the Zmax method values were the lowest estimate for more than half. A tiered approach to predicting pier and contraction scour was developed. There are four levels to this approach, numbered in order of complexity, with the fourth level being a full SRICOS-EFA analysis. Levels 1 and 2 involve the reduction factors and the Zmax calculation and can be completed without EFA data. Level 3 requires some surrogate EFA data. Levels 3 and 4 require streamflow as input to SRICOS. Estimation techniques for both surrogate EFA data and streamflow data were developed.
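
    As an illustration of the Zmax component, the sketch below uses a commonly cited SRICOS-EFA maximum pier-scour relation (Zmax in mm = 0.18 Re^0.635, attributed to Briaud and co-workers); treat the coefficient, the exponent, and the example values as assumptions rather than as the exact relation applied in this report.

    ```python
    def sricos_zmax_pier(velocity_m_s, pier_width_m, nu=1.0e-6):
        """Maximum pier-scour depth in the SRICOS-EFA framework, using the
        commonly cited form Zmax(mm) = 0.18 * Re**0.635 with Re = V*D/nu
        (nu = kinematic viscosity of water, m2/s). Returns metres."""
        re = velocity_m_s * pier_width_m / nu
        return 0.18 * re**0.635 / 1000.0

    # hypothetical flood flow of 2 m/s past a 1.5 m wide pier
    print(f"Zmax = {sricos_zmax_pier(2.0, 1.5):.2f} m")
    ```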

  5. A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES

    EPA Science Inventory

    Satellite data provide new opportunities to study the regional distribution of particulate matter. The aerosol optical depth (AOD), a derived estimate from the satellite-measured irradiance, can be compared against model-derived estimates to provide an evaluation of the columnar ...

  6. Geophysical mapping of palsa peatland permafrost

    NASA Astrophysics Data System (ADS)

    Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.

    2014-10-01

    Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground-penetrating radar and electrical resistivity tomography. Seasonal thaw depths (frost tables at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing the spatial distribution of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a simple thought experiment for the site considered here, we estimated that the thickest permafrost could thaw out completely within the next two centuries. There is thus a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.

  7. Geophysical mapping of palsa peatland permafrost

    NASA Astrophysics Data System (ADS)

    Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.

    2015-03-01

    Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground-penetrating radar and electrical resistivity tomography. Seasonal thaw depths (frost tables at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, which is indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing spatial distributions of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a back-of-the-envelope calculation for the site considered here, we estimated that the permafrost could thaw completely within the next three centuries. Thus there is a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.

  8. Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.

    PubMed

    Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin

    2018-04-02

    Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile, real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capture. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that the complementary information captured by the multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
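
    The refinement step described above minimizes a combined geometric and thermographic loss. The sketch below shows one plausible form of such a loss for already-established correspondences (a point-to-plane term plus a weighted temperature-difference term); the paper's exact loss and weighting are not reproduced here.

    ```python
    import numpy as np

    def t_icp_cost(points_src, points_dst, normals_dst, temp_src, temp_dst,
                   lam=0.1):
        """Combined geometric + thermographic alignment cost in the spirit
        of the paper's T-ICP refinement (a sketch; correspondences between
        source and destination points are assumed already established, and
        lam balancing the two terms is an assumption)."""
        # point-to-plane geometric residuals
        geo = np.einsum('ij,ij->i', points_src - points_dst, normals_dst)
        # per-correspondence surface-temperature differences
        thermal = temp_src - temp_dst
        return np.sum(geo**2) + lam * np.sum(thermal**2)
    ```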

  9. The capability of professional- and lay-rescuers to estimate the chest compression-depth target: a short, randomized experiment.

    PubMed

    van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip

    2015-04-01

    In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are seldom achieved. This study investigates the effects of differing target-depth instructions on the compression depth performance of professional and lay rescuers. 110 professional rescuers and 110 lay rescuers were randomized (1:1, four groups) to estimate the AHA-R or ERC-R depth on a paper sheet (along a given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional rescuers estimated the distance correctly according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay rescuers have severe difficulty correctly estimating distance on a sheet of paper. Professional rescuers achieve the AHA-R and ERC-R targets equally well. In lay rescuers, the AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to perform chest compressions appropriately. For teaching lay rescuers, the AHA-R, with no upper limit on compression depth, might be preferable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Applications of flood depth from rapid post-event footprint generation

    NASA Astrophysics Data System (ADS)

    Booth, Naomi; Millinship, Ian

    2015-04-01

    Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses and deploy claims adjusters, and it ultimately assists the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems an obvious choice. However, there are a number of important drawbacks that limit their utility in flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (important, as the maximum extent would ideally be captured), or how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainty in the flood extents exists where dry surfaces give return signals similar to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for producing high-resolution footprints of maximum flood extent that do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same gauged locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow are extracted from the hazard maps to create a flood footprint. Georeferenced ground, aerial, and satellite images are used to establish defence integrity, highlight breach locations, and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to the actual damage and resultant loss that become available after the event. In this way, we can evaluate the depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at the individual property level to calibrate property-type-specific depth-damage functions.
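
    The return-period step described above is a standard extreme-value calculation. A minimal sketch using a GEV fit to annual maximum flows (the abstract does not state the authors' exact distribution or fitting choices, so this is illustrative):

    ```python
    from scipy import stats

    def return_period(annual_max_flows, event_peak):
        """Return period of an observed event peak from a GEV fit to the
        historical annual-maximum flow series at the same gauge:
        T = 1 / (1 - F(x))."""
        shape, loc, scale = stats.genextreme.fit(annual_max_flows)
        p_non_exceed = stats.genextreme.cdf(event_peak, shape,
                                            loc=loc, scale=scale)
        return 1.0 / (1.0 - p_non_exceed)

    # synthetic 40-year annual-maximum series and a hypothetical event peak
    flows = stats.genextreme.rvs(-0.1, loc=100, scale=30, size=40,
                                 random_state=0)
    print(f"T = {return_period(flows, 210.0):.0f} years")
    ```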

  11. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, but it requires advancements in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for recovering the depth of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted by the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, and magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
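
    The depth-from-ratio idea can be illustrated with a simple Beer-Lambert attenuation model: if tissue attenuates the two emission wavelengths differently, the emission ratio decays exponentially with depth and can be inverted for it. The model and all coefficients below are assumptions for illustration, not the paper's analytical estimate.

    ```python
    import numpy as np

    def depth_from_ratio(I1, I2, mu1, mu2, R0):
        """Depth of an emitter below tissue from the ratio of its emission
        at two wavelengths, assuming Beer-Lambert attenuation:
        R(d) = R0 * exp(-(mu1 - mu2) * d)
        =>  d = ln(R0 * I2 / I1) / (mu1 - mu2).
        mu1, mu2: effective attenuation coefficients (1/cm) at the two
        wavelengths; R0: emission ratio at zero depth."""
        return np.log(R0 * I2 / I1) / (mu1 - mu2)

    # hypothetical values: stronger tissue attenuation at wavelength 1
    print(f"depth = {depth_from_ratio(0.8, 1.0, 2.5, 1.0, R0=2.0):.2f} cm")
    ```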

  12. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure-from-motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera-movement restriction imposed by SFM, so that general motion can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further develop a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as that from user scribbling) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user-study results showing that our approach is more intuitive and less labor intensive, while producing a 3D effect comparable to that of current state-of-the-art interactive algorithms.

  13. Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.

    PubMed

    Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas

    2017-03-01

    We report on a novel imaging system for large depth-of-field photoacoustic scanning macroscopy. Instead of the commonly used piezoelectric transducers, fiber-optic ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts originating from the off-axis sensitivity of the rings are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue-mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic-resolution photoacoustic microscopy, with an imaging resolution similar to that of photoacoustic computed tomography.

  14. Lateral variations of the Guerrero-Oaxaca subduction zone (Mexico) derived from weak seismicity (Mb3.5+) detected on a single array at teleseismic distance

    NASA Astrophysics Data System (ADS)

    Letort, Jean; Retailleau, Lise; Boué, Pierre; Radiguet, Mathilde; Gardonio, Blandine; Cotton, Fabrice; Campillo, Michel

    2018-05-01

    Detections of pP and sP phase arrivals (the so-called depth phases) at teleseismic distance provide one of the best ways to estimate earthquake focal depth, as the P-pP and P-sP delays depend strongly on the depth. Based on a new processing workflow and using a single seismic array at teleseismic distance, we can estimate the depth of clusters of small events down to magnitude Mb 3.5. Our method provides a direct view of the relative variations of seismicity depth in an active area. This study applies the new methodology to the lateral variations of the Guerrero subduction zone (Mexico), using the Eielson seismic array in Alaska (USA). After denoising the signals, 1,232 Mb 3.5+ events were detected, with clear P, pP, sP, and PcP arrivals. A high-resolution view of the lateral variations of seismicity depth in the Guerrero-Oaxaca area is thus obtained. The seismicity is shown to be mainly clustered along the plate interface, coherently following the geometry of the plate as constrained by receiver-function analysis along the Meso America Subduction Experiment profile. From this study, the hypothesis of slab tears in the western part of Guerrero and the eastern part of Oaxaca is strongly supported by dramatic lateral changes in the depth of the earthquake clusters. The presence of these two tears might explain the observed lateral variations in seismicity, which correlate with the boundaries of the slow slip events.
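
    The underlying depth sensitivity can be seen in a back-of-the-envelope form: for near-vertical rays above the source, the surface-reflected pP travels roughly twice the focal depth further than P. The sketch below is only that first-order version; the actual workflow relies on travel-time tables and takeoff angles.

    ```python
    def depth_from_pP(delta_t_s, v_p_kms=6.0):
        """First-order focal-depth estimate from the pP-P delay: for
        near-vertical rays above the source, depth = v_p * dt / 2.
        The near-source P velocity of 6 km/s is an assumed value."""
        return v_p_kms * delta_t_s / 2.0

    print(f"depth = {depth_from_pP(12.0):.0f} km")  # 12 s pP-P delay -> ~36 km
    ```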

  15. Depth-dependence of time-lapse seismic velocity change detected by a joint interferometric analysis of vertical array data

    NASA Astrophysics Data System (ADS)

    Sawazaki, K.; Saito, T.; Ueno, T.; Shiomi, K.

    2015-12-01

    In this study, utilizing the depth sensitivity of interferometric waveforms recorded by co-located Hi-net and KiK-net sensors, we separate the depths responsible for the seismic velocity change associated with the M6.3 earthquake that occurred on November 22, 2014, in central Japan. The Hi-net station N.MKGH is located about 20 km northeast of the epicenter, with its seismometer installed at 150 m depth. At the same site, KiK-net has two strong-motion seismometers installed at depths of 0 and 150 m. To estimate the average velocity change around the N.MKGH station, we apply the stretching technique to the auto-correlation function (ACF) of ambient noise recorded by the Hi-net sensor. To evaluate the sensitivity of the Hi-net ACF to velocity changes above and below 150 m depth, we perform a numerical wave propagation simulation using a 2-D finite-difference method. To obtain the velocity change above 150 m depth, we measure the response waveform from 150 m to 0 m depth by computing the deconvolution function (DCF) of earthquake records obtained by the two KiK-net vertical-array sensors. The background annual velocity variation is subtracted from the detected velocity change. From the KiK-net DCF records, the velocity reduction ratio above 150 m depth is estimated to be 4.2% and 3.1% for the periods 1-7 days and 7 days to 4 months after the mainshock, respectively. From the Hi-net ACF records, the velocity reduction ratio is estimated to be 2.2% and 1.8% for the same periods, respectively. This difference in the estimated velocity reduction ratio is attributed to the depth dependence of the velocity change. Using the depth sensitivity obtained from the numerical simulation, we estimate the velocity reduction ratio below 150 m depth to be lower than 1.0% for both periods. Thus, significant velocity reduction and recovery are observed only above 150 m depth, which may be caused by the strong ground motion of the mainshock and subsequent healing in the shallow ground.
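
    A generic implementation of the stretching technique named above is sketched below: grid-search the stretch factor that best maps the current correlation function onto a reference, with dv/v equal to minus the best stretch. The windowing, weighting, and error estimation used in the study are omitted.

    ```python
    import numpy as np

    def stretching_dvv(reference, current, t, eps_max=0.05, n_eps=201):
        """Stretching technique: find the relative velocity change dv/v
        that best maps the current correlation function onto a reference.
        A velocity change stretches lapse time as t -> t*(1 + eps), with
        dv/v = -eps; we grid-search eps for maximum correlation."""
        best_eps, best_cc = 0.0, -np.inf
        for eps in np.linspace(-eps_max, eps_max, n_eps):
            stretched = np.interp(t * (1.0 + eps), t, current)
            cc = np.corrcoef(reference, stretched)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return -best_eps, best_cc   # dv/v and the achieved correlation
    ```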

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing, and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors, and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.

  17. A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES

    EPA Science Inventory

    Satellite data provide new opportunities to study the regional distribution of particulate matter.

    The aerosol optical depth (AOD), a derived estimate from the satellite-measured radiance, can be compared against model estimates to provide an evaluation of the columnar ae...

  18. Estimation of infiltration and hydraulic resistance in furrow irrigation, with infiltration dependent on flow depth

    USDA-ARS?s Scientific Manuscript database

    The estimation of parameters of a flow-depth dependent furrow infiltration model and of hydraulic resistance, using irrigation evaluation data, was investigated. The estimated infiltration parameters are the saturated hydraulic conductivity and the macropore volume per unit area. Infiltration throu...

  19. A magnetic and gravity investigation of the Liberia Basin, West Africa

    NASA Astrophysics Data System (ADS)

    Morris Cooper, S.; Liu, Tianyou

    2011-02-01

    Gravity and magnetic analyses provide an opportunity to deduce, to a large extent, the stratigraphy, structure, and shape of the substructure. Euler deconvolution is a useful tool for estimating the locations and depths of magnetic and gravity sources. Wavelet analysis is an effective tool for filtering and improving geophysical data. Applying these two methods to gravity and magnetic data of the Liberia Basin enabled the definition of the geometry and depth of the subsurface geologic structures. The study reveals that the basin is subdivided and that the depth to basement ranges from about 5 km at its northwest end to 10 km at its broadest section eastward. Magnetic data analysis indicates shallow intrusives at depths of 0.09 km to 0.42 km, with an average depth of 0.25 km, along the margin. Other intrusives are found at average depths of 0.6 km and 1.7 km within the confines of the basin. Analysis of the gravity data indicated deep faults intersecting the transform zone.

  20. Identification of depth information with stereoscopic mammography using different display methods

    NASA Astrophysics Data System (ADS)

    Morikawa, Takamitsu; Kodera, Yoshie

    2013-03-01

    Stereoscopy in radiography was widely used in the late 1980s because it could capture complex structures in the human body, proving beneficial for diagnosis and screening. When radiologists observed images stereoscopically, they usually needed to train their eyes in order to perceive the stereoscopic effect. However, with the development of three-dimensional (3D) monitors and their use in the medical field, such eye training is no longer required. The question then arises as to whether there is any difference in recognizing depth information between conventional methods and a 3D monitor. We constructed a phantom and evaluated the difference in the capacity to identify depth information between the two methods. The phantom consists of acrylic steps with 3 mm diameter acrylic pillars on the top and bottom of each step. Seven observers viewed these images stereoscopically using the two display methods and were asked to judge the direction of the pillar on top. We compared the judged direction with the direction of the real pillar arranged on top and calculated the percentage of correct answers (PCA). The results showed that the PCA obtained using the 3D monitor method was about 5% higher than that obtained using the naked-eye method. This indicates that people can view images stereoscopically more precisely using a 3D monitor than with conventional methods, such as crossed or parallel eye viewing. We were thus able to estimate the difference in the capacity to identify depth information between the two display methods.

  1. Stress estimation in reservoirs using an integrated inverse method

    NASA Astrophysics Data System (ADS)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimates for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
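
    A minimal sketch of the optimization loop, using the open-source `cma` Python package and a deliberately simplified forward model (a single linear-in-depth stress profile standing in for the finite-element computation; all data values are hypothetical):

    ```python
    import numpy as np
    import cma  # pip install cma

    # hypothetical wellbore data: depths (m) and minimum horizontal stress
    # from leak-off tests (MPa)
    depths = np.array([800.0, 1200.0, 1600.0, 2000.0])
    sh_obs = np.array([12.5, 19.0, 26.0, 33.5])

    def sh_model(params, z):
        """Linear-in-depth boundary stress (gradient a, offset b), a
        one-segment reduction of the paper's piecewise linear Neumann
        boundary conditions."""
        a, b = params
        return a * z + b

    def misfit(params):
        # squared misfit to the wellbore stress data
        return float(np.sum((sh_model(params, depths) - sh_obs) ** 2))

    es = cma.CMAEvolutionStrategy([0.01, 0.0], 1.0)  # initial guess, step size
    es.optimize(misfit)
    print("best (gradient MPa/m, offset MPa):", es.result.xbest)
    ```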

  2. A Review of Methods Applied by the U.S. Geological Survey in the Assessment of Identified Geothermal Resources

    USGS Publications Warehouse

    Williams, Colin F.; Reed, Marshall J.; Mariner, Robert H.

    2008-01-01

    The U. S. Geological Survey (USGS) is conducting an updated assessment of geothermal resources in the United States. The primary method applied in assessments of identified geothermal systems by the USGS and other organizations is the volume method, in which the recoverable heat is estimated from the thermal energy available in a reservoir. An important focus in the assessment project is on the development of geothermal resource models consistent with the production histories and observed characteristics of exploited geothermal fields. The new assessment will incorporate some changes in the models for temperature and depth ranges for electric power production, preferred chemical geothermometers for estimates of reservoir temperatures, estimates of reservoir volumes, and geothermal energy recovery factors. Monte Carlo simulations are used to characterize uncertainties in the estimates of electric power generation. These new models for the recovery of heat from heterogeneous, fractured reservoirs provide a physically realistic basis for evaluating the production potential of natural geothermal reservoirs.
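
    The volume method with Monte Carlo uncertainty propagation can be sketched in a few lines: draw reservoir volume, temperature, and recovery factor from assumed distributions, compute the thermal energy in place, and report percentiles. The distributions and constants below are illustrative, not the USGS assessment values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    volume = rng.uniform(2e9, 8e9, N)      # reservoir volume, m3
    temp = rng.uniform(453.0, 523.0, N)    # reservoir temperature, K
    t_ref = 288.0                          # reference (rejection) temperature, K
    rho_c = 2.7e6                          # volumetric heat capacity, J/(m3 K)
    recovery = rng.uniform(0.08, 0.2, N)   # geothermal energy recovery factor

    # volume method: thermal energy in place, then the recoverable fraction
    energy_in_place = rho_c * volume * (temp - t_ref)   # joules
    recoverable = recovery * energy_in_place

    q = np.percentile(recoverable, [5, 50, 95])
    print([f"{x:.2e} J" for x in q])  # 5th/50th/95th percentile estimates
    ```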

  3. Measuring impact crater depth throughout the solar system

    USGS Publications Warehouse

    Robbins, Stuart J.; Watters, Wesley A.; Chappelow, John E.; Bray, Veronica J.; Daubar, Ingrid J.; Craddock, Robert A.; Beyer, Ross A.; Landis, Margaret E.; Ostrach, Lillian; Tornabene, Livio L.; Riggs, Jamie D.; Weaver, Brian P.

    2018-01-01

    One important, almost ubiquitous, tool for understanding the surfaces of solid bodies throughout the solar system is the study of impact craters. While measuring a distribution of crater diameters and locations is an important tool for a wide variety of studies, so too is measuring a crater's “depth.” Depth can inform numerous studies including the strength of a surface and modification rates in the local environment. There is, however, no standard data set, definition, or technique to perform this data‐gathering task, and the abundance of different definitions of “depth” and methods for estimating that quantity can lead to misunderstandings in and of the literature. In this review, we describe a wide variety of data sets and methods to analyze those data sets that have been, are currently, or could be used to derive different types of crater depth measurements. We also recommend certain nomenclature in doing so to help standardize practice in the field. We present a review section of all crater depths that have been published on different solar system bodies which shows how the field has evolved through time and how some common assumptions might not be wholly accurate. We conclude with several recommendations for researchers which could help different data sets to be more easily understood and compared.

  4. Effects of Optical Combiner and IPD Change for Convergence on Near-Field Depth Perception in an Optical See-Through HMD.

    PubMed

    Lee, Sangyoon; Hu, Xinda; Hua, Hong

    2016-05-01

    Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first arises from the difference in refraction between the virtual and see-through optical paths caused by the optical combiner required in OST-HMDs. The second arises from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for both types of error. Furthermore, we investigate their effectiveness through an experiment comparing conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40-200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, they demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.

  5. Carbon storage in Chinese grassland ecosystems: Influence of different integrative methods.

    PubMed

    Ma, Anna; He, Nianpeng; Yu, Guirui; Wen, Ding; Peng, Shunlei

    2016-02-17

    The accurate estimation of grassland carbon (C) is affected by many factors at the large scale. Here, we used six methods (three spatial interpolation methods and three grassland classification methods) to estimate the C storage of Chinese grasslands based on data published from 2004 to 2014, and assessed the uncertainty resulting from the different integrative methods. The uncertainty (coefficient of variation, CV, %) of grassland C storage was approximately 4.8% across the six methods tested and was mainly determined by soil C storage. C density and C storage to a soil depth of 100 cm were estimated to be 8.46 ± 0.41 kg C m-2 and 30.98 ± 1.25 Pg C, respectively. Ecosystem C storage comprised 0.23 ± 0.01 Pg C (0.7%) in above-ground biomass, 1.38 ± 0.14 Pg C (4.5%) in below-ground biomass, and 29.37 ± 1.2 Pg C (94.8%) in the 0-100 cm soil layer. C storage calculated by the grassland classification methods (18 grassland types) was closer to the mean value than that calculated by the spatial interpolation methods. Differences in integrative methods may partially explain the high uncertainty of C storage estimates across studies. This first evaluation demonstrates the importance of multi-methodological approaches for accurately estimating C storage in large-scale terrestrial ecosystems.

  6. Hydrogeologic structure underlying a recharge pond delineated with shear-wave seismic reflection and cone penetrometer data

    USGS Publications Warehouse

    Haines, S.S.; Pidlisecky, Adam; Knight, R.

    2009-01-01

    With the goal of improving the understanding of the subsurface structure beneath the Harkins Slough recharge pond in Pajaro Valley, California, USA, we have undertaken a multimodal approach to develop a robust velocity model that yields an accurate seismic reflection section. Our shear-wave reflection section helps us identify and map an important and previously unknown flow barrier at depth; it also helps us map other relevant structure within the surficial aquifer. Development of an accurate velocity model is essential for depth conversion and interpretation of the reflection section. We incorporate information provided by shear-wave seismic methods along with cone penetrometer testing and seismic cone penetrometer testing measurements. One velocity model is based on reflected and refracted arrivals and provides reliable velocity estimates for the full depth range of interest when anchored to interface depths determined from cone data and borehole drillers' logs. A second velocity model is based on seismic cone penetrometer testing data, which provide higher-resolution 1D velocity columns with error estimates within the depth range of the cone penetrometer testing. Comparison of the reflection/refraction model with the seismic cone penetrometer testing model also suggests that the mass of the cone truck can influence velocity, with an effect equivalent to approximately one metre of extra overburden stress. Together, these velocity models and the depth-converted reflection section result in a better-constrained hydrologic model of the subsurface and illustrate the pivotal role that cone data can play in the reflection processing workflow. © 2009 European Association of Geoscientists & Engineers.

  7. Concealed object segmentation and three-dimensional localization with passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon

    2013-05-01

    Millimeter-wave imaging draws increasing attention in security applications for detecting weapons concealed under clothing. In this paper, concealed-object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object; the distance is estimated from the disparity between the corresponding centers of the segmented objects in the stereo pair. Experimental results are provided with an analysis of the depth resolution.
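
    The longitudinal-distance step reduces to the standard pinhole stereo relation, sketched below with hypothetical focal length, baseline, and disparity values:

    ```python
    def range_from_disparity(disparity_px, focal_px, baseline_m):
        """Longitudinal distance from the disparity between corresponding
        object centers in a stereo image pair (standard pinhole stereo):
        Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    print(f"Z = {range_from_disparity(8.0, 600.0, 0.5):.1f} m")  # -> 37.5 m
    ```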

  8. Estimation of global snow cover using passive microwave data

    NASA Astrophysics Data System (ADS)

    Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.

    2003-04-01

    This paper describes an approach to estimating global snow cover using satellite passive microwave data. Snow cover is detected using the high-frequency scattering signal that snow imposes on naturally emitted microwave radiation, as observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent using the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky-particle theory to predict the brightness temperature of a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested using ground measurements from two continental-scale river catchments (the Nelson River and the Ob River in Russia). This regional-scale testing of the algorithm shows that the average daily standard error between estimated and measured snow depths ranges from 0 to 40 cm at point observations. Bias characteristics differ for each basin. A fraction of the error is related to uncertainties in the grain-growth initialization states and in grain-size changes through the winter season, which directly affect the parameterization of the snow depth estimation in the DMRT model. The algorithm also does not include a correction for forest cover, and this effect is clearly observed in the retrievals. Finally, error is also related to scale differences between in situ ground measurements and area-integrated satellite estimates. With AMSR-E data, improvements to snow depth and water equivalent estimates are expected, since AMSR-E will have twice the spatial resolution of the SSM/I and will be able to better characterize the subnivean snow environment from an expanded range of microwave frequencies.
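
    The simplest member of this retrieval family is the spectral-difference form of Chang et al. (1987), sketched below for orientation; the AMSR-E algorithm described above replaces the fixed coefficient with DMRT emission modeling and a time-evolving grain size.

    ```python
    def snow_depth_spectral_difference(tb19h_k, tb37h_k, coeff_cm_per_k=1.59):
        """Classic spectral-difference snow-depth retrieval (Chang et al.
        1987 form): SD (cm) = c * (Tb19H - Tb37H), with c about 1.59 cm/K
        for dry snow. Negative differences (no scattering signal) are
        mapped to zero, i.e. no dry snow detected."""
        sd = coeff_cm_per_k * (tb19h_k - tb37h_k)
        return max(sd, 0.0)

    # hypothetical brightness temperatures (K)
    print(f"SD = {snow_depth_spectral_difference(245.0, 225.0):.0f} cm")
    ```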

  9. The Characteristics of Peats and Co2 Emission Due to Fire in Industrial Plant Forests

    NASA Astrophysics Data System (ADS)

    Ratnaningsih, Ambar Tri; Rayahu Prasytaningsih, Sri

    2017-12-01

    Riau Province faces a high threat of forest fire on peat soils, especially in industrial forest areas. Fires release carbon (CO2) emissions to the atmosphere. The magnitude of carbon losses from the burning of peatlands can be estimated by knowing the characteristics of the burned peat and estimating the CO2 emissions produced. The objectives of the study are to determine the characteristics of fire-affected peat and to estimate carbon storage and CO2 emissions. The research location is in an industrial forest plantation area in Bengkalis Regency, Riau Province. Peat carbon was measured by the loss-on-ignition method. The results show that the research location has a peat depth of 600-800 cm, which is classified as very deep. The peat fiber content ranges from 38 to 75 percent, classifying it as hemic peat. The average bulk density was 0.253 g cm-3 (0.087-0.896 g cm-3). The soil ash content is 2.24%, and the stored peat carbon stock for an 8 m peat thickness is 10,723.69 tons ha-1. Forest fire was predicted to burn peat to a depth of 100 cm, producing CO2 emissions of 6,355.809 tons ha-1.

  10. Experimental Results of Underwater Cooperative Source Localization Using a Single Acoustic Vector Sensor

    PubMed Central

    Felisberto, Paulo; Rodriguez, Orlando; Santos, Paulo; Ey, Emanuel; Jesus, Sérgio M.

    2013-01-01

    This paper aims at estimating the azimuth, range, and depth of a cooperative broadband acoustic source with a single vector sensor in a multipath underwater environment, where the received signal is assumed to be a linear combination of echoes of the source-emitted waveform. A vector sensor is a device that measures the scalar acoustic pressure field and the vectorial acoustic particle velocity field at a single location in space. The amplitudes of the echoes in the vector sensor components allow one to determine their azimuth and elevation. Assuming that the environmental conditions of the channel are known, source range and depth are obtained from the estimates of elevation and the relative time delays of the different echoes using a ray-based backpropagation algorithm. The proposed method is tested using simulated data and is further applied to experimental data from the Makai'05 experiment, where 8-14 kHz chirp signals were acquired by a vector sensor array. It is shown that for short ranges, the position of the source is estimated in agreement with the geometry of the experiment. The method is computationally undemanding and thus well suited to mobile and light platforms, where space and power are limited. PMID:23857257
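
    The azimuth part of the problem has a compact standard estimator, sketched below: the horizontal components of the time-averaged active intensity (pressure times particle velocity) point back along the arrival direction. The paper's range and depth estimation via ray backpropagation is not reproduced here.

    ```python
    import numpy as np

    def azimuth_from_vector_sensor(p, vx, vy):
        """Source azimuth (degrees) from a single acoustic vector sensor
        using the horizontal components of the time-averaged active
        intensity -- a standard vector-sensor DOA estimate.
        p: pressure time series; vx, vy: particle-velocity components."""
        ix = np.mean(p * vx)
        iy = np.mean(p * vy)
        return np.degrees(np.arctan2(iy, ix))
    ```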

  11. Multiscale site-response mapping: A case study of Parkfield, California

    USGS Publications Warehouse

    Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Morgan, E.C.; Kaklamanos, J.

    2011-01-01

    The scale of previously proposed methods for mapping site response ranges from global coverage down to individual urban regions. Typically, spatial coverage and accuracy are inversely related. We use the densely spaced strong-motion stations in Parkfield, California, to estimate the accuracy of different site-response mapping methods and demonstrate a method for integrating multiple site-response estimates from the site to the global scale. This method is simply a weighted mean of a suite of different estimates, where the weights are the inverse of the variance of the individual estimates. Thus, the dominant site-response model varies in space as a function of the accuracy of the different models. For mapping applications, site-response models should be judged in terms of both spatial coverage and the degree of correlation with observed amplifications. Performance varies with period, but in general the Parkfield data show that: (1) where a velocity profile is available, the square-root-of-impedance (SRI) method outperforms measured VS30 (30 m divided by the S-wave travel time to 30 m depth), and (2) where velocity profiles are unavailable, the topographic slope method outperforms surficial geology for short periods, but geology outperforms slope at longer periods. We develop new equations to estimate site response from topographic slope, derived from the Next Generation Attenuation (NGA) database.
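
    The combination rule itself is compact enough to state in code. The sketch below implements the inverse-variance weighted mean described above; the input amplifications and variances are hypothetical placeholders, not values from the study.

```python
import numpy as np

def combine_site_response(estimates, variances):
    """Inverse-variance weighted mean of site-response estimates.

    Implements the combination rule described in the abstract: each
    model's estimate is weighted by the inverse of its variance, so
    the most accurate model available at a location dominates.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    combined = np.sum(weights * estimates) / np.sum(weights)
    combined_variance = 1.0 / np.sum(weights)
    return combined, combined_variance

# Hypothetical amplification estimates from three mapping methods
# (e.g. SRI profile, topographic slope, surficial geology)
print(combine_site_response([2.1, 1.6, 1.8], [0.05, 0.40, 0.25]))
```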

  12. Studying unsaturated epikarst water storage properties by time lapse surface to depth gravity measurements

    NASA Astrophysics Data System (ADS)

    Deville, S.; Champollion, C.; chery, J.; Doerflinger, E.; Le Moigne, N.; Bayer, R.; Vernant, P.

    2011-12-01

    The assessment of water storage in the unsaturated zone in karstic areas is particularly challenging. Indeed, water flow and water storage occur in quite heterogeneous ways through small-scale porosity, fractures, joints and large voids. Because of this large heterogeneity, it is difficult to estimate the amount of water circulating in the vadose zone by hydrological means. One indirect method consists of measuring the gravity variation associated with water storage and withdrawal. Here we apply a gravimetric method in which gravity is measured at the surface and at depth at different sites, and the time variations of the surface-to-depth (STD) gravity differences are compared for each site. In this study we attempt to evaluate the magnitude of epikarstic water storage variation in various karst settings using a CG5 portable gravimeter. Surface-to-depth gravity measurements have been performed twice a year since 2009, at the surface and inside caves at different depths, on three karst aquifers in southern France: (1) a limestone site on the Larzac plateau with a vadose zone thickness of 300 m, where measurements are made at five locations at depths from 0 to 50 m; (2) a dolomitic site on the Larzac plateau (Durzon karst aquifer) with a vadose zone thickness of 200 m, where measurements are taken at the surface and at 60 m depth; and (3) a limestone site on the Hortus and "Larzac Septentrional" karst aquifers with a vadose zone thickness of only 35 m, where measurements are taken at the surface and at 30 m depth. Our measurements are used in two ways. First, the STD differences between dry and wet seasons are used to estimate the differential storage capacity of each aquifer. Surprisingly, the differential storage capacity of all the sites is relatively invariant despite their variable geological and hydrological contexts. Moreover, the STD gravity variations on site 1 show that no water storage variation occurs beneath 10 m depth, suggesting that most of the differential storage is taken up by the epikarst. Second, we use STD gravity differences to determine effective density values for each site. These integrative density values are compared to measured grain densities from core samples in order to obtain the apparent porosity and saturation representative of the investigated volume. We then discuss the relation between the physical characteristics of each unsaturated zone and its water storage capacity. It appears that epikarst water storage variation is only weakly related to lithology. We also discuss the reasons for specific water storage in the epikarst. Because epikarst water storage has been claimed to be a general characteristic of karst systems, a gravimetric approach appears to be a promising method to test this hypothesis quantitatively.
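
    A minimal sketch of the surface-to-depth principle, assuming the standard infinite-slab approximation of underground gravimetry (the study's actual treatment is not given in the abstract): the vertical gradient between a surface and an underground station combines the free-air gradient with twice the Bouguer slab term, so an apparent density follows directly from the STD difference.

```python
# Apparent density from a surface-to-depth (STD) gravity difference,
# using the infinite-slab approximation of underground gravimetry.
# The slab between the two stations counts twice: it attracts the
# surface station downward and the underground station upward.
# A sketch of the principle only, not the study's actual inversion.

FREE_AIR = 0.3086      # free-air gradient, mGal/m
SLAB = 2 * 0.04193     # mGal/m per (g/cm3), doubled slab term

def apparent_density(delta_g_mgal, depth_m):
    """Density (g/cm3) implied by g(surface) - g(depth) = delta_g."""
    gradient = delta_g_mgal / depth_m           # observed mGal/m
    return (FREE_AIR - gradient) / SLAB

# Illustrative numbers: a 4.2 mGal STD difference over 30 m of rock
print(apparent_density(4.2, 30.0))  # ~2.0 g/cm3 apparent density
```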

  13. A Simple Visual Estimation of Food Consumption in Carnivores

    PubMed Central

    Potgieter, Katherine R.; Davies-Mostert, Harriet T.

    2012-01-01

    Belly-size ratings or belly scores are frequently used in carnivore research to rate whether and how much an animal has eaten. This method provides only a rough ordinal measure of fullness and does not quantify the amount of food an animal has consumed. Here we present a method for estimating the amount of meat consumed by individual African wild dogs Lycaon pictus. We fed 0.5 kg pieces of meat to wild dogs being temporarily held in enclosures and measured the corresponding change in belly size using lateral photographs taken perpendicular to the animal. The ratio of belly depth to body length was positively related to the mass of meat consumed and provided a useful estimate of the amount consumed. Similar relationships could be calculated to determine amounts consumed by other carnivores, thus providing a useful tool in the study of feeding behaviour. PMID:22567086

  14. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  15. Groundwater-dependent ecosystems: recent insights from satellite and field-based studies

    NASA Astrophysics Data System (ADS)

    Eamus, D.; Zolfaghar, S.; Villalobos-Vega, R.; Cleverly, J.; Huete, A.

    2015-10-01

    Groundwater-dependent ecosystems (GDEs) are at risk globally due to unsustainable levels of groundwater extraction, especially in arid and semi-arid regions. In this review, we examine recent developments in the ecohydrology of GDEs with a focus on three knowledge gaps: (1) how do we locate GDEs, (2) how much water is transpired from shallow aquifers by GDEs and (3) what are the responses of GDEs to excessive groundwater extraction? The answers to these questions will determine water allocations that are required to sustain functioning of GDEs and to guide regulations on groundwater extraction to avoid negative impacts on GDEs. We discuss three methods for identifying GDEs: (1) techniques relying on remotely sensed information; (2) fluctuations in depth-to-groundwater that are associated with diurnal variations in transpiration; and (3) stable isotope analysis of water sources in the transpiration stream. We then discuss several methods for estimating rates of GW use, including direct measurement using sapflux or eddy covariance technologies, estimation of a climate wetness index within a Budyko framework, spatial distribution of evapotranspiration (ET) using remote sensing, groundwater modelling and stable isotopes. Remote sensing methods often rely on direct measurements to calibrate the relationship between vegetation indices and ET. ET from GDEs is also determined using hydrologic models of varying complexity, from the White method to fully coupled, variable saturation models. Combinations of methods are typically employed to obtain clearer insight into the components of groundwater discharge in GDEs, such as the proportional importance of transpiration versus evaporation (e.g. using stable isotopes) or from groundwater versus rainwater sources. Groundwater extraction can have severe consequences for the structure and function of GDEs. In the most extreme cases, phreatophytes experience crown dieback and death following groundwater drawdown. We provide a brief review of two case studies of the impacts of GW extraction and then provide an ecosystem-scale, multiple trait, integrated metric of the impact of differences in groundwater depth on the structure and function of eucalypt forests growing along a natural gradient in depth-to-groundwater. We conclude with a discussion of a depth-to-groundwater threshold in this mesic GDE. Beyond this threshold, significant changes occur in ecosystem structure and function.

  16. Source mechanism of the 2006 M5.1 Wen'an Earthquake determined from a joint inversion of local and teleseismic broadband waveform data

    NASA Astrophysics Data System (ADS)

    Huang, J.; Ni, S.; Niu, F.; Fu, R.

    2007-12-01

    On July 4th, 2006, a magnitude 5.1 earthquake occurred at Wen'an, ~100 km south of Beijing, and was felt in the Beijing metropolitan area. To better understand the regional tectonics, we have inverted local and teleseismic broadband waveform data to determine the focal mechanism of this earthquake. We selected waveform data from 9 stations of the recently installed Beijing metropolitan digital Seismic Network (BSN). These stations are located within 600 km and provide good azimuthal coverage of the earthquake. To better fit the lower-amplitude P waveform, we employed different weights for the P-wave and surface-wave arrivals. A grid search method was employed to find the strike, dip and slip of the earthquake that best fit the P and surface waveforms recorded on all three components (the tangential component of the P-wave arrivals was not used). Synthetic waveforms were computed with an F-K method. Two crustal velocity models were used in the synthetic calculation to reflect a rapid east-west transition in crustal structure observed by seismic and geological studies in the study area. The 3D grid search yields reasonable constraints on the fault geometry and the slip vector, with a less well determined focal depth. We therefore combined teleseismic waveform data from 8 stations of the Global Seismic Network in a joint inversion. Clearly identifiable depth phases (pP, sP) recorded at the teleseismic stations provided a better constraint on the source depth. Results from the joint inversion indicate that the Wen'an earthquake was mainly a right-lateral strike-slip event (rake -150°) on a near-vertical (dip 80°), NNE-trending (strike 210°) fault. The estimated focal depth is ~14-15 km, and the moment magnitude is 5.1. The estimated fault geometry agrees well with the aftershock distribution and is consistent with the major fault systems in the area, which developed under a NNE-SSW oriented compressional stress field. Keywords: waveform modeling, source mechanism, grid search method, cut-and-paste method, aftershock distribution

  17. Is CO2 emission a side effect of financial development? An empirical analysis for China.

    PubMed

    Hao, Yu; Zhang, Zong-Yong; Liao, Hua; Wei, Yi-Ming; Wang, Shuo

    2016-10-01

    Based on panel data for 29 Chinese provinces from 1995 to 2012, this paper explores the relationship between financial development and environmental quality in China. A comprehensive framework is utilized to estimate both the direct and indirect effects of financial development on CO2 emissions in China using a carefully designed two-stage regression model. The first-difference and orthogonal-deviation Generalized Method of Moments (GMM) estimators are used to control for potential endogeneity and to introduce dynamics. To ensure the robustness of the estimations, two indicators measuring financial development (financial depth and financial efficiency) are used. The empirical results indicate that the direct effects of financial depth and financial efficiency on environmental quality are positive and negative, respectively. The indirect effects of both indicators are U-shaped and dominate the shape of the total effects. These findings suggest that the influence of financial development on the environment depends on the level of economic development. At the early stage of economic growth, financial development is environmentally friendly. When the economy is highly developed, a higher level of financial development is harmful to environmental quality.

  18. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  19. Comparison of hydraulic conductivities for a sand and gravel aquifer in southeastern Massachusetts, estimated by three methods

    USGS Publications Warehouse

    Warren, L.P.; Church, P.E.; Turtora, Michael

    1996-01-01

    Hydraulic conductivities of a sand and gravel aquifer were estimated by three methods: constant-head multiport-permeameter tests, grain-size analyses (with the Hazen approximation method), and slug tests. Sediment cores from 45 boreholes were left undivided or divided into two or three vertical sections to estimate hydraulic conductivity based on permeameter tests and grain-size analyses. The cores were collected from depth intervals in the screened zone of the aquifer at each observation well. Slug tests were performed on 29 observation wells installed in the boreholes. Hydraulic conductivities of 35 sediment cores estimated by permeameter tests ranged from 0.9 to 86 meters per day, with a mean of 22.8 meters per day. Hydraulic conductivities of 45 sediment cores estimated by grain-size analyses ranged from 0.5 to 206 meters per day, with a mean of 40.7 meters per day. Hydraulic conductivities of aquifer material at 29 observation wells estimated by slug tests ranged from 0.6 to 79 meters per day, with a mean of 32.9 meters per day. The repeatability of the estimated hydraulic conductivities was within 30 percent for the permeameter method, 12 percent for the grain-size method, and 9.5 percent for the slug-test method. Statistical tests determined that the medians of the estimates from the slug tests and grain-size analyses were not significantly different from each other but were significantly higher than the median of the estimates from the permeameter tests. Because the permeameter test is the only method considered that estimates vertical hydraulic conductivity, the difference in estimates may be attributed to vertical or horizontal anisotropy. The difference between the average hydraulic conductivity estimated by each method and the hydraulic conductivity determined from an aquifer test conducted near the study area was less than 55 percent.
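
    For the grain-size method, the Hazen approximation named above has a standard textbook form, K ~ C*d10^2, with K in cm/s and the effective grain size d10 in mm. The sketch below uses a nominal C of 1.0, which is an assumption; the report's coefficient is not stated in the abstract.

```python
def hazen_hydraulic_conductivity(d10_mm, c=1.0):
    """Hazen approximation: K (cm/s) ~= C * d10^2, with d10 in mm.

    `c` is an empirical coefficient (commonly quoted near 1.0 for
    clean sands, in cm/s per mm^2); the report's exact value is not
    given in the abstract, so this is illustrative only.
    """
    k_cm_s = c * d10_mm ** 2
    return k_cm_s * 864.0  # convert cm/s -> m/day

# Illustrative effective grain sizes (mm) from fine to coarse sand;
# results fall within the 0.5-206 m/day range reported above.
for d10 in (0.08, 0.15, 0.30):
    print(d10, round(hazen_hydraulic_conductivity(d10), 1), "m/day")
```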

  20. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds. The two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2011-10-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. CN values can be selected from tables, but it is more accurate to estimate the CN value from measured rainfall-runoff data (when available) in a watershed. Previous researchers have shown that CN values calculated from measured rainfall-runoff data vary systematically with rainfall depth, and they suggested determining a single asymptotic CN value, observed for very high rainfall depths, to characterize a watershed's runoff response. In this paper, the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the inevitable spatial variability of the soil-cover complex within watersheds is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability to two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is derived theoretically, analyzed systematically, and found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that determining CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the original method based on a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
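
    A minimal sketch of the two-CN composite idea, using the standard SCS-CN runoff equation Q = (P - Ia)^2 / (P - Ia + S) with S = 25400/CN - 254 (mm) and Ia = 0.2S; the CN values and area fraction below are illustrative, not fitted parameters from the paper.

```python
def scs_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """Direct runoff Q (mm) from the SCS-CN equation for rainfall P."""
    s = 25400.0 / cn - 254.0          # potential retention (mm)
    ia = ia_ratio * s                 # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def two_cn_runoff_mm(p_mm, cn1, cn2, f1):
    """Composite runoff from a two-CN heterogeneous watershed.

    A sketch of the two-CN concept: the watershed is reduced to two CN
    classes, area fraction f1 at cn1 and (1 - f1) at cn2, and their
    runoff depths are area-weighted.
    """
    return f1 * scs_runoff_mm(p_mm, cn1) + (1 - f1) * scs_runoff_mm(p_mm, cn2)

# At small rainfall only the high-CN fraction produces runoff, so the
# apparent composite CN drifts with rainfall depth, as the paper notes.
for p in (20.0, 50.0, 100.0, 200.0):
    print(p, round(two_cn_runoff_mm(p, cn1=90.0, cn2=60.0, f1=0.3), 1))
```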

  1. Speckle variance OCT for depth resolved assessment of the viability of bovine embryos

    PubMed Central

    Caujolle, S.; Cernat, R.; Silvestri, G.; Marques, M. J.; Bradu, A.; Feuchter, T.; Robinson, G.; Griffin, D. K.; Podoleanu, A.

    2017-01-01

    The morphology of embryos produced by in vitro fertilization (IVF) is commonly used to estimate their viability. However, imaging by standard microscopy is subjective and unable to assess the embryo on a cellular scale after compaction. Optical coherence tomography is an imaging technique that can produce a depth-resolved profile of a sample and can be coupled with speckle variance (SV) to detect motion on a micron scale. In this study, day 7 post-IVF bovine embryos were observed either short-term (10 minutes) or long-term (over 18 hours) and analyzed by swept source OCT and SV to resolve their depth profile and characterize micron-scale movements potentially associated with viability. The percentage of en face images showing movement at any given time was calculated as a method to detect the vital status of the embryo. This method could be used to measure the levels of damage sustained by an embryo, for example after cryopreservation, in a rapid and non-invasive way. PMID:29188109

  2. Cyclic Hardness Test PHYBALCHT: A New Short-Time Procedure to Estimate Fatigue Properties of Metallic Materials

    NASA Astrophysics Data System (ADS)

    Kramer, Hendrik; Klein, Marcus; Eifler, Dietmar

    Conventional methods to characterize the fatigue behavior of metallic materials are very time- and cost-consuming. For this reason, the new short-time procedure PHYBALCHT was developed at the Institute of Materials Science and Engineering at the University of Kaiserslautern. This method requires only a planar material surface to perform cyclic force-controlled hardness indentation tests. To characterize the cyclic elastic-plastic behavior of the test material, the change of the force-indentation-depth hysteresis is plotted versus the number of indentation cycles. In analogy to the plastic strain amplitude, the indentation-depth width of the hysteresis loop is measured at half the minimum force and is called the plastic indentation-depth amplitude. Its change as a function of the number of indentation cycles can be described by power laws. One of these power laws contains the hardening exponent e_II,CHT, which correlates very well with the amount of cyclic hardening in conventional constant-amplitude fatigue tests.

  3. Thermal structure of Sikhote Alin and adjacent areas based on spectral analysis of the anomalous magnetic field

    NASA Astrophysics Data System (ADS)

    Didenko, A. N.; Nosyrev, M. Yu.; Shevchenko, B. F.; Gilmanova, G. Z.

    2017-11-01

    The depth of the base of the magnetoactive layer and the geothermal gradient in the Sikhote Alin crust are estimated using a method that determines the Curie point depth of magnetoactive masses from spectral analysis of the anomalous magnetic field. A detailed map of the geothermal gradient is constructed for the first time for Sikhote Alin and adjacent areas of the Central Asian belt. Analysis of this map shows that zones with a higher geothermal gradient coincide geographically with areas of higher seismicity.
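
    The abstract does not specify the spectral variant used, but the widely used centroid method (e.g., Tanaka et al., 1999) estimates the Curie point depth from two slopes of the radially averaged power spectrum, as sketched below; the synthetic spectrum and wavenumber bands are illustrative, and the demo recovers the true base only approximately because of finite-wavenumber bias.

```python
import numpy as np

def curie_point_depth(k_rad_km, power, top_band, centroid_band):
    """Curie point depth from a radially averaged magnetic power spectrum.

    Generic centroid method: the depth to the top of the magnetic
    layer, Zt, is the negative slope of ln(P^0.5) vs k at higher
    wavenumbers; the centroid depth Z0 is the negative slope of
    ln(P^0.5 / k) vs k at low wavenumbers; the base (Curie point
    depth) is Zb = 2*Z0 - Zt. Wavenumber k in rad/km. The paper's
    exact spectral variant is not given in the abstract.
    """
    k = np.asarray(k_rad_km, dtype=float)
    p = np.asarray(power, dtype=float)

    def band_slope(y, lo, hi):
        m = (k >= lo) & (k <= hi)
        return np.polyfit(k[m], y[m], 1)[0]

    zt = -band_slope(np.log(np.sqrt(p)), *top_band)
    z0 = -band_slope(np.log(np.sqrt(p) / k), *centroid_band)
    return 2.0 * z0 - zt  # km

# Schematic layer spectrum with top at 1 km and base at 29 km
k = np.linspace(0.005, 1.0, 400)
p = np.exp(-2 * k * 1.0) * (1.0 - np.exp(-k * 28.0)) ** 2
print(curie_point_depth(k, p, top_band=(0.4, 1.0),
                        centroid_band=(0.005, 0.02)))  # ~27.5 km
```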

  4. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    PubMed

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    Estimation of ultrasound attenuation in soft tissue is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the performance of ultrasound attenuation estimation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals: phase compensation of each RF segment using the normalized cross-correlation, to minimize estimation errors due to phase variations, and a weighted averaging technique, to maximize the signal-to-noise ratio (SNR). Simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while conventional methods estimate them within 2.96%. The proposed method is especially effective for signals reflected from deeper depths, where the SNR is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimates (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered.
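
    One plausible reading of the two proposed steps is sketched below: each RF segment is time-aligned to a reference via the peak of the cross-correlation (phase compensation) before its periodogram enters a weighted average, with segment energy standing in for SNR. The exact alignment and weighting schemes of the paper are not given in the abstract.

```python
import numpy as np

def block_power_spectrum(segments, weights=None):
    """Phase-compensated, weighted-average power spectrum of RF segments.

    Each gated RF segment is time-aligned to the first segment via
    the peak of the cross-correlation (phase compensation), and the
    segment periodograms are then averaged with weights (here segment
    energy, a crude SNR proxy). A sketch of the idea only; the paper's
    exact weighting scheme is not given in the abstract.
    """
    segments = np.asarray(segments, dtype=float)
    ref = segments[0]
    n = segments.shape[1]
    aligned = []
    for seg in segments:
        xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
        lag = np.argmax(xc) - (n - 1)       # sample shift vs reference
        aligned.append(np.roll(seg, -lag))  # compensate the phase shift
    aligned = np.asarray(aligned)
    if weights is None:
        weights = np.sum(aligned ** 2, axis=1)   # energy as SNR proxy
    weights = weights / np.sum(weights)
    spectra = np.abs(np.fft.rfft(aligned, axis=1)) ** 2
    return np.sum(weights[:, None] * spectra, axis=0)

# Demo: identical pulses with random delays and additive noise
rng = np.random.default_rng(0)
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 128) / 6.0) ** 2) * np.sin(0.8 * t)
segs = [np.roll(pulse, d) + 0.05 * rng.standard_normal(256)
        for d in (0, 3, -5, 7)]
print(block_power_spectrum(segs).shape)  # (129,) one-sided spectrum
```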

  5. An approach to parameter estimation for breast tumor by finite element method

    NASA Astrophysics Data System (ADS)

    Xu, A.-qing; Yang, Hong-qin; Ye, Zhen; Su, Yi-ming; Xie, Shu-sen

    2009-02-01

    The temperature of the human body at the skin surface depends on metabolic activity, blood flow, and the temperature of the surroundings. Any abnormality in the tissue, such as the presence of a tumor, alters the normal temperature of the skin surface due to the increased metabolic activity of the tumor. Therefore, abnormal skin temperature profiles are an indication of diseases such as tumors or cancer. This study presents an approach to detect female breast tumors and estimate their related parameters by combining the finite element method with infrared thermography of the surface temperature profile. A simplified 2D breast model embedding a tumor, based on female breast anatomical structure and physiological characteristics, was first established, and the finite element method was then used to solve the heat diffusion equation for the surface temperature profiles of the breast. A genetic optimization algorithm was used to estimate tumor parameters such as depth, size and blood perfusion by minimizing a fitness function comparing the temperature profiles simulated by the finite element method with the experimental data obtained by infrared thermography. This preliminary study shows it is possible to determine the depth and the heat generation rate of a breast tumor by using infrared thermography and optimization analysis, which may play an important role in female breast healthcare and disease evaluation or early detection. For clinical use, a more anatomically accurate 3D breast geometry should be considered in further investigations.

  6. Integrating spatial and temporal oxygen data to improve the quantification of in situ petroleum biodegradation rates.

    PubMed

    Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D

    2013-03-15

    Accurate estimation of biodegradation rates during remediation of petroleum-impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe of biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile, which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg(-1) day(-1), while rates accounting for diffusion ranged up to 87 mg kg(-1) day(-1). Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80%, depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over- or underestimation of biodegradation rates has cost implications for the successful remediation of petroleum-impacted sites.

  7. A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,

    2010-01-01

    A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
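
    The three-point band depth itself has a standard form (a continuum-removed band depth after Clark and Roush): interpolate a linear continuum between two shoulder channels and take 1 - R_center / R_continuum. The sketch below applies it to a synthetic spectrum; the channel positions are illustrative, not the paper's exact channels.

```python
import numpy as np

def three_point_band_depth(wavelengths, reflectance, left, center, right):
    """Three-point band depth of an absorption feature.

    The continuum is interpolated linearly between two shoulder
    wavelengths and the band depth is 1 - R_center / R_continuum.
    In the paper this is applied per pixel at the aliphatic
    hydrocarbon features near 1.2, 1.7 and 2.3 microns; the shoulder
    and center positions used below are illustrative only.
    """
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)

    def channel(wl):  # nearest spectrometer channel to wavelength wl
        return r[np.argmin(np.abs(w - wl))]

    r_l, r_c, r_r = channel(left), channel(center), channel(right)
    continuum = r_l + (r_r - r_l) * (center - left) / (right - left)
    return 1.0 - r_c / continuum

# Synthetic spectrum with an absorption dip near 1.73 microns
w = np.linspace(1.55, 1.90, 36)
r = 0.4 - 0.1 * np.exp(-0.5 * ((w - 1.73) / 0.02) ** 2)
print(three_point_band_depth(w, r, left=1.65, center=1.73, right=1.80))
```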

  8. Detection of spatio-temporal change of ocean acoustic velocity for observing seafloor crustal deformation applying seismological methods

    NASA Astrophysics Data System (ADS)

    Eto, S.; Nagai, S.; Tadokoro, K.

    2011-12-01

    Our group has developed a system for observing seafloor crustal deformation with a combination of acoustic ranging and kinematic GPS positioning techniques. One effective way to reduce the estimation error of submarine benchmark positions in our system is to model the variation of ocean acoustic velocity. We estimated various one-dimensional velocity-depth models under some constraints, because it is difficult to estimate a 3-dimensional acoustic velocity structure, including its temporal change, from our simple acquisition procedure for acoustic ranging data. We then applied the joint hypocenter determination method from seismology [Kissling et al., 1994] to the acoustic ranging data. We imposed two constraints in the inversion procedure: 1) the acoustic velocity in the deeper part is fixed, because it is usually stable in both space and time, and 2) each inverted velocity model must decrease with depth. We found two remarkable spatio-temporal changes of acoustic velocity: 1) variations of travel-time residuals at the same points within a short time, and 2) large differences between residuals at neighboring points associated with travel times from different benchmarks. The first observation cannot be explained solely by changes in atmospheric conditions, including heating by sunlight. To verify the residual variations of the second observation, we performed forward modeling of acoustic ranging data with velocity models that include velocity anomalies. We calculated travel times with a pseudo-bending ray tracing method [Um and Thurber, 1987] to examine the effects of a velocity anomaly on the travel-time differences. Comparison between the observed residuals and the travel-time differences from the forward modeling shows that velocity anomaly bodies at shallow depth can produce these anomalous residuals, which may indicate moving water bodies. We need to apply an acoustic velocity structure model with velocity anomalies in the analysis of acoustic ranging data and/or to develop a new system with a large number of sea-surface stations to detect them, which may reduce the error of seafloor benchmark positions.

  9. Estimated Depth to Ground Water and Configuration of the Water Table in the Portland, Oregon Area

    USGS Publications Warehouse

    Snyder, Daniel T.

    2008-01-01

    Reliable information on the configuration of the water table in the Portland metropolitan area is needed to address concerns about various water-resource issues, especially with regard to potential effects from stormwater injection systems such as UIC (underground injection control) systems that are either existing or planned. To help address these concerns, this report presents the estimated depth-to-water and water-table elevation maps for the Portland area, along with estimates of the relative uncertainty of the maps and seasonal water-table fluctuations. The method of analysis used to determine the water-table configuration in the Portland area relied on water-level data from shallow wells and surface-water features that are representative of the water table. However, the largest source of available well data is water-level measurements in reports filed by well constructors at the time of new well installation, but these data frequently were not representative of static water-level conditions. Depth-to-water measurements reported in well-construction records generally were shallower than measurements by the U.S. Geological Survey (USGS) in the same or nearby wells, although many depth-to-water measurements were substantially deeper than USGS measurements. Magnitudes of differences in depth-to-water measurements reported in well records and those measured by the USGS in the same or nearby wells ranged from -119 to 156 feet with a mean of the absolute value of the differences of 36 feet. One possible cause for the differences is that water levels in many wells reported in well records were not at equilibrium at the time of measurement. As a result, the analysis of the water-table configuration relied on water levels measured during the current study or used in previous USGS investigations in the Portland area. Because of the scarcity of well data in some areas, the locations of select surface-water features including major rivers, streams, lakes, wetlands, and springs representative of where the water table is at land surface were used to augment the analysis. Ground-water and surface-water data were combined for use in interpolation of the water-table configuration. Interpolation of the two representations typically used to define water-table position - depth to the water table below land surface and elevation of the water table above a datum - can produce substantially different results and may represent the end members of a spectrum of possible interpolations largely determined by the quantity of recharge and the hydraulic properties of the aquifer. Datasets of depth-to-water and water-table elevation for the current study were interpolated independently based on kriging as the method of interpolation with parameters determined through the use of semivariograms developed individually for each dataset. Resulting interpolations were then combined to create a single, averaged representation of the water-table configuration. Kriging analysis also was used to develop a map of relative uncertainty associated with the values of the water-table position. Accuracy of the depth-to-water and water-table elevation maps is dependent on various factors and assumptions pertaining to the data, the method of interpolation, and the hydrogeologic conditions of the surficial aquifers in the study area. 
Although the water-table configuration maps generally are representative of the conditions in the study area, the actual position of the water-table may differ from the estimated position at site-specific locations, and short-term, seasonal, and long-term variations in the differences also can be expected. The relative uncertainty map addresses some but not all possible errors associated with the analysis of the water-table configuration and does not depict all sources of uncertainty. Depth to water greater than 300 feet in the Portland area is limited to parts of the Tualatin Mountains, the foothills of the Cascade Range, and muc

  10. Flood-hazard mapping in Honduras in response to Hurricane Mitch

    USGS Publications Warehouse

    Mastin, M.C.

    2002-01-01

    The devastation in Honduras due to flooding from Hurricane Mitch in 1998 prompted the U.S. Agency for International Development, through the U.S. Geological Survey, to develop a country-wide systematic approach to flood-hazard mapping and to demonstrate the method at selected sites as part of a reconstruction effort. The design discharge chosen for flood-hazard mapping was the flood with an average return interval of 50 years; this selection was based on discussions with the U.S. Agency for International Development and the Honduran Public Works and Transportation Ministry. A regression equation for estimating the 50-year flood discharge, using drainage area and annual precipitation as the explanatory variables, was developed from data for 34 long-term gaging sites. This equation, which has a standard error of prediction of 71.3 percent, was used in a geographic information system to estimate the 50-year flood discharge at any location on any river in the country. The flood-hazard mapping method was demonstrated at 15 selected municipalities. High-resolution digital elevation models of the floodplains were obtained using an airborne laser-terrain mapping system. Field verification showed that the digital elevation models had mean vertical errors ranging from -0.57 to 0.14 meter. From these models, water-surface elevation cross sections were obtained and used in a numerical, one-dimensional, steady-flow step-backwater model to estimate water-surface profiles corresponding to the 50-year flood discharge. From these water-surface profiles, maps of area and depth of inundation were created at 13 of the 15 selected municipalities. At La Lima, only the area and depth of inundation corresponding to the channel capacity in the city were mapped. At Santa Rose de Aguan, no numerical model was created; there, the 50-year flood and the maps of area and depth of inundation are based on the estimated 50-year storm tide.
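
    A sketch of the regression form described above, fitting ln(Q50) against ln(drainage area) and ln(annual precipitation) by least squares; the calibration data and resulting coefficients below are made-up placeholders, not the report's 34-site regression.

```python
import numpy as np

def fit_flood_regression(area_km2, precip_mm, q50_m3s):
    """Fit a log-linear regional regression Q50 = a * A^b1 * P^b2.

    Mirrors the form of regression described in the report (drainage
    area and annual precipitation as explanatory variables). The demo
    data below are illustrative placeholders only.
    """
    x = np.column_stack([np.ones(len(q50_m3s)),
                         np.log(area_km2), np.log(precip_mm)])
    coef, *_ = np.linalg.lstsq(x, np.log(q50_m3s), rcond=None)
    return coef  # [ln a, b1, b2]

def predict_q50(coef, area_km2, precip_mm):
    ln_a, b1, b2 = coef
    return np.exp(ln_a + b1 * np.log(area_km2) + b2 * np.log(precip_mm))

# Hypothetical calibration data (illustration only)
area = np.array([120.0, 450.0, 980.0, 2300.0])      # km2
precip = np.array([1400.0, 1800.0, 2200.0, 2600.0]) # mm/yr
q50 = np.array([310.0, 900.0, 2100.0, 5200.0])      # m3/s
coef = fit_flood_regression(area, precip, q50)
print(predict_q50(coef, 600.0, 2000.0))
```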

  11. Rapid estimation of recharge potential in ephemeral-stream channels using electromagnetic methods, and measurements of channel and vegetation characteristics

    USGS Publications Warehouse

    Callegary, J.B.; Leenhouts, J.M.; Paretti, N.V.; Jones, Christopher A.

    2007-01-01

    To classify recharge potential (RCP) in ephemeral-stream channels, a method was developed that incorporates information about channel geometry, vegetation characteristics, and bed-sediment apparent electrical conductivity (σa). Recharge potential is not independently measurable, but is instead formulated as a site-specific, qualitative parameter. We used data from 259 transects across two ephemeral-stream channels near Sierra Vista, Arizona, a location with a semiarid climate. Seven data types were collected: σa averaged over two depth intervals (0-3 m and 0-6 m), channel incision depth and width, diameter-at-breast-height of the largest tree, and woody-plant and grass density. A two-tiered system was used to classify a transect's RCP. In the first tier, transects were categorized by estimates of near-surface-sediment hydraulic permeability as low, moderate, or high using measurements of 0-3 m depth σa. Each of these categories was subdivided into low, medium, or high RCP classes using the remaining six data types, thus yielding a total of nine RCP designations. Six sites in the study area were used to compare RCP and σa with previously measured surrogates for hydraulic permeability. Borehole-averaged percent fines showed a moderate correlation with both shallow and deep σa measurements; however, correlation of point measurements of saturated hydraulic conductivity, percent fines, and cylinder infiltrometer measurements with σa and RCP was generally poor. The poor correlation was probably caused by the relatively large measurement volume and spatial averaging of σa compared with the spatially limited point measurements. Because of the comparatively large spatial extent of the measurement transects and the variety of data types collected, RCP estimates can give a more complete picture of the major factors affecting recharge at a site than is possible through point or borehole-averaged estimates of hydraulic permeability alone.

  12. Slip rates and spatially variable creep on faults of the northern San Andreas system inferred through Bayesian inversion of Global Positioning System data

    USGS Publications Warehouse

    Murray, Jessica R.; Minson, Sarah E.; Svarc, Jerry L.

    2014-01-01

    Fault creep, depending on its rate and spatial extent, is thought to reduce earthquake hazard by releasing tectonic strain aseismically. We use Bayesian inversion and a newly expanded GPS data set to infer the deep slip rates below assigned locking depths on the San Andreas, Maacama, and Bartlett Springs Faults of Northern California and, for the latter two, the spatially variable interseismic creep rate above the locking depth. We estimate deep slip rates of 21.5 ± 0.5, 13.1 ± 0.8, and 7.5 ± 0.7 mm/yr below 16 km, 9 km, and 13 km on the San Andreas, Maacama, and Bartlett Springs Faults, respectively. We infer that on average the Bartlett Springs fault creeps from the Earth's surface to 13 km depth, and below 5 km the creep rate approaches the deep slip rate. This implies that microseismicity may extend below the locking depth; however, we cannot rule out the presence of locked patches in the seismogenic zone that could generate moderate earthquakes. Our estimated Maacama creep rate, while comparable to the inferred deep slip rate at the Earth's surface, decreases with depth, implying a slip deficit exists. The Maacama deep slip rate estimate, 13.1 mm/yr, exceeds long-term geologic slip rate estimates, perhaps due to distributed off-fault strain or the presence of multiple active fault strands. While our creep rate estimates are relatively insensitive to choice of model locking depth, insufficient independent information regarding locking depths is a source of epistemic uncertainty that impacts deep slip rate estimates.

  13. Spatially-resolved aircraft-based quantification of methane emissions from the Fayetteville Shale Gas Play

    NASA Astrophysics Data System (ADS)

    Schwietzke, S.; Petron, G.; Conley, S. A.; Karion, A.; Tans, P. P.; Wolter, S.; King, C. W.; White, A. B.; Coleman, T.; Bianco, L.; Schnell, R. C.

    2016-12-01

    Confidence in basin scale oil and gas industry related methane (CH4) emission estimates hinges on an in-depth understanding, objective evaluation, and continued improvements of both top-down (e.g. aircraft measurement based) and bottom-up (e.g. emission inventories using facility- and/or component-level measurements) approaches. Systematic discrepancies of CH4 emission estimates between both approaches in the literature have highlighted research gaps. This paper is part of a more comprehensive study to expand and improve this reconciliation effort for a US dry shale gas play. This presentation will focus on refinements of the aircraft mass balance method to reduce the number of potential methodological biases (e.g. data and methodology). The refinements include (i) an in-depth exploration of the definition of upwind conditions and their impact on calculated downwind CH4 enhancements and total CH4 emissions, (ii) taking into account small but non-zero vertical and horizontal wind gradients in the boundary layer, and (iii) characterizing the spatial distribution of CH4 emissions in the study area using aircraft measurements. For the first time to our knowledge, we apply the aircraft mass balance method to calculate spatially resolved total CH4 emissions for 10 km x 60 km sub-regions within the study area. We identify higher-emitting sub-regions and localize repeating emission patterns as well as differences between days. The increased resolution of the top-down calculation will for the first time allow for an in-depth comparison with a spatially and temporally resolved bottom-up emission estimate based on measurements, concurrent activity data and other data sources.

  14. Algorithms and uncertainties for the determination of multispectral irradiance components and aerosol optical depth from a shipborne rotating shadowband radiometer

    NASA Astrophysics Data System (ADS)

    Witthuhn, Jonas; Deneke, Hartwig; Macke, Andreas; Bernhard, Germar

    2017-03-01

    The 19-channel rotating shadowband radiometer GUVis-3511 built by Biospherical Instruments provides automated shipborne measurements of the direct, diffuse and global spectral irradiance components without a requirement for platform stabilization. Several direct-sun products, including spectral direct beam transmittance, aerosol optical depth, Ångström exponent and precipitable water, can be derived from these observations. The individual steps of the data analysis are described, and the different sources of uncertainty are discussed. The total uncertainty of the observed direct beam transmittances is estimated to be about 4 % for most channels within a 95 % confidence interval for shipborne operation. The calibration is identified as the dominant contribution to the total uncertainty. A comparison of direct beam transmittances with those obtained from a Cimel sunphotometer at a land site and a manually operated Microtops II sunphotometer on a ship is presented. Measurements deviate by less than 3 % on land and 4 % on ship for most channels, in agreement with our uncertainty estimate. These numbers demonstrate that the instrument is well suited for shipborne operation and that the applied motion-correction methods work accurately. Based on spectral direct beam transmittance, aerosol optical depth can be retrieved with an uncertainty of 0.02 for all channels within a 95 % confidence interval. The different methods used to account for Rayleigh scattering and gas absorption in our scheme and in the Aerosol Robotic Network processing for Cimel sunphotometers lead to minor deviations. Relying on the cross-calibration of the 940 nm water vapor channel with the Cimel sunphotometer, the column amount of precipitable water can be estimated with an uncertainty of ±0.034 cm.
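
    The core of the direct-sun retrieval is a Beer-Lambert inversion of the measured transmittance. A minimal sketch, assuming the Hansen and Travis (1974) Rayleigh approximation with surface-pressure scaling; the paper's own Rayleigh and gas corrections differ in detail, as the abstract notes.

```python
import numpy as np

def aerosol_optical_depth(t_direct, airmass, wavelength_um,
                          pressure_hpa=1013.25, tau_gas=0.0):
    """Aerosol optical depth from direct-beam transmittance.

    Beer-Lambert: T = exp(-m * tau_total), so
    tau_aer = -ln(T)/m - tau_Rayleigh - tau_gas. The Rayleigh optical
    depth uses the Hansen & Travis (1974) approximation scaled by
    surface pressure, which is an assumption here, not the paper's
    exact correction scheme.
    """
    lam = wavelength_um
    tau_ray = (0.008569 * lam ** -4 *
               (1.0 + 0.0113 * lam ** -2 + 0.00013 * lam ** -4))
    tau_ray *= pressure_hpa / 1013.25
    return -np.log(t_direct) / airmass - tau_ray - tau_gas

# Illustrative: transmittance 0.75 at 500 nm, airmass 1.2
print(aerosol_optical_depth(0.75, 1.2, 0.50))  # ~0.10 AOD
```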

  15. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X.; Buscheck, T. A.; Mansoor, K.

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers, in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. The 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and a maximum depth of 2000 m. Scatter plots show the MT detection thresholds for each trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume as long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth greater than 1000 m when the change in TDS concentration is smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provide a more realistic view of the leakage plume distribution than the rectangular synthetic plumes in this sensitivity study; they will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity studies of geophysical monitoring methods for leakage detection, and the results allow rapid assessment of leakage detectability.

  16. Estimation of seismic attenuation in carbonate rocks using three different methods: Application on VSP data from Abu Dhabi oilfield

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2016-06-01

    In this study a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium is examined. The relationship estimates the size of heterogeneity that significantly affects wave propagation at a specific frequency, and enables a decrease in the calculation time of wave-scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimation of the attenuation, a robust processing workflow was applied to both synthetic and recorded data to increase the signal-to-noise ratio (SNR). Two conventional methods, the spectral ratio and centroid frequency shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches showed similar variation; however, the interferometry method yielded greater depth resolution, with differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic wave attenuation. The results obtained from the seismic interferometry method show that the estimated scattering attenuation ranges from 0 to 0.1 and that the estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.

  17. Stratospheric aerosol optical depths, 1850-1990

    NASA Technical Reports Server (NTRS)

    Sato, Makiko; Hansen, James E.; Mccormick, M. Patrick; Pollack, James B.

    1993-01-01

    A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over Internet.

  18. Detection scheme for a partially occluded pedestrian based on occluded depth in lidar-radar sensor fusion

    NASA Astrophysics Data System (ADS)

    Kwon, Seong Kyung; Hyun, Eugin; Lee, Jin-Hee; Lee, Jonghun; Son, Sang Hyuk

    2017-11-01

    Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection is still a challenging topic. We propose a new scheme for occluded pedestrian detection by means of lidar-radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. Occluded depth is a new means to determine whether an occluded target exists or not: it is the region projected out by expanding the longitudinal distance while maintaining the angle formed by the two outermost end points of the lidar RoI. The occlusion RoI is the overlapping region made by superimposing the radar RoI and the occluded depth. An object within the occlusion RoI is detected from the radar measurement information, and the occluded object is classified as a pedestrian based on the human Doppler distribution. Additionally, various experiments were performed on detecting a partially occluded pedestrian in outdoor as well as indoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance than the case without the proposed method.

  19. Theoretical study of depth profiling with gamma- and X-ray spectrometry based on measurements of intensity ratios

    NASA Astrophysics Data System (ADS)

    Bártová, H.; Trojek, T.; Johnová, K.

    2017-11-01

    This article describes a method for estimating the depth distribution of radionuclides in a material with gamma-ray spectrometry, and for identifying the layered structure of a material with X-ray fluorescence analysis. The method is based on measuring the ratio of two gamma-ray or X-ray lines of a radionuclide or a chemical element, respectively. Its principle relies on the different attenuation coefficients of the two lines in the measured material. The main aim of this investigation was to show how the detected ratio of these two lines depends on the depth distribution of an analyte and, mainly, how this ratio depends on the density and chemical composition of the measured materials. Several calculation arrangements were devised, and many Monte Carlo simulations with the MCNP (Monte Carlo N-Particle) code (Briesmeister, 2000) were performed to answer these questions. For X-ray spectrometry, the calculated Kα/Kβ diagrams were found to be almost independent of matrix density and composition. Thanks to this behavior, it would be possible to draw a single Kα/Kβ diagram for an element whose depth distribution is examined.
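
    For a thin emitting layer buried under a covering material, the measured line ratio decays exponentially with depth because the two lines are attenuated differently, and inverting that relation is the essence of the method. A minimal sketch, with illustrative attenuation coefficients that are not taken from the article:

```python
import numpy as np

def depth_from_ka_kb_ratio(ratio_measured, ratio_surface, mu_ka, mu_kb):
    """Depth of an emitting element from a measured Ka/Kb line ratio.

    The two lines have different energies, hence different linear
    attenuation coefficients (mu, 1/cm) in the covering material, so
    R(d) = R0 * exp(-(mu_ka - mu_kb) * d) for a thin layer buried at
    depth d (normal emergence assumed). Inverting gives the depth.
    Coefficients below are illustrative, not from the article.
    """
    return np.log(ratio_surface / ratio_measured) / (mu_ka - mu_kb)

# Illustrative: surface ratio 7.0 drops to 5.5 under an absorbing layer
print(depth_from_ka_kb_ratio(5.5, 7.0, mu_ka=12.0, mu_kb=9.0))  # ~0.08 cm
```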

  20. Recent changes in Red Lake (Romania) sedimentation rate determined from depth profiles of 210Pb and 137Cs radioisotopes.

    PubMed

    Begy, R; Cosma, C; Timar, A

    2009-08-01

    This work presents a first estimation of the sedimentation rate for the Red Lake (Romania). The sediment accumulation rates were determined by two well-known methods for recent sediment dating: the (210)Pb and (137)Cs methods. Both techniques used the gamma emission of these radionuclides. The (210)Pb and (137)Cs concentrations in the sediment were measured using a gamma spectrometer with a Gamma-X type HPGe detector. Activities ranging from 41+/-7 to 135+/-34 Bq/kg were found for (210)Pb and from 3+/-0.5 to 1054+/-150 Bq/kg for (137)Cs. The sediment profile indicates an acceleration of the sedimentation rate over the last 18 years. Thus, the sedimentation history of the Red Lake can be divided into two periods: the last 18 years and the period before. Using the Constant Rate of Supply method for (210)Pb, values between 0.18+/-0.04 and 1.85+/-0.5 g/cm(2) year (0.32+/-0.08 and 2.83+/-0.7 cm/year) were obtained. Considering both periods, an average sedimentation rate of 0.87+/-0.17 g/cm(2) year (1.17 cm/year) was calculated. Given an average lake depth of 5.41 m and the sedimentation rate estimated for the last 18 years, it can be estimated that the lake will disappear in 195 years.
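
    A textbook sketch of the Constant Rate of Supply computation, in which the age at depth z is t(z) = (1/lambda) * ln(I(0)/I(z)), with I(z) the cumulative unsupported 210Pb inventory below z; the profile values below are made up and the integration scheme is generic, not the authors' exact computation.

```python
import numpy as np

def crs_ages(dry_mass_g_cm2, unsupported_pb210_bq_kg):
    """Sediment ages (yr) by the Constant Rate of Supply (CRS) model.

    Age at depth z: t(z) = (1/lambda) * ln(I(0) / I(z)), where I(z)
    is the cumulative unsupported 210Pb inventory below z and lambda
    the 210Pb decay constant (half-life 22.3 yr). Inventories are
    integrated with the trapezoidal rule over cumulative dry mass.
    """
    lam = np.log(2.0) / 22.3                              # 1/yr
    c = np.asarray(unsupported_pb210_bq_kg, dtype=float) / 1000.0  # Bq/g
    m = np.asarray(dry_mass_g_cm2, dtype=float)
    # cumulative inventory from the surface down to each horizon (Bq/cm2)
    inv_above = np.concatenate(([0.0], np.cumsum(
        0.5 * (c[1:] + c[:-1]) * np.diff(m))))
    total = inv_above[-1]
    below = total - inv_above
    with np.errstate(divide="ignore"):
        return np.log(total / below) / lam  # deepest horizon -> inf

mass = np.array([0.0, 2.0, 4.5, 10.0, 16.0])      # cumulative dry mass
pb210 = np.array([120.0, 90.0, 60.0, 25.0, 8.0])  # unsupported, Bq/kg
print(np.round(crs_ages(mass, pb210), 1))  # ages in years, surface = 0
```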
