Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both model-based methods show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera.
These images can be synthesized to be totally in focus, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable. Furthermore, the light-field information promises better tracking capabilities than in the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
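The Kalman-like, variance-weighted update of a per-pixel depth hypothesis described above can be sketched as inverse-variance fusion; the function name and the numeric values here are illustrative, not taken from the paper:

```python
def fuse_depth(d1, var1, d2, var2):
    """Inverse-variance (Kalman-like) fusion of two depth estimates.

    The fused variance is always smaller than either input variance,
    so every additional micro-image observation tightens the estimate.
    """
    w = var2 / (var1 + var2)             # weight on the first estimate
    d = w * d1 + (1.0 - w) * d2          # precision-weighted mean
    var = (var1 * var2) / (var1 + var2)  # fused (reduced) variance
    return d, var

# Sequentially update one pixel's virtual depth with estimates from
# several binocular micro-image stereo pairs (illustrative values).
d, var = 4.0, 1.0
for obs, obs_var in [(3.6, 0.5), (3.9, 0.25), (3.7, 0.5)]:
    d, var = fuse_depth(d, var, obs, obs_var)
```

Each update pulls the estimate toward the more reliable observation while monotonically shrinking the variance, which is what makes the resulting depth map probabilistic.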
Technique for estimating depth of floods in Tennessee
Gamble, C.R.
1983-01-01
Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon
2018-01-01
Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
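The OBRA step can be sketched as an exhaustive search over band pairs for the log-ratio predictor that best explains the field depths; the synthetic reflectances below are purely illustrative:

```python
import numpy as np

def obra(bands, depths):
    """Optimal Band Ratio Analysis (simplified sketch).

    bands : (n_pixels, n_bands) reflectance at surveyed locations
    depths: (n_pixels,) field-measured depths
    Tries every band pair, regresses depth on X = ln(b_i / b_j),
    and keeps the pair with the highest R^2.
    """
    n_bands = bands.shape[1]
    best_pair, best_r2, best_fit = None, -np.inf, None
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(bands[:, i] / bands[:, j])
            slope, intercept = np.polyfit(x, depths, 1)
            pred = slope * x + intercept
            ss_res = np.sum((depths - pred) ** 2)
            ss_tot = np.sum((depths - depths.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if r2 > best_r2:
                best_pair, best_r2, best_fit = (i, j), r2, (slope, intercept)
    return best_pair, best_r2, best_fit

# Synthetic check: a depth linear in ln(b0/b1) should be recovered.
rng = np.random.default_rng(0)
b = rng.uniform(0.1, 1.0, size=(50, 3))
d = 2.0 * np.log(b[:, 0] / b[:, 1]) + 1.0
pair, r2, fit = obra(b, d)
```

In practice the calibrated slope and intercept are then applied to every image pixel to produce the bathymetry map.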
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
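A minimal sketch of the combined approach follows, with a one-split regression tree standing in for the full binary decision tree and inverse-distance weighting standing in for kriging (both are our simplifications, not the authors' implementation):

```python
import numpy as np

def fit_stump(x, y):
    """One-split regression tree (stand-in for a full binary decision tree)."""
    best_split, best_sse = None, np.inf
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_split, best_sse = (t, left.mean(), right.mean()), sse
    return best_split

def stump_predict(split, x):
    t, lo, hi = split
    return np.where(x < t, lo, hi)

def idw(xy_known, r_known, xy_query, power=2.0):
    """Inverse-distance interpolation of residuals (stand-in for kriging)."""
    out = np.empty(len(xy_query))
    for k, q in enumerate(xy_query):
        dist = np.linalg.norm(xy_known - q, axis=1)
        if dist.min() < 1e-12:            # exact match: measured residual
            out[k] = r_known[dist.argmin()]
        else:
            w = 1.0 / dist ** power
            out[k] = (w * r_known).sum() / w.sum()
    return out

# Synthetic survey: snow depth driven by an elevation step plus smooth
# small-scale variation in space.
rng = np.random.default_rng(1)
elev = rng.uniform(3000.0, 3600.0, 40)
xy = rng.uniform(0.0, 1.0, (40, 2))
depth = np.where(elev > 3300.0, 2.0, 1.0) + 0.3 * np.sin(6.0 * xy[:, 0])
split = fit_stump(elev, depth)
resid = depth - stump_predict(split, elev)       # small-scale component
combined = stump_predict(split, elev) + idw(xy, resid, xy)
```

The tree captures the large-scale, physically driven variation; interpolating its residuals spatially recovers the small-scale structure, exactly mirroring the two-stage model in the abstract.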
Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.
Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît
2011-01-01
Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules possible. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
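The surface-restricted computation can be sketched as a multi-source Dijkstra run over the mesh edge graph, seeded at the convex-hull vertices; the toy graph below is illustrative, not an actual SES mesh:

```python
import heapq

def travel_depth(adjacency, hull_vertices):
    """Multi-source Dijkstra over surface mesh edges.

    adjacency: {vertex: [(neighbor, edge_length), ...]}
    hull_vertices: vertices lying on the convex hull (depth 0 sources).
    Returns a travel-depth value for every vertex of the surface.
    """
    depth = {v: float("inf") for v in adjacency}
    heap = []
    for v in hull_vertices:
        depth[v] = 0.0
        heapq.heappush(heap, (0.0, v))
    while heap:
        dist, v = heapq.heappop(heap)
        if dist > depth[v]:
            continue  # stale heap entry
        for u, w in adjacency[v]:
            nd = dist + w
            if nd < depth[u]:
                depth[u] = nd
                heapq.heappush(heap, (nd, u))
    return depth

# Toy mesh: a chain of vertices descending into a surface "pocket".
adj = {
    0: [(1, 1.0)],
    1: [(0, 1.0), (2, 2.0)],
    2: [(1, 2.0), (3, 1.5)],
    3: [(2, 1.5)],
}
d = travel_depth(adj, hull_vertices=[0])
```

Restricting the graph to surface vertices is what removes the cost of the interior and exterior volume samples processed by the earlier methods.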
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth is difficult because no consensus on PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
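As one concrete example of the definitional spread, a bulk-Richardson-number PBL depth can be sketched as follows; the critical value 0.25 and the toy profile are illustrative, and GEOS-5's exact formulation may differ:

```python
import numpy as np

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """PBL depth as the lowest level where the bulk Richardson number,
    computed relative to the surface level, first exceeds ri_crit.

    z, theta_v, u, v: height (m), virtual potential temperature (K),
    and wind components (m/s), ordered from the surface upward.
    Linearly interpolates between the two bracketing levels.
    """
    ri = np.zeros_like(z, dtype=float)
    wind2 = u ** 2 + v ** 2
    ri[1:] = ((g / theta_v[0]) * (theta_v[1:] - theta_v[0])
              * (z[1:] - z[0]) / np.maximum(wind2[1:], 1e-6))
    for k in range(1, len(z)):
        if ri[k] >= ri_crit:
            f = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
            return z[k - 1] + f * (z[k] - z[k - 1])
    return z[-1]  # PBL top above the profile

# Toy convective profile with an inversion above 500 m.
z = np.array([10.0, 100.0, 500.0, 1000.0, 1500.0])
theta_v = np.array([300.0, 300.0, 300.2, 302.0, 305.0])
u = np.array([2.0, 3.0, 5.0, 6.0, 7.0])
v = np.zeros(5)
h = pbl_depth_bulk_ri(z, theta_v, u, v)
```

Swapping `ri_crit`, the reference level, or the interpolation rule already shifts the diagnosed depth by tens to hundreds of meters, which is the sensitivity the climatology comparison quantifies.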
Event-Based Stereo Depth Estimation Using Belief Propagation.
Xie, Zhen; Chen, Shengyong; Orchard, Garrick
2017-01-01
Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are not typically suitable for these event-based data and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to consider the constraints between the nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.
Radiance Assimilation Shows Promise for Snowpack Characterization: A 1-D Case Study
NASA Technical Reports Server (NTRS)
Durand, Michael; Kim, Edward; Margulis, Steve
2008-01-01
We demonstrate an ensemble-based radiometric data assimilation (DA) methodology for estimating snow depth and snow grain size using ground-based passive microwave (PM) observations at 18.7 and 36.5 GHz collected during the NASA CLPX-1, March 2003, Colorado, USA. A land surface model was used to develop a prior estimate of the snowpack states, and a radiative transfer model was used to relate the modeled states to the observations. Snow depth bias was -53.3 cm prior to the assimilation, and -7.3 cm after the assimilation. Snow depth estimated by a non-DA-based retrieval algorithm using the same PM data had a bias of -18.3 cm. The sensitivity of the assimilation scheme to the grain size uncertainty was evaluated; over the range of grain size uncertainty tested, the posterior snow depth estimate bias ranges from -2.99 cm to -9.85 cm, which is uniformly better than both the prior and retrieval estimates. This study demonstrates the potential applicability of radiometric DA at larger scales.
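The ensemble update at the heart of such a radiometric DA scheme can be sketched with a toy linear "radiative transfer" relation; the numbers, the linear Tb-depth mapping, and the variances are our assumptions, not CLPX values:

```python
import numpy as np

def enkf_update(states, predicted_tb, tb_obs, obs_var):
    """Scalar ensemble Kalman update of snow depth from one Tb observation.

    states:       prior snow-depth ensemble (land surface model spread)
    predicted_tb: Tb each member maps to via the radiative transfer model
    """
    cov_x_tb = np.cov(states, predicted_tb)[0, 1]
    gain = cov_x_tb / (np.var(predicted_tb, ddof=1) + obs_var)
    return states + gain * (tb_obs - predicted_tb)

rng = np.random.default_rng(0)
depth_prior = rng.normal(0.6, 0.2, 500)   # m, deliberately biased shallow
# Toy forward model: deeper snow scatters more, lowering Tb.
tb = 260.0 - 40.0 * depth_prior + rng.normal(0.0, 2.0, 500)
# Observation consistent with a 1.0 m truth under the same toy model.
depth_post = enkf_update(depth_prior, tb, tb_obs=220.0, obs_var=4.0)
```

The observation pulls the biased prior toward the truth while shrinking the ensemble spread, mirroring the bias reduction from -53.3 cm to -7.3 cm reported above.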
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination: we find reliability-based reweighting but not statistically optimal cue combination.
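The statistically optimal (minimum-variance) combination these models posit is simply inverse-variance weighting; the slant values and variances below are illustrative:

```python
def combine_cues(estimates, variances):
    """Minimum-variance linear cue combination: weights proportional
    to each cue's reliability 1/sigma^2, summing to one."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_var = 1.0 / total   # lower than any single cue's variance
    return fused, weights, fused_var

# Visual slant (reliable) and haptic slant (less reliable), in degrees.
fused, weights, fused_var = combine_cues([30.0, 40.0], [4.0, 16.0])
```

The empirical finding above is that observers' weights move in the direction these formulas predict as reliability changes, but do not reach the exact optimal values.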
A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Zhu, Youding; Fujimura, Kikuo
2010-01-01
This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
NASA Astrophysics Data System (ADS)
Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno
2014-03-01
Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusing depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining depth sensing and disparity estimation on the fly, in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and improves the subjective rendering quality.
The volume and mean depth of Earth's lakes
NASA Astrophysics Data System (ADS)
Cael, B. B.; Heathcote, A. J.; Seekell, D. A.
2017-01-01
Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.
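The volume-area scaling can be sketched as follows; the exponent form follows the idea that mean depth scales like area to the power H/2, but the prefactor `c` and the Hurst value used here are illustrative placeholders, not the paper's fitted coefficients:

```python
def lake_volume(area_km2, hurst=0.4, c=0.1):
    """Hurst-coefficient volume-area scaling (illustrative parameters).

    Mean depth scales like A**(H/2), so volume scales like A**(1 + H/2):
    larger lakes are disproportionately deeper, and volume grows
    superlinearly with surface area.
    """
    area_m2 = area_km2 * 1e6
    mean_depth_m = c * area_m2 ** (hurst / 2.0)
    volume_m3 = mean_depth_m * area_m2
    return volume_m3, mean_depth_m

v_small, d_small = lake_volume(1.0)     # 1 km^2 lake
v_large, d_large = lake_volume(100.0)   # 100 km^2 lake
```

Summing such per-lake volumes over a global census of lake areas is what yields the aggregate volume and mean depth estimates quoted above.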
Evaluation of bursal depth as an indicator of age class of harlequin ducks
Mather, D.D.; Esler, Daniel N.
1999-01-01
We contrasted the estimated age class of recaptured Harlequin Ducks (Histrionicus histrionicus) (n = 255) based on bursal depth with expected age class based on bursal depth at first capture and time since first capture. Although neither estimated nor expected ages can be assumed to be correct, rates of discrepancies between the two for within-year recaptures indicate sampling error, while between-year recaptures test assumptions about rates of bursal involution. Within-year, between-year, and overall discrepancy rates were 10%, 24%, and 18%, respectively. Most (86%) between-year discrepancies occurred for birds expected to be after-third-year (ATY) but estimated to be third-year (TY). Of these ATY-TY discrepancies, 22 of 25 (88%) birds had bursal depths of 2 or 3 mm. Further, five of six between-year recaptures that were known to be ATY but estimated to be TY had 2 mm bursas. Reclassifying birds with 2 or 3 mm bursas as ATY resulted in reduction in between-year (24% to 10%) and overall (18% to 11%) discrepancy rates. We conclude that age determination of Harlequin Ducks based on bursal depth, particularly using our modified criteria, is a relatively consistent and reliable technique.
Temporal Surface Reconstruction
1991-05-03
…and the convergence cannot be guaranteed. Maybank [68] investigated alternative incremental schemes for the estimation of feature locations from a…
…depth from image sequences. International Journal of Computer Vision, 3, 1989.
[68] S. J. Maybank. Filter based estimates of depth. In Proceedings of the…
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative estimation of the depth of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides a complete depth scan of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield a finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
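The spectral zooming that the chirp z-transform provides can be illustrated with a brute-force zoom DFT, which performs the same fine-grid evaluation without the CZT's fast algorithm; the signal, band edges, and grid density are illustrative:

```python
import numpy as np

def zoom_dft(signal, fs, f_lo, f_hi, n_out):
    """Evaluate the DTFT on a fine grid inside [f_lo, f_hi] (Hz).

    A brute-force stand-in for the chirp z-transform: identical spectral
    zooming, but O(N * n_out) instead of the CZT's FFT-based cost.
    """
    n = np.arange(len(signal))
    freqs = np.linspace(f_lo, f_hi, n_out)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ signal

# A 10.37 Hz tone observed for 1 s: plain FFT bins sit 1 Hz apart, but
# the zoomed grid localizes the peak to a few hundredths of a hertz.
fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 10.37 * t)
freqs, spec = zoom_dft(x, fs, 8.0, 12.0, 4001)
peak = freqs[np.argmax(np.abs(spec))]
```

Since depth in thermal wave imaging maps to stimulus frequency, a finer frequency grid over the band of interest translates directly into a finer depth scan.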
NASA Astrophysics Data System (ADS)
Kelly, R. E. J.; Saberi, N.; Li, Q.
2017-12-01
With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. 
Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. First, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance between the 83 feature points of the image to be estimated and the corresponding feature points (before dimension reduction) of each cluster center is calculated, and the category of the image is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimate at greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the chosen category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
Deep learning-based depth estimation from a synthetic endoscopy image training set
NASA Astrophysics Data System (ADS)
Mahmood, Faisal; Durr, Nicholas J.
2018-03-01
Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
The maximum economic depth of groundwater abstraction for irrigation
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. 
In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
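The break-even logic can be sketched with a toy cost model; all prices, the amortization scheme, and the parameter names here are illustrative assumptions, not the calibrated cost and revenue sub-models of the study:

```python
def max_economic_depth(crop_revenue, water_demand, energy_cost_per_m3_per_m,
                       drilling_cost_per_m, amortization_years=20.0,
                       d_max=1000.0, dz=1.0):
    """Deepest water table (m) for which irrigation still pays.

    crop_revenue            : $/ha/yr from the irrigated crops
    water_demand            : m3/ha/yr gross irrigation demand
    energy_cost_per_m3_per_m: $ to lift 1 m3 of water by 1 m
    drilling_cost_per_m     : $ per metre of well, amortized over its life
    Scans depth until yearly costs first exceed yearly revenues.
    """
    d = 0.0
    while d <= d_max:
        pumping = water_demand * energy_cost_per_m3_per_m * d
        capital = drilling_cost_per_m * d / amortization_years
        if pumping + capital > crop_revenue:
            return d
        d += dz
    return d_max

depth = max_economic_depth(crop_revenue=2000.0, water_demand=8000.0,
                           energy_cost_per_m3_per_m=0.002,
                           drilling_cost_per_m=100.0)
```

Because both cost terms grow roughly linearly with depth while revenue is fixed by the crop, the dominant crop type (through `crop_revenue` and `water_demand`) controls the break-even depth, consistent with the findings above.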
Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.
Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso
2018-07-01
There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced an increase of such magnitude. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database attending to their structural similarity, and then creates a representative of each color-depth image cluster that will be used as prior depth map. The selection of the appropriate prior depth map corresponding to one given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimation is enhanced through a segmentation-guided filtering that obtains the final depth map estimation. This approach has been tested using two publicly available databases, and compared with several state-of-the-art algorithms in order to prove its efficiency.
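The cluster-then-match idea can be sketched with a tiny two-cluster k-means and scalar "depth maps"; the feature construction and all numbers are stand-ins for the paper's learned descriptor combination:

```python
import numpy as np

def two_means(x, iters=50):
    """Minimal 2-means, seeded with the first point and its farthest point."""
    j = np.argmax(((x - x[0]) ** 2).sum(axis=1))
    centers = np.stack([x[0], x[j]])
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean(axis=0)
    return centers, labels

def prior_depth(query_feat, centers, cluster_depths):
    """Return the representative depth of the structurally closest cluster."""
    c = np.argmin(((centers - query_feat) ** 2).sum(-1))
    return cluster_depths[c]

# Two structural families of images: 'near' scenes and 'far' scenes,
# each summarized here by a 2-D color-structure feature and a mean depth.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                   rng.normal(5.0, 0.3, (20, 2))])
depths = np.concatenate([np.full(20, 1.0), np.full(20, 9.0)])
centers, labels = two_means(feats)
cluster_depths = np.array([depths[labels == c].mean() for c in range(2)])
prior = prior_depth(np.array([5.1, 4.9]), centers, cluster_depths)
```

In the full pipeline each cluster representative is an entire depth map rather than a scalar, and the prior is then refined by segmentation-guided filtering.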
NASA Astrophysics Data System (ADS)
Palevsky, Hilary I.; Doney, Scott C.
2018-05-01
Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
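The sensitivity to the depth-horizon choice can be illustrated with the classic Martin power-law flux profile; the power-law form and the coefficients below are an assumption for illustration, not the model's simulated flux profiles:

```python
def martin_flux(z, f100=6.0, b=0.858):
    """Sinking POC flux (arbitrary units) at depth z (m), Martin-curve form
    F(z) = F100 * (z/100)^-b: flux attenuates with depth below the surface."""
    return f100 * (z / 100.0) ** (-b)

def e_ratio(z, npp=60.0):
    """Export efficiency: POC flux at the chosen depth horizon over production."""
    return martin_flux(z) / npp
```

Evaluating these at 50 m versus 100 m shows immediately why a shallower horizon yields both a larger flux and a larger e-ratio from the same profile.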
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a weighting matrix derived from data depth is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space and ground combined TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric estimator, and outliers are suppressed with the data-depth weighting matrix. With the model error and outliers thus restrained, the VCE can be improved and used to estimate the weighting matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out for a space and ground combined TT&C scenario. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlier data.
The Effect of Finite Thickness Extent on Estimating Depth to Basement from Aeromagnetic Data
NASA Astrophysics Data System (ADS)
Blakely, R. J.; Salem, A.; Green, C. M.; Fairhead, D.; Ravat, D.
2014-12-01
Depth to basement estimation methods using various components of the spectral content of magnetic anomalies are in common use by geophysicists. Examples are the Tilt-Depth and SPI methods. These methods use simple models having the base of the magnetic body at infinity. Recent publications have shown that this 'infinite depth' assumption causes underestimation of the depth to the top of sources, especially in areas where the bottom of the magnetic layer is shallow, as would occur in high heat-flow regions. This error has been demonstrated both in model studies and using real data with seismic or well control. To overcome the limitation of infinite depth, this contribution presents the mathematics for a finite-depth contact body in the Tilt-Depth and SPI methods and applies it to the central Red Sea, where the Curie isotherm and Moho are shallow. The difference in the depth estimation between the infinite and finite contacts in such a case is significant and can exceed 200%.
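For the infinite-extent contact model that underlies the standard Tilt-Depth method, the tilt angle over a vertical contact at depth z_c is arctan(h/z_c), where h is horizontal distance from the contact, so the spacing between the 0° and 45° tilt contours equals the depth. A sketch of that relation (the finite-extent correction presented in this contribution modifies it):

```python
import math

def tilt_angle_infinite_contact(h, zc):
    """Theoretical tilt angle (degrees) over a vertical magnetic contact of
    infinite depth extent, at horizontal distance h from a contact at depth zc:
    theta = arctan(h / zc)."""
    return math.degrees(math.atan2(h, zc))
```

When the body bottom is shallow, the observed tilt profile is narrower than this model predicts, which is why the infinite-depth assumption underestimates source depths.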
Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.
Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves
2016-09-01
This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position in range and depth are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on data from an experimental tank.
A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-02-08
Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple measurements obtained with a radiation detector at the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
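The one-dimensional core of an attenuation-based depth estimate can be sketched with the Beer-Lambert law; the paper's actual model is an approximate three-dimensional one fitted to multiple surface measurements, which this single-measurement inversion does not capture:

```python
import math

def burial_depth(count_rate, unattenuated_rate, mu):
    """Invert I = I0 * exp(-mu * d) for burial depth d (cm), given the
    linear attenuation coefficient mu (1/cm) of the overlying medium."""
    return math.log(unattenuated_rate / count_rate) / mu
```

Because mu differs between media (e.g. sand vs. concrete) and between isotopes' gamma energies, the same count-rate ratio maps to different depths in different materials.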
Huizinga, Richard J.; Rydlund, Jr., Paul H.
2004-01-01
The evaluation of scour at bridges throughout the state of Missouri has been ongoing since 1991 in a cooperative effort by the U.S. Geological Survey and Missouri Department of Transportation. A variety of assessment methods have been used to identify bridges susceptible to scour and to estimate scour depths. A potential-scour assessment (Level 1) was used at 3,082 bridges to identify bridges that might be susceptible to scour. A rapid estimation method (Level 1+) was used to estimate contraction, pier, and abutment scour depths at 1,396 bridge sites to identify bridges that might be scour critical. A detailed hydraulic assessment (Level 2) was used to compute contraction, pier, and abutment scour depths at 398 bridges to determine which bridges are scour critical and would require further monitoring or application of scour countermeasures. The rapid estimation method (Level 1+) was designed to be a conservative estimator of scour depths compared to depths computed by a detailed hydraulic assessment (Level 2). Detailed hydraulic assessments were performed at 316 bridges that also had received a rapid estimation assessment, providing a broad data base to compare the two scour assessment methods. The scour depths computed by each of the two methods were compared for bridges that had similar discharges. For Missouri, the rapid estimation method (Level 1+) did not provide a reasonable conservative estimate of the detailed hydraulic assessment (Level 2) scour depths for contraction scour, but the discrepancy was the result of using different values for variables that were common to both of the assessment methods. The rapid estimation method (Level 1+) was a reasonable conservative estimator of the detailed hydraulic assessment (Level 2) scour depths for pier scour if the pier width is used for piers without footing exposure and the footing width is used for piers with footing exposure. 
Detailed hydraulic assessment (Level 2) scour depths were conservatively estimated by the rapid estimation method (Level 1+) for abutment scour, but there was substantial variability in the estimates and several substantial underestimations.
Flaw depth sizing using guided waves
NASA Astrophysics Data System (ADS)
Cobb, Adam C.; Fisher, Jay L.
2016-02-01
Guided wave inspection technology is most often applied as a survey tool for pipeline inspection, where relatively low-frequency ultrasonic waves, compared to those used in conventional ultrasonic nondestructive evaluation (NDE) methods, propagate along the structure; discontinuities reflect the sound back to the sensor for flaw detection. Although the technology can accurately locate a flaw over long distances, the flaw sizing performance, especially for flaw depth estimation, is much poorer than that of other, local NDE approaches. Estimating flaw depth, as opposed to other parameters, is of particular interest for failure analysis of many structures. At present, most guided wave technologies estimate the size of the flaw from the amplitude of its reflected signal compared to a reflection from a known geometry, such as a circumferential weld in a pipeline. This process, however, requires many assumptions to be made, such as weld geometry and flaw shape. Furthermore, it is highly dependent on the amplitude of the flaw reflection, which can vary with many factors, such as attenuation and sensor installation. To improve sizing performance, especially depth estimation, in a way that is not strictly amplitude dependent, this paper describes an approach to estimate the depth of a flaw based on a multimodal analysis. This approach eliminates the need for geometric reflections for calibration and can be used for both pipeline and plate inspection applications. To verify the approach, a test set was manufactured on plate specimens with flaws of different widths and depths ranging from 5% to 100% of total wall thickness; 90% of these flaws were sized to within 15% of their true value. The initial multimodal sizing strategy and results are described.
Estimation of global snow cover using passive microwave data
NASA Astrophysics Data System (ADS)
Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.
2003-04-01
This paper describes an approach to estimate global snow cover using satellite passive microwave data. Snow cover is detected using the high-frequency scattering signal from natural microwave radiation, which is observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent using the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky particle theory to predict the brightness temperature from a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested using ground measurements from two continental-scale river catchments (the Nelson River and the Ob River in Russia). This regional-scale testing of the algorithm shows that, for passive microwave estimates, the average daily snow depth retrieval standard error between estimated and measured snow depths ranges from 0 cm to 40 cm for point observations. Bias characteristics are different for each basin. A fraction of the error is related to uncertainties about the grain growth initialization states and about grain size changes through the winter season, which directly affect the parameterization of the snow depth estimation in the DMRT model. Also, the algorithm does not include a correction for forest cover, and this effect is clearly observed in the retrieval. Finally, error is also related to scale differences between in situ ground measurements and area-integrated satellite estimates. 
With AMSR-E data, improvements to snow depth and water equivalent estimates are expected since AMSR-E will have twice the spatial resolution of the SSM/I and will be able to characterize better the subnivean snow environment from an expanded range of microwave frequencies.
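Conceptually, such retrievals build on the spectral-difference form introduced by Chang et al., in which deeper snow scatters more 37 GHz than 19 GHz radiation; the static coefficient below is the classic fixed value, whereas the algorithm described here effectively evolves the relationship with the modeled grain size:

```python
def snow_depth_spectral_difference(tb19h, tb37h, coeff=1.59):
    """Snow depth (cm) from horizontally polarized brightness temperatures (K):
    the 19-37 GHz difference grows as deeper snow scatters more 37 GHz energy."""
    return max(coeff * (tb19h - tb37h), 0.0)
```

The retrieval's sensitivity to grain size is visible here: a larger effective grain increases 37 GHz scattering, so a fixed coefficient would overestimate depth late in the season.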
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Impact of planetary boundary layer turbulence on model climate and tracer transport
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.; Ott, L. E.; Pawson, S.
2014-12-01
Planetary boundary layer (PBL) processes are important for weather, climate, and tracer transport and concentration. One measure of the strength of these processes is the PBL depth. However, no single PBL depth definition exists and several studies have found that the estimated depth can vary substantially based on the definition used. In the Goddard Earth Observing System (GEOS-5) atmospheric general circulation model, the PBL depth is particularly important because it is used to calculate the turbulent length scale that is used in the estimation of turbulent mixing. This study analyzes the impact of using three different PBL depth definitions in this calculation. Two definitions are based on the scalar eddy diffusion coefficient and the third is based on the bulk Richardson number. Over land, the bulk Richardson number definition estimates shallower nocturnal PBLs than the other estimates while over water this definition generally produces deeper PBLs. The near-surface wind velocity, temperature, and specific humidity responses to the change in turbulence are spatially and temporally heterogeneous, resulting in changes to tracer transport and concentrations. Near-surface wind speed increases in the bulk Richardson number experiment cause Saharan dust increases on the order of 1 × 10^-4 kg m^-2 downwind over the Atlantic Ocean. Carbon monoxide (CO) surface concentrations are modified over Africa during boreal summer, producing differences on the order of 20 ppb, due to the model's treatment of emissions from biomass burning. While differences in carbon dioxide (CO2) are small in the time mean, instantaneous differences are on the order of 10 ppm and these are especially prevalent at high latitude during boreal winter. Understanding the sensitivity of trace gas and aerosol concentration estimates to PBL depth is important for studies seeking to calculate surface fluxes based on near-surface concentrations and for studies projecting future concentrations.
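The bulk Richardson number definition can be sketched as follows: scan a profile upward and report the first level where Rib exceeds a critical value. The 0.25 threshold and the simplified Rib formula below are common textbook choices, not necessarily GEOS-5's exact formulation:

```python
import numpy as np

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """First level (m) where the bulk Richardson number exceeds ri_crit.
    Rib(z) = (g / theta_v0) * (theta_v(z) - theta_v0) * (z - z0) / (u^2 + v^2)."""
    g = 9.81
    shear_sq = np.maximum(u**2 + v**2, 1e-6)   # guard against calm layers
    rib = g / theta_v[0] * (theta_v - theta_v[0]) * (z - z[0]) / shear_sq
    above = np.where(rib > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]
```

In stable nocturnal conditions the virtual potential temperature increases sharply just above the surface, so this definition finds a shallow PBL, consistent with the behavior over land described above.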
Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. 
These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.
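The modern analogue technique used here as a cross-check can be sketched directly; the squared chord distance and k = 3 below are common conventions in the paleoecology literature, not necessarily the exact settings of this study:

```python
import numpy as np

def mat_depth(fossil_assemblage, modern_assemblages, modern_depths, k=3):
    """Modern analogue technique: paleo-water depth estimated as the mean depth
    of the k modern samples most similar to the fossil sample, where similarity
    is the squared chord distance between relative-abundance vectors."""
    d = np.sum((np.sqrt(modern_assemblages) - np.sqrt(fossil_assemblage))**2,
               axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(modern_depths[nearest]))
```

The same distance also flags no-analogue situations: a fossil sample whose smallest distance is large has a poor fit, which is how samples were screened out above.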
Estimating floodwater depths from flood inundation maps and topography
Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi
2018-01-01
Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events, a large-scale event for which we use medium resolution input layer (10 m) and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features that include a narrow valley, a large reservoir, and an urban setting. The results show FwDET can accurately calculate floodwater depth for diverse flooding scenarios but also leads to considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher spatial resolution input is required.
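The core FwDET idea, assigning each flooded cell the elevation of its nearest boundary cell as the local water surface and subtracting the DEM, can be sketched with a brute-force nearest-neighbor search. In this toy version any dry cell stands in for the flood-extent boundary; the released tool instead uses GIS allocation operations on the inundation polygon boundary:

```python
import numpy as np

def fwdet_depth(dem, flooded):
    """Floodwater depth per cell: water surface elevation taken from the
    nearest dry (boundary) cell, minus the cell's DEM elevation."""
    rows, cols = dem.shape
    boundary = [(i, j) for i in range(rows) for j in range(cols)
                if not flooded[i, j]]
    depth = np.zeros_like(dem, dtype=float)
    for i in range(rows):
        for j in range(cols):
            if flooded[i, j]:
                bi, bj = min(boundary,
                             key=lambda p: (p[0] - i)**2 + (p[1] - j)**2)
                depth[i, j] = max(dem[bi, bj] - dem[i, j], 0.0)
    return depth
```

The clamp to zero mirrors the bias noted above: where the inundation extent does not align with the DEM, the boundary elevation can fall below the flooded cell, yielding no usable depth.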
Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin
Casto, Daniel W.
2001-01-01
Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
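For the simple analytic signal method over a 2-D contact, the amplitude profile has the form A(x) = K / sqrt(x^2 + z^2), so it falls to peak/sqrt(2) at a horizontal distance equal to the depth z, which is the basis of half-width depth rules. A sketch of that inversion on a synthetic profile (real grids need the filtering and re-gridding precautions described above):

```python
import numpy as np

def contact_depth_from_half_width(x, amp):
    """Depth to a 2-D contact from its analytic-signal profile: the amplitude
    drops to peak/sqrt(2) at a horizontal distance equal to the source depth."""
    peak_i = int(np.argmax(amp))
    target = amp[peak_i] / np.sqrt(2.0)
    half_i = int(np.argmin(np.abs(amp - target)))
    return abs(x[half_i] - x[peak_i])
```

On noisy data the peak and half-amplitude points must be picked from smoothed derivative grids, which is exactly why careless processing produces the false depth estimates noted in the study.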
NASA Astrophysics Data System (ADS)
Haskell, William Z.; Fleming, John C.
2018-07-01
Net community production (NCP) represents the amount of biologically produced organic carbon that is available to be exported out of the surface ocean and is typically estimated using measurements of the O2/Ar ratio in the surface mixed layer under the assumption of negligible vertical transport. However, physical processes can significantly bias NCP estimates based on this in situ tracer. It is actively debated whether discrepancies between O2/Ar-based NCP and carbon export estimates are due to differences in the location of biological production and export, or the result of physical biases. In this study, we calculate export production across the euphotic depth during two months of upwelling in Southern California in 2014, based on an estimate of the consumption rate of dissolved organic carbon (DOC) and the dissolved:total organic carbon consumption ratio below the euphotic depth. This estimate equals the concurrent O2/Ar-based NCP estimates over the same period that are corrected for physical biases, but is significantly different from NCP estimated without a correction for vertical transport. This comparison demonstrates that concurrent physical transport estimates would significantly improve O2/Ar-based estimates of NCP, particularly in settings with vertical advection. Potential approaches to mitigate this bias are discussed.
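The standard mixed-layer tracer estimate being corrected here has the form NCP ≈ k_O2 · [O2]_sat · Δ(O2/Ar). The sketch below uses illustrative units and deliberately omits the vertical-transport correction terms that are the paper's focus:

```python
def ncp_o2ar(delta_o2ar, k_o2, o2_sat):
    """NCP (mmol O2 m^-2 d^-1) from the biological supersaturation Delta(O2/Ar)
    (dimensionless), gas transfer velocity k_o2 (m/d) and O2 saturation
    concentration (mmol/m^3); valid only if vertical transport is negligible."""
    return k_o2 * o2_sat * delta_o2ar
```

In an upwelling setting, low-O2/Ar water advected into the mixed layer lowers Δ(O2/Ar), so this uncorrected form underestimates NCP, which is the bias quantified above.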
NASA Astrophysics Data System (ADS)
Dilbone, Elizabeth K.
Methods for spectrally based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship. For this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed-versus-predicted R2 (0.81). Although misalignment between field and image data was not problematic for OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
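OBRA's band-pair search reduces to exhaustive linear regression of depth against the log band ratio, keeping the pair with the highest R^2. A minimal sketch (real implementations also mask shaded or mixed pixels and may fit higher-order polynomials):

```python
import numpy as np

def obra(depths, spectra):
    """Optimal Band Ratio Analysis: for every band pair (i, j), regress depth
    against X = ln(R_i / R_j); return (best R^2, best pair, its coefficients)."""
    n_bands = spectra.shape[1]
    best = (-np.inf, None, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(spectra[:, i] / spectra[:, j])
            A = np.vstack([x, np.ones_like(x)]).T
            coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
            pred = A @ coef
            ss_res = np.sum((depths - pred) ** 2)
            ss_tot = np.sum((depths - depths.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if r2 > best[0]:
                best = (r2, (i, j), coef)
    return best
```

Because the calibration pairs image pixels with surveyed depths, any field-image misalignment corrupts the regression, which is the limitation for dynamic channels noted above.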
Estimation of subsurface thermal structure using sea surface height and sea surface temperature
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2012-01-01
A method of determining a subsurface temperature in a body of water is disclosed. The method includes obtaining surface temperature anomaly data and surface height anomaly data of the body of water for a region of interest, and also obtaining subsurface temperature anomaly data for the region of interest at a plurality of depths. The method further includes regressing the obtained surface temperature anomaly data and surface height anomaly data for the region of interest with the obtained subsurface temperature anomaly data for the plurality of depths to generate regression coefficients, estimating a subsurface temperature at one or more other depths for the region of interest based on the generated regression coefficients and outputting the estimated subsurface temperature at the one or more other depths. Using the estimated subsurface temperature, signal propagation times and trajectories of marine life in the body of water are determined.
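The per-depth regression described in this patent abstract can be sketched as an ordinary least-squares fit of the subsurface temperature anomaly on the two surface anomalies; variable names below are illustrative:

```python
import numpy as np

def fit_depth_regression(ssha, ssta, sub_t_anom):
    """Fit T'(z) = a*SSHA + b*SSTA + c at one depth; returns (a, b, c)."""
    A = np.column_stack([ssha, ssta, np.ones_like(ssha)])
    coef, *_ = np.linalg.lstsq(A, sub_t_anom, rcond=None)
    return coef

def estimate_subsurface(coef, ssha, ssta):
    """Apply the fitted coefficients to new surface anomaly observations."""
    return coef[0] * ssha + coef[1] * ssta + coef[2]
```

One set of coefficients is fitted per depth level, so stacking the estimates over all levels reconstructs the subsurface thermal structure from surface fields alone.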
NASA Astrophysics Data System (ADS)
Stachnik, J.; Rozhkov, M.; Baker, B.; Bobrov, D.; Friberg, P. A.
2015-12-01
Depth of event is an important criterion for seismic event screening at the International Data Center (IDC), CTBTO. However, a thorough determination of the event depth can mostly be conducted only through special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have their depth constrained to the surface. When the true origin depth is greater than that reasonable for a nuclear test (3 km, based on existing observations), this may result in a heavier workload to manually distinguish between shallow and deep events. Also, the IDC depth criterion is not applicable to events with a small t(pP-P) travel-time difference, which is the case for nuclear tests. Since the shape of the first few seconds of the signal of very shallow events is very sensitive to the presence of the depth phase, cross-correlation between observed and theoretical seismograms can provide an estimate of the event depth and so extend the screening process. We exercised this approach mostly with events at teleseismic and, in part, regional distances. We found that this approach can be very efficient for the seismic event screening process, with certain caveats related mostly to poorly defined crustal models at the source and receiver, which can shift the depth estimate. We used an adjustable t* teleseismic attenuation model for the synthetics, since this characteristic is not determined for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNEs) with presumably known depths, and recent DPRK nuclear tests. The teleseismic synthetic approach is based on the stationary phase approximation with Robert Herrmann's hudson96 program, and the regional modelling was done with the generalized ray technique of Vlastislav Cerveny, modified to handle complex source topography.
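The depth sensitivity exploited here comes from the surface-reflected depth phase: for a simple homogeneous source layer, t(pP-P) ≈ 2 h cos(i) / v_p, so even sub-second delays constrain shallow depths. A sketch of that inversion (single-layer assumption; the paper uses full waveform synthetics rather than this closed form):

```python
import math

def depth_from_pP_delay(dt, v_p=6.0, takeoff_deg=20.0):
    """Source depth (km) from the pP-P delay dt (s), P velocity v_p (km/s),
    and takeoff angle from vertical, for a homogeneous source layer."""
    return dt * v_p / (2.0 * math.cos(math.radians(takeoff_deg)))
```

For very shallow events the delay is so small that pP merges with P in the waveform, which is why cross-correlating against depth-dependent synthetics outperforms picking t(pP-P) directly.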
The design and implementation of postprocessing for depth map on real-time extraction system.
Tang, Zhiwei; Li, Bin; Li, Huosheng; Xu, Zheng
2014-01-01
Depth estimation has become a key technology for stereo-vision communication. A real-time depth map can be obtained in hardware, but hardware cannot implement algorithms as complicated as software can because of restrictions in the hardware structure. Consequently, some incorrect stereo matches inevitably occur during depth estimation in hardware such as an FPGA. To solve this problem, a postprocessing function is designed in this paper. After a matching-cost uniqueness test, both left-right and right-left consistency checks are implemented; the holes in the depth maps are then filled with correct depth values on the basis of the right-left consistency check. The experimental results show that depth-map extraction and postprocessing can be implemented in real time in the same system, and the quality of the resulting depth maps is satisfactory.
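A minimal sketch of the left-right consistency check and subsequent hole filling described above, assuming simple integer disparity maps. The tolerance and the nearest-valid-neighbor fill are illustrative choices, not the paper's FPGA design.

```python
import numpy as np

def lr_consistency_fill(disp_l, disp_r, tol=1):
    """Keep a left-map disparity d at column x only if the right map at
    column x-d agrees within `tol`; fill the resulting holes with the
    nearest valid disparity to the right."""
    h, w = disp_l.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_l[y, x])
            xr = x - d
            if 0 <= xr < w and abs(int(disp_r[y, xr]) - d) <= tol:
                valid[y, x] = True
    filled = disp_l.astype(float)
    for y in range(h):
        last = np.nan
        for x in range(w - 1, -1, -1):   # scan right to left
            if valid[y, x]:
                last = filled[y, x]
            elif not np.isnan(last):
                filled[y, x] = last      # fill hole with nearest valid value
    return filled, valid
```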
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for deep mountain snowpacks using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation based on satellite PM radiance assimilation, the proposed method uses a single-flight observation and deploys snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational cost. The approach combines a particle filter using combinations of multiple PM frequencies with a multilayer snow physical model (Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce prior snow depth biases. When applying the retrieval algorithm with a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), RMSE values were reduced by 48% at snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method showed sensitivity to different combinations of frequencies, model stratigraphy (i.e., the number of layers in the snow physical model), and estimation methods (particle filter versus Kalman filter). Prior RMSE values at forest-covered areas were reduced by 37-42% even in the presence of forest cover.
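A single particle-filter analysis step of the kind described above can be sketched as follows. This is a generic weight-and-resample update under a Gaussian observation error assumption; `predict_tb` is a hypothetical stand-in for the Crocus snow model plus a microwave emission model.

```python
import numpy as np

def particle_filter_update(depth_particles, predict_tb, tb_obs, sigma_tb):
    """Weight prior snow-depth particles by the likelihood of multifrequency
    brightness-temperature observations, then resample."""
    particles = np.asarray(depth_particles, dtype=float)
    w = np.empty(len(particles))
    for i, d in enumerate(particles):
        resid = np.asarray(predict_tb(d)) - np.asarray(tb_obs)
        # Gaussian likelihood over all PM channels, assumed independent
        w[i] = np.exp(-0.5 * np.sum((resid / sigma_tb) ** 2))
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```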
NASA Astrophysics Data System (ADS)
Han, Cheongho
2005-11-01
Currently, gravitational microlensing survey experiments toward the Galactic bulge field use two different methods of minimizing the blending effect for accurate determination of the optical depth τ. One measures τ based on clump giant (CG) source stars; the other uses "difference image analysis" (DIA) photometry to measure the unblended source flux variation. Although the two estimates should agree if blending is properly accounted for, the estimates based on CG stars systematically fall below the DIA results based on all events with source stars down to the detection limit. Prompted by this gap, we investigate the previously unconsidered effect of companion-associated events on the τ determination. Although the image of a companion is blended with that of its primary star and thus not resolved, an event associated with the companion can be detected if the companion flux is highly magnified. Companions therefore work effectively as source stars for microlensing, and neglecting them in the source-star count could bias the τ estimate. By carrying out simulations under the assumption that companions follow the same luminosity function as primary stars, we estimate that the contribution of companion-associated events to the total event rate is ~5f_bi% for current surveys and can reach ~6f_bi% for future surveys monitoring fainter stars, where f_bi is the binary frequency. We therefore conclude that companion-associated events comprise a non-negligible fraction of all events; however, their contribution to the optical depth is not large enough to explain the systematic difference between the optical depth estimates from the two methods.
A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion
NASA Astrophysics Data System (ADS)
Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen
2017-09-01
In recent years, 3D movies have attracted increasing attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information for 2D-to-3D conversion of video is increasingly important. In this paper, we present a novel algorithm that estimates depth information from a video via a scene classification algorithm. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For landscape images, we divide the image into blocks and assign depth values using the relative-height cue. For close-up images, a saliency-based method enhances the foreground, and the result is combined with a global depth gradient to generate the final depth map. For linear-perspective images, vanishing-line detection locates the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; the rest of the image is assigned depth according to each point's distance from the vanishing point. Finally, depth-image-based rendering with a bilateral filter generates the stereoscopic virtual views. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.
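The linear-perspective branch above can be sketched as a distance-to-vanishing-point depth map. The linear scaling to an 8-bit depth range is an illustrative assumption, not the paper's exact mapping.

```python
import numpy as np

def depth_from_vanishing_point(h, w, vp, max_depth=255):
    """Assign each pixel a depth proportional to its distance from the
    vanishing point vp=(x, y), the farthest point from the viewer."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return (max_depth * dist / dist.max()).astype(np.uint8)
```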
The determination of total burn surface area: How much difference?
Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P
2013-09-01
Burn depth and burn size are crucial determinants when assessing patients suffering from burns, so a correct evaluation of these factors is essential for choosing the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable differences among clinicians. This work investigated accuracy among experts using conventional surface estimation methods (e.g., the "Rule of Palm", "Rule of Nines", or Lund-Browder chart), compared to a computer-based evaluation method. Survey data were collected during one national and one international burn conference. The poll confirmed deviations of burn depth/size estimates of up to 62% relative to the mean value of all participants, and overestimation of up to 161% compared to the computer-based method. We suggest introducing improved methods for burn depth/size assessment into clinical routine in order to allocate and distribute the available burn-care resources efficiently. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
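The adult Rule of Nines mentioned above assigns fixed percentages of total body surface area to body regions. A minimal sketch using the standard adult values (region names and the fractional-involvement interface are illustrative):

```python
# standard adult Rule of Nines percentages; regions sum to 100%
RULE_OF_NINES = {
    "head": 9, "left_arm": 9, "right_arm": 9,
    "anterior_trunk": 18, "posterior_trunk": 18,
    "left_leg": 18, "right_leg": 18, "perineum": 1,
}

def tbsa(burned):
    """Total body surface area burned (%); `burned` maps a region name
    to the fraction (0..1) of that region involved."""
    return sum(RULE_OF_NINES[r] * f for r, f in burned.items())
```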
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.
Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui
2017-01-01
Underwater inherent optical properties (IOPs) are fundamental to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal instruments for measuring IOPs, but they are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method that uses only a single underwater image, with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning. However, image-based IOP estimation is a quite different and challenging task: unlike traditional applications such as image classification or localization, it must infer multiple optical properties simultaneously from the transparency of the water between the camera and the target objects. In this paper, we propose a novel Depth-Aided (DA) deep neural network structure for IOP estimation from a single RGB image, even a noisy one; the imaging depth information is used as an auxiliary input to help the model make better decisions.
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-07-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
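One common formulation of the bulk-Richardson-number PBL depth recommended above can be sketched as follows. The critical value 0.25 and the surface-based formulation are conventional illustrative choices, not GEOS-5's exact scheme.

```python
import numpy as np

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """PBL depth as the first level where the bulk Richardson number,
    computed from the surface to level z[k], exceeds ri_crit; the crossing
    height is refined by linear interpolation between levels."""
    g = 9.81
    z, theta_v, u, v = map(np.asarray, (z, theta_v, u, v))
    ri_prev = 0.0
    for k in range(1, len(z)):
        du2 = (u[k] - u[0]) ** 2 + (v[k] - v[0]) ** 2
        ri = g * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (
            theta_v[0] * max(du2, 1e-6))
        if ri > ri_crit:
            f = (ri_crit - ri_prev) / (ri - ri_prev)   # interpolate crossing
            return z[k - 1] + f * (z[k] - z[k - 1])
        ri_prev = ri
    return z[-1]   # Ri never exceeds threshold within the profile
```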
Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods
Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...
2017-04-03
This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner than GPR and probing, while yielding high precision (RMSE = 6.0 cm) and fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR), as well as the topographic correlation, for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm) at 0.5 m resolution over the lidar domain (750 m × 700 m).
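The Gaussian (conjugate) update at the core of such a Bayesian integration can be sketched for a single location as follows. This is a one-pixel illustration combining a topography-based prior with one depth measurement, not the paper's full multiscale scheme.

```python
def bayes_snow_depth(prior_mean, prior_var, obs, obs_var):
    """Gaussian Bayesian update: combine a prior snow depth (e.g. from a
    regression on the snow-free lidar DEM) with a probe or GPR measurement."""
    k = prior_var / (prior_var + obs_var)        # Kalman-style gain
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1.0 - k) * prior_var             # variance always shrinks
    return post_mean, post_var
```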
Research on bathymetry estimation by Worldview-2 based with the semi-analytical model
NASA Astrophysics Data System (ADS)
Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.
2015-04-01
The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over sensitive disputed areas of interest. Methods to obtain reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and water depth. Targeting the water conditions of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model of the theoretical interpretation models is studied, using a genetic algorithm to optimize the model; OpenMP parallel computing is introduced to greatly increase the speed of the semi-analytical optimization. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The model thus solves the problem of bathymetry estimation without water depth measurements and provides a new bathymetry estimation method for sensitive reefs far from the mainland.
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
NASA Astrophysics Data System (ADS)
Simeonov, J.; Czapiga, M. J.; Holland, K. T.
2017-12-01
We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope, and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to relate depth quantitatively to the measured velocity field. The ratio of the discharge to the bottom friction enters as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in sand-bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size, the ratio of shear velocity to sediment fall velocity, or the Froude number. For a given roughness parameterization, the inversion solution is determined iteratively, since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand-bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.
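The mass-conservation half of such a velocity-depth relationship can be illustrated per unit channel width: with unit discharge q and a measured surface velocity, depth follows directly. The surface-to-depth-averaged velocity factor alpha here is an assumed constant for illustration, not the paper's calibration.

```python
def depth_from_velocity(q_unit, u_surface, alpha=0.85):
    """Invert q = alpha * u_s * h for local depth h, where q_unit is the
    discharge per unit width, u_s the measured surface velocity, and alpha
    converts surface velocity to the depth-averaged value."""
    return q_unit / (alpha * u_surface)
```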
NASA Astrophysics Data System (ADS)
Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.
2014-12-01
In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite retrieves high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to convert the measured depths to snow water equivalent (SWE). Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly for snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, where snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin of the Sierra Nevada, and to sparse field density measurements. The objective is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced in the snow depth to SWE conversion.
NASA Astrophysics Data System (ADS)
Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.
2014-06-01
Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically-based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the coterminous United States. Independent validation data is scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS-estimates over the entire study area for the dates of the LiDAR flights. The remotely-sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required; accurate pose estimates increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on depth sensors alone are not suitable for AR applications. This paper proposes an automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape-index-map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively in the rendered scenes.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free-surface reflections) are the primary tool seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found from the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Cepstral methods therefore cannot be used independently to reliably identify depth phases; other evidence, such as apparent velocities, amplitudes, and frequency content, must be used to confirm whether a phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool that allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques are: (1) direct focal depth estimation from the depth-phase arrival times picked via augmented cepstral processing; (2) hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001); and (3) surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement among the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods had large standard errors.
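A standard way to fuse independent estimates with their standard errors into one statistically weighted value, as described above, is inverse-variance weighting. A minimal sketch (the abstract does not specify the exact combination rule, so this is an assumed textbook form):

```python
import numpy as np

def unified_depth(estimates, std_errs):
    """Combine independent focal-depth estimates via inverse-variance
    weighting; returns the weighted mean and its standard error."""
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2
    d = np.asarray(estimates, dtype=float)
    mean = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))   # combined estimate is always tighter
    return mean, se
```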
Sedimentary basins reconnaissance using the magnetic Tilt-Depth method
Salem, A.; Williams, S.; Samson, E.; Fairhead, D.; Ravat, D.; Blakely, R.J.
2010-01-01
We compute the depth to the top of magnetic basement using the Tilt-Depth method from the best available magnetic anomaly grids covering the continental USA and Australia. For the USA, the Tilt-Depth estimates were compared with sediment thicknesses based on drilling data and show a correlation of 0.86 between the datasets. If random data are used instead, the correlation drops to virtually zero. There is little to no lateral offset of the depth of basinal features, although there is a tendency for the Tilt-Depth results to be slightly shallower than the drill depths. We also applied the Tilt-Depth method to a local-scale, relatively high-resolution aeromagnetic survey over the Olympic Peninsula of Washington State. The Tilt-Depth method successfully identified a variety of important tectonic elements known from geological mapping. Of particular interest, the Tilt-Depth method illuminated deep (3 km) contacts within the non-magnetic sedimentary core of the Olympic Mountains, where magnetic anomalies are subdued and low in amplitude. For Australia, the Tilt-Depth estimates also give a good correlation with known areas of shallow basement and sedimentary basins. Our estimates of basement depth are not restricted to regional analysis but work equally well at the micro scale (basin scale), with depth estimates agreeing well with drill hole and seismic data. We focus on the eastern Officer Basin as an example of basin-scale studies and find a good level of agreement with previously derived basin models. However, our study potentially reveals depocentres not previously mapped due to the sparse distribution of well data. This example thus shows the potential additional advantage of the method in geological interpretation. The success of this study suggests that the Tilt-Depth method is useful in estimating the depth to crystalline basement when appropriate quality aeromagnetic anomaly data are used (i.e. 
line spacing on the order of or less than the expected depth to basement). The method is especially valuable as a reconnaissance tool in regions where drillhole or seismic information is scarce, lacking, or ambiguous.
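As a hedged sketch of the profile form of the technique: for a vertical contact at depth z, the tilt angle θ = arctan(vertical gradient / |horizontal gradient|) reduces to arctan(x/z), so the depth equals the horizontal distance between the θ = 0° and θ = 45° crossings. The analytic gradients below are a synthetic illustration, not survey data:

```python
import numpy as np

def tilt_depth_profile(x, dT_dx, dT_dz):
    """Tilt-Depth sketch along a profile: depth to a vertical contact is the
    horizontal distance between the 0- and 45-degree tilt crossings."""
    theta = np.degrees(np.arctan2(dT_dz, np.abs(dT_dx)))  # tilt angle
    x0 = np.interp(0.0, theta, x)    # theta increases with x for this model
    x45 = np.interp(45.0, theta, x)
    return x45 - x0

# Analytic anomaly gradients of a vertical contact at depth z = 2 km
z = 2.0
x = np.linspace(-10.0, 10.0, 2001)
dT_dx = z / (x ** 2 + z ** 2)   # horizontal gradient (arbitrary scaling)
dT_dz = x / (x ** 2 + z ** 2)   # vertical gradient (Hilbert transform pair)
depth = tilt_depth_profile(x, dT_dx, dT_dz)
```

The recovered depth matches the assumed 2 km; on gridded data the same crossings are traced as contours of the tilt grid.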
Pfitzner, Christian; May, Stefan; Nüchter, Andreas
2018-04-24
This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients.
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically-averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally-distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates and cannot be recovered. 
Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
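The continuity part of the inversion can be illustrated with a one-dimensional sketch: for a known discharge per unit width q, mass conservation gives depth h = q/u from the vertically averaged velocity u. (The paper inverts the full 2D mass and momentum equations; the values below are hypothetical.)

```python
def depth_from_continuity(q, u):
    """1D sketch of the mass-conservation inversion: h = q / u, where q is
    the discharge per unit width and u the vertically averaged velocity."""
    return [q / ui for ui in u]

# Hypothetical along-stream velocities (m/s) for q = 2.0 m^2/s
depths = depth_from_continuity(2.0, [0.5, 1.0, 2.0])
```

This also makes the error-propagation point visible: a fractional error in velocity maps directly into a fractional error of the same size in depth.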
NASA Astrophysics Data System (ADS)
Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew
2008-10-01
The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry is a significant source of uncertainty in estimating discharge from WSE measurements, however. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric slope and depth can be estimated to within 3.0 microradians and 50 cm, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
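The assimilation step can be sketched as a scalar ensemble Kalman update of an uncertain depth from one WSE measurement, assuming a toy linear depth-to-WSE operator in place of LISFLOOD-FP; all numbers below are illustrative:

```python
import numpy as np

def enkf_update(ensemble, predicted_obs, obs, obs_var, rng):
    """Ensemble Kalman update of an uncertain parameter (here, bathymetric
    depth) from one observation: a sketch of the assimilation step only."""
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
    cov_xy = np.cov(ensemble, predicted_obs)[0, 1]     # state-obs covariance
    var_y = np.var(predicted_obs, ddof=1) + obs_var    # innovation variance
    gain = cov_xy / var_y
    return ensemble + gain * (perturbed - predicted_obs)

rng = np.random.default_rng(0)
depths = rng.normal(5.0, 1.0, 200)              # prior depth ensemble (m)
wse = 0.5 * depths + rng.normal(0, 0.1, 200)    # toy depth-to-WSE operator
post = enkf_update(depths, wse, 0.5 * 6.0, 0.05, rng)
```

The posterior ensemble shifts toward the depth implied by the observation and its spread shrinks, which is the mechanism the paper exploits to recover bathymetry from WSE.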
Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.
Li, Jielin; Hassebrook, Laurence G; Guan, Chun
2003-01-01
Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
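The unwrapping step can be sketched as follows: the non-ambiguous low-frequency phase selects the fringe order of the precise high-frequency phase. The position-recovery code below assumes ideal, noise-free phases (temporal noise would perturb `order`, which is exactly the failure mode the paper analyzes):

```python
import math

def two_frequency_position(phase_low, phase_high, f_low, f_high):
    """Two-frequency phase-measurement sketch. Phases are in radians,
    frequencies in fringes per field of view; returns a position in [0, 1)."""
    # Coarse absolute position from the non-ambiguous low-frequency phase
    coarse = phase_low / (2 * math.pi * f_low)
    # Fringe order making the high-frequency phase consistent with it
    order = round(coarse * f_high - phase_high / (2 * math.pi))
    return (phase_high + 2 * math.pi * order) / (2 * math.pi * f_high)

# True position 0.30 of the field of view, f_low = 1, f_high = 16
pos = two_frequency_position(0.30 * 2 * math.pi,
                             (0.30 * 16 % 1) * 2 * math.pi, 1, 16)
```

If `f_high` is too large, noise in `coarse` flips `order` by one fringe; if too small, the final division amplifies the phase noise directly, which is the trade-off the optimized second frequency balances.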
Gestalt grouping via closure degrades suprathreshold depth percepts.
Deas, Lesley M; Wilcox, Laurie M
2014-08-19
It is well known that the perception of depth is susceptible to changes in configuration. For example, stereoscopic precision for a pair of vertical lines can be dramatically reduced when these lines are connected to form a closed object. Here, we extend this paradigm to suprathreshold estimates of perceived depth. Using a touch sensor, observers made quantitative estimates of depth between a vertical line pair presented in isolation or as edges of a closed rectangular object with different figural interpretations. First, we show that the amount of depth estimated within a closed rectangular object is consistently reduced relative to the vertical edges presented in isolation or when they form the edges of two segmented objects. We then demonstrate that the reduction in perceived depth for closed objects is modulated by manipulations that influence perceived closure of the central figure. Depth percepts were most disrupted when the horizontal connectors and vertical lines matched in color. Perceived depth increased slightly when the connectors had opposite contrast polarity, but increased dramatically when flankers were added. Thus, as grouping cues were added to counter the interpretation of a closed object, the depth degradation effect was systematically eliminated. The configurations tested here rule out explanations based on early, local interactions such as inhibition or cue conflict; instead, our results provide strong evidence of the impact of Gestalt grouping, via closure, on depth magnitude percepts from stereopsis.
Estimating Snow Water Storage in North America Using CLM4, DART, and Snow Radiance Data Assimilation
NASA Technical Reports Server (NTRS)
Kwon, Yonghwan; Yang, Zong-Liang; Zhao, Long; Hoar, Timothy J.; Toure, Ally M.; Rodell, Matthew
2016-01-01
This paper addresses continental-scale snow estimates in North America using a recently developed snow radiance assimilation (RA) system. A series of RA experiments with the ensemble adjustment Kalman filter are conducted by assimilating the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) brightness temperature T(sub B) at 18.7- and 36.5-GHz vertical polarization channels. The overall RA performance in estimating snow depth for North America is improved by simultaneously updating the Community Land Model, version 4 (CLM4), snow/soil states and radiative transfer model (RTM) parameters involved in predicting T(sub B) based on their correlations with the prior T(sub B) (i.e., rule-based RA), although degradations are also observed. The RA system exhibits a more mixed performance for snow cover fraction estimates. Compared to the open-loop run (0.171 m RMSE), the overall snow depth estimates are improved by 1.6% (0.168 m RMSE) in the rule-based RA, whereas the default RA (without a rule) results in a degradation of 3.6% (0.177 m RMSE). Significant improvement of the snow depth estimates in the rule-based RA is observed for the tundra snow class (11.5%, p < 0.05) and the bare soil land-cover type (13.5%, p < 0.05). However, the overall improvement is not significant (p = 0.135) because snow estimates are degraded or only marginally improved for other snow classes and land covers, especially the taiga snow class and forest land cover (7.1% and 7.3% degradations, respectively). The current RA system needs to be further refined to enhance snow estimates for various snow types and forested regions.
Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)
Legleiter, Carl
2016-01-01
Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
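The core quantile mapping of IDQT can be sketched in a few lines, assuming (for illustration) that pixel value increases monotonically with depth in the aggregate; the arrays below are toy values, not the study's hyperspectral data:

```python
import numpy as np

def idqt(pixel_values, depth_sample):
    """Image-to-Depth Quantile Transformation sketch: each pixel receives the
    depth occupying the same quantile of the depth distribution as the pixel
    value occupies in the image distribution (an aspatial calibration)."""
    ranks = np.argsort(np.argsort(pixel_values))        # rank of each pixel
    quantiles = (ranks + 0.5) / len(pixel_values)       # its image quantile
    return np.quantile(depth_sample, quantiles)         # matching depth

vals = np.array([10.0, 40.0, 25.0, 5.0])       # image-derived variable
depths = np.array([0.2, 0.5, 1.0, 2.5, 3.0])   # field-measured depth sample
est = idqt(vals, depths)
```

Because only the two frequency distributions are linked, no spatial co-registration between field and image data is needed, which is the flexibility the abstract emphasizes.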
Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang
2013-09-13
Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks.
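The regression stage can be sketched as the forward pass of a Gaussian RBF network; the centres and weights below are illustrative placeholders, not trained values, and the wavelet-packet/fractal feature extraction is outside this sketch:

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """Gaussian radial-basis-function network sketch: hidden units respond to
    the distance between the feature vector and their centres; the output
    (e.g. a crack depth) is a weighted sum of those responses."""
    act = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
    return float(act @ weights)

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # illustrative feature centres
widths = np.array([1.0, 1.0])
weights = np.array([2.0, 4.0])                 # e.g. depths tied to centres
pred = rbf_predict(np.array([0.0, 0.0]), centers, widths, weights)
```

In practice the centres and weights would be fitted to the echo-signal feature vectors and known crack depths/orientations from the calibration set.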
NASA Astrophysics Data System (ADS)
Cael, B. B.
How much water do lakes on Earth hold? Global lake volume estimates are scarce, highly variable, and poorly documented. We develop a mechanistic null model for estimating global lake mean depth and volume based on a statistical topographic approach to Earth's surface. The volume-area scaling prediction is accurate and consistent within and across lake datasets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles. We also evaluate the size (area) distribution of lakes on Earth compared to expectations from percolation theory. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2388357.
NASA Astrophysics Data System (ADS)
Butler, S. L.
2017-12-01
The electrical resistivity method is now highly developed with 2D and even 3D surveys routinely performed and with available fast inversion software. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function which must be calculated numerically for a general four electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.
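The moment computations reduce to weighted integrals of the sensitivity function. A numerical sketch on a uniform depth grid, using an illustrative sensitivity curve (not the formula for any specific electrode array):

```python
import numpy as np

def sensitivity_moments(z, s):
    """First and second moments of a depth-sensitivity function on a uniform
    grid: the first moment is the sensitivity-weighted mean depth (a
    depth-of-investigation estimate); the square root of the second central
    moment measures the vertical extent of the contributing region."""
    dz = z[1] - z[0]
    w = s / (s.sum() * dz)                         # normalize to unit area
    z1 = float((z * w).sum() * dz)                 # first moment
    z2 = float(((z - z1) ** 2 * w).sum() * dz)     # second central moment
    return z1, float(np.sqrt(z2))

# Illustrative curve: s(z) = z * exp(-z/a) has analytic mean depth 2a and
# spread a*sqrt(2), so the numerics can be checked against closed forms
a = 5.0
z = np.linspace(0.0, 60.0, 6001)
s = z * np.exp(-z / a)
zbar, spread = sensitivity_moments(z, s)
```

For real arrays the same two numbers come from closed-form expressions of the half-space sensitivity, which is what makes them usable as rules of thumb.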
Wang, Yearnchee Curtis; Chan, Terence Chee-Hung; Sahakian, Alan Varteres
2018-01-04
Radiofrequency ablation (RFA), a method of inducing thermal ablation (cell death), is often used to destroy tumours or potentially cancerous tissue. Current techniques for RFA estimation (electrical impedance tomography, Nakagami ultrasound, etc.) require long compute times (≥ 2 s) and measurement devices other than the RFA device. This study aims to determine if a neural network (NN) can estimate ablation lesion depth for control of bipolar RFA using complex electrical impedance - since tissue electrical conductivity varies as a function of tissue temperature - in real time using only the RFA therapy device's electrodes. Three-dimensional, cubic models comprised of beef liver, pork loin or pork belly represented target tissue. Temperature and complex electrical impedance from 72 data generation ablations in pork loin and belly were used for training the NN (403 s on a Xeon processor). NN inputs were inquiry depth, starting complex impedance and current complex impedance. Training-validation-test splits were 70%-0%-30% and 80%-10%-10% (overfit test). Once the NN-estimated lesion depth for a margin reached the target lesion depth, RFA was stopped for that margin of tissue. The NN trained to 93% accuracy, and an NN-integrated control ablated tissue to within 1.0 mm of the target lesion depth on average. Full 15-mm depth maps were calculated in 0.2 s on a single-core ARMv7 processor. The results show that an NN can make lesion depth estimations in real time using fewer in situ devices than current techniques. With the NN-based technique, physicians could deliver quicker and more precise ablation therapy.
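The estimator's forward pass can be sketched with a small feed-forward network whose inputs are the inquiry depth and the starting/current complex impedance, as described above; the weights below are illustrative placeholders, not the trained values from the study:

```python
import numpy as np

def nn_depth_output(inquiry_depth, z_start, z_now, W1, b1, W2, b2):
    """Forward pass of a small feed-forward network. Complex impedances are
    split into real/imaginary parts to form the input vector."""
    x = np.array([inquiry_depth,
                  z_start.real, z_start.imag,
                  z_now.real, z_now.imag])
    h = np.tanh(W1 @ x + b1)          # hidden layer
    return float(W2 @ h + b2)         # scalar output for this inquiry depth

# Placeholder weights (NOT trained values); impedances in ohms
W1 = 0.1 * np.ones((2, 5))
b1 = np.zeros(2)
W2 = np.array([1.0, 1.0])
b2 = 0.0
out = nn_depth_output(5.0, 50.0 - 10.0j, 30.0 - 5.0j, W1, b1, W2, b2)
```

A depth map is then built by evaluating this pass over a grid of inquiry depths, and the controller stops ablating a margin once the estimate reaches the target depth.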
NASA Astrophysics Data System (ADS)
Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.
2013-07-01
Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based JAVA code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all having equal widths but each possessing its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to the basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates the initial depths of a sedimentary basin and improves them, iteratively, based on the differences between the observed and modeled gravity anomalies within the specified convergence criteria. The present code, which follows the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs/modifies the regional gravity background in an interactive approach, estimates residual gravity anomalies and performs automatic modeling or inversion based on user specification for the basement topography. Besides generating output in both ASCII and graphical forms, the code displays (i) the changes in the depth structure, (ii) the nature of fit between the observed and modeled gravity anomalies, (iii) changes in misfit, and (iv) the variation of density contrast with iteration in animated forms. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information that is consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example. 
The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
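The iterative depth improvement resembles a Bott-style update, in which each prism's thickness is corrected by the local anomaly misfit divided by the infinite-slab response. A sketch with a constant density contrast (the paper itself uses an exponential contrast-depth function, which changes the slab factor):

```python
import math

def bott_update(depths, observed, modeled, density_contrast):
    """One Bott-style iteration for a basin modeled as a row of prisms:
    depth correction = anomaly misfit / (2*pi*G*drho). Units: m, m/s^2,
    kg/m^3; drho is negative for sediments lighter than basement."""
    G = 6.674e-11                                   # m^3 kg^-1 s^-2
    slab = 2.0 * math.pi * G * density_contrast     # infinite-slab factor
    return [z + (go - gm) / slab
            for z, go, gm in zip(depths, observed, modeled)]

# One update for a single 1000 m prism; the "observed" anomaly is that of a
# 1200 m slab, so the corrected depth should move to 1200 m
slab = 2.0 * math.pi * 6.674e-11 * (-400.0)
new_depths = bott_update([1000.0], [slab * 1200.0], [slab * 1000.0], -400.0)
```

Iterating this correction until the misfit falls below a tolerance is the essence of the automatic inversion loop the abstract describes.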
Impact of Planetary Boundary Layer Depth on Climatological Tracer Transport in the GEOS-5 AGCM
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2013-12-01
Planetary boundary layer (PBL) processes have large implications for tropospheric tracer transport since surface fluxes are diluted by the depth of the PBL through vertical mixing. However, no consensus on PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to diagnose PBL depth and produce climatologies that are evaluated here. All seven methods evaluate a single atmosphere so differences are related solely to the definition chosen. PBL depths that are estimated using a Richardson number are shallower than those given by methods based on the scalar diffusivity during warm, moist conditions at midday and collapse to lower values at night. In GEOS-5, the PBL depth is used in the estimation of the turbulent length scale and so impacts vertical mixing. Changing the method used to determine the PBL depth for this length scale thus changes the tracer transport. Using a bulk Richardson number method instead of a scalar diffusivity method produces changes in the quantity of Saharan dust lofted into the free troposphere and advected to North America, with more surface dust in North America during boreal summer and less in boreal winter. Additionally, greenhouse gases are considerably impacted. During boreal winter, changing the PBL depth definition produces carbon dioxide differences of nearly 5 ppm over Siberia and gradients of about 5 ppm over 1000 km in Europe. PBL depth changes are responsible for surface carbon monoxide changes of 20 ppb or more over the biomass burning regions of Africa.
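The bulk-Richardson-number definition mentioned above can be sketched as a scan for the first model level at which Ri_b, computed from the virtual potential temperature and wind difference relative to the surface, exceeds a critical value (0.25 here); the profile is idealized:

```python
def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """Bulk-Richardson PBL depth sketch: returns the height of the first
    level where Ri_b > ri_crit. z in m, theta_v in K, winds in m/s."""
    g = 9.81
    for k in range(1, len(z)):
        du2 = (u[k] - u[0]) ** 2 + (v[k] - v[0]) ** 2
        if du2 == 0:
            continue  # Ri_b undefined with no shear
        ri = g * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (theta_v[0] * du2)
        if ri > ri_crit:
            return z[k]
    return z[-1]

# Idealized profile: well mixed to about 1 km, stable above
z = [10, 250, 500, 750, 1000, 1250, 1500]
theta_v = [300, 300, 300, 300, 300, 303, 306]
u = [2, 5, 6, 6.5, 7, 7.5, 8]
v = [0, 0, 0, 0, 0, 0, 0]
h = pbl_depth_bulk_richardson(z, theta_v, u, v)
```

Scalar-diffusivity definitions instead locate the top of the layer where mixing is strong, which is why the two families of estimates can differ by hundreds of metres, as the abstract notes.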
Improved depth estimation with the light field camera
NASA Astrophysics Data System (ADS)
Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field display.
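The cue-combination step can be sketched as a confidence-weighted average of the per-pixel defocus, correspondence and camera-provided depth maps; the paper's actual combination is more sophisticated, and the arrays below are illustrative:

```python
import numpy as np

def combine_depth_cues(depth_maps, confidences):
    """Confidence-weighted fusion of per-pixel depth cues: each pixel's
    output is the average of the cues weighted by their confidences."""
    d = np.asarray(depth_maps, float)     # shape (n_cues, n_pixels)
    c = np.asarray(confidences, float)    # matching per-pixel confidences
    return (d * c).sum(axis=0) / c.sum(axis=0)

# Two pixels, three cues (depths in arbitrary units)
defocus = np.array([2.0, 3.0])
corresp = np.array([2.2, 2.8])
lytro = np.array([2.1, 3.1])
conf = np.array([[1.0, 1.0],    # defocus confidence per pixel
                 [2.0, 1.0],    # correspondence confidence
                 [1.0, 2.0]])   # Lytro-estimate confidence
fused = combine_depth_cues([defocus, corresp, lytro], conf)
```

Confidences would in practice come from the cue responses themselves (e.g. sharpness of the defocus response, depth of the variance minimum for correspondence).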
Depth to Curie temperature across the central Red Sea from magnetic data using the de-fractal method
NASA Astrophysics Data System (ADS)
Salem, Ahmed; Green, Chris; Ravat, Dhananjay; Singh, Kumar Hemant; East, Paul; Fairhead, J. Derek; Mogren, Saad; Biegert, Ed
2014-06-01
The central Red Sea rift is considered to be an embryonic ocean. It is characterised by high heat flow, with more than 90% of the heat flow measurements exceeding the world mean and high values extending to the coasts - providing good prospects for geothermal energy resources. In this study, we aim to map the depth to the Curie isotherm (580 °C) in the central Red Sea based on magnetic data. A modified spectral analysis technique, the “de-fractal spectral depth method” is developed and used to estimate the top and bottom boundaries of the magnetised layer. We use a mathematical relationship between the observed power spectrum due to fractal magnetisation and an equivalent random magnetisation power spectrum. The de-fractal approach removes the effect of fractal magnetisation from the observed power spectrum and estimates the parameters of depth to top and depth to bottom of the magnetised layer using iterative forward modelling of the power spectrum. We applied the de-fractal approach to 12 windows of magnetic data along a profile across the central Red Sea from onshore Sudan to onshore Saudi Arabia. The results indicate variable magnetic bottom depths ranging from 8.4 km in the rift axis to about 18.9 km in the marginal areas. Comparison of these depths with published Moho depths, based on seismic refraction constrained 3D inversion of gravity data, showed that the magnetic bottom in the rift area corresponds closely to the Moho, whereas in the margins it is considerably shallower than the Moho. Forward modelling of heat flow data suggests that depth to the Curie isotherm in the centre of the rift is also close to the Moho depth. Thus Curie isotherm depths estimated from magnetic data may well be imaging the depth to the Curie temperature along the whole profile. Geotherms constrained by the interpreted Curie isotherm depths have subsequently been calculated at three points across the rift - indicating the variation in the likely temperature profile with depth.
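The underlying spectral-depth idea can be sketched for the non-fractal case: for a random magnetisation, ln P(k) decays with slope -2·z_t at high wavenumbers, so the depth to the top of the magnetised layer follows from a straight-line fit (the de-fractal method additionally removes the fractal k-dependence before fits of this kind, and the bottom depth is obtained from the low-wavenumber peak):

```python
import numpy as np

def spectral_top_depth(k, power):
    """Classic spectral-depth sketch: fit ln P(k) = c - 2*z_t*k and return
    the depth to the top of the magnetised layer, z_t."""
    slope, _ = np.polyfit(k, np.log(power), 1)
    return -slope / 2.0

k = np.linspace(0.1, 1.0, 50)        # wavenumber (rad/km)
z_t = 8.4                            # synthetic top depth (km)
power = np.exp(-2.0 * z_t * k)       # noise-free synthetic spectrum
depth = spectral_top_depth(k, power)
```

On real windows the fit is restricted to a wavenumber band, and iterative forward modelling of the whole spectrum, as in the paper, is more robust than two separate slope fits.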
On-line depth measurement for laser-drilled holes based on the intensity of plasma emission
NASA Astrophysics Data System (ADS)
Ho, Chao-Ching; Chiu, Chih-Mu; Chang, Yuan-Jen; Hsu, Jin-Chen; Kuo, Chia-Lung
2014-09-01
The direct time-resolved depth measurement of blind holes is extremely difficult due to the short time interval and the limited space inside the hole. This work presents a method that involves on-line plasma emission acquisition and analysis to obtain correlations between the machining processes and the optical signal output. Because the depths of laser-machined holes can be estimated on-line using a coaxial photodiode, one was employed in our inspection system. Our experiments were conducted in air under normal atmospheric conditions without gas assist. The intensity of radiation emitted from the vaporized material was found to correlate with the depth of the hole. The results indicate that the estimated depths of the laser-drilled holes were inversely proportional to the maximum plasma light emission measured for a given laser pulse number.
NASA Astrophysics Data System (ADS)
Simeonov, J.; Holland, K. T.
2015-12-01
We developed an inversion model for river bathymetry and discharge estimation based on measurements of surface currents, water surface elevation and shoreline coordinates. The model uses a simplification of the 2D depth-averaged steady shallow water equations based on a streamline-following system of coordinates and assumes a spatially uniform bed friction coefficient and eddy viscosity. The spatial resolution of the predicted bathymetry is related to the resolution of the surface currents measurements. The discharge is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The inversion model was tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID. The measurements were obtained in August 2010 when the discharge was about 223 m3/s and the maximum river depth was about 6.5 m. Surface currents covering a 10 km reach with 8 m spatial resolution were estimated from airborne infrared video and were converted to depth-averaged currents using acoustic Doppler current profiler (ADCP) measurements along eight cross-stream transects. The streamwise profile of the water surface elevation was measured using real-time kinematic GPS from a drifting platform. The value of the friction coefficient was obtained from forward calibration simulations that minimized the difference between the predicted and measured velocity and water level along the river thalweg. The predicted along/cross-channel water depth variation was compared to the depth measured with a multibeam echo sounder. The rms error between the measured and predicted depth along the thalweg was found to be about 60 cm and the estimated discharge was 5% smaller than the discharge measured by the ADCP.
High Spatio-Temporal Resolution Bathymetry Estimation and Morphology
NASA Astrophysics Data System (ADS)
Bergsma, E. W. J.; Conley, D. C.; Davidson, M. A.; O'Hare, T. J.
2015-12-01
In recent years, bathymetry estimates using video images have become increasingly accurate. With the cBathy code (Holman et al., 2013) fully operational, bathymetry results with 0.5-metre accuracy have been regularly obtained at Duck, USA. cBathy is based on observations of the dominant frequencies and wavelengths of surface wave motions and estimates the depth (and hence allows inference of bathymetry profiles) based on linear wave theory. Despite the good performance at Duck, large discrepancies were found related to tidal elevation and camera height (Bergsma et al., 2014) and on the camera boundaries. A tide-dependent floating pixel and camera-boundary solution has been proposed to overcome these issues (Bergsma et al., under review). The video-data collection is set to estimate depths hourly on a grid with a resolution on the order of 10x25 metres. Here, the application of cBathy at Porthtowan in the South-West of England is presented. Hourly depth estimates are combined and analysed over a period of 1.5 years (2013-2014). In this work the focus is on the sub-tidal region, where the best cBathy results are achieved. The morphology of the sub-tidal bar is tracked with high spatio-temporal resolution on short and longer time scales. Furthermore, the impact of storms and resets (sudden and large changes in bathymetry) on the sub-tidal area is clearly captured by the depth estimates. This application shows that the high spatio-temporal resolution of cBathy makes it a powerful tool for coastal research and coastal zone management.
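The linear-wave-theory step at the core of depth inversion schemes like cBathy can be sketched as follows: given an observed wave frequency and wavenumber, the dispersion relation ω² = gk·tanh(kh) is inverted for depth h. This is a minimal illustration using bisection root-finding, not cBathy's actual estimator.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def depth_from_dispersion(omega, k, h_max=50.0, tol=1e-8):
    """Invert omega^2 = g*k*tanh(k*h) for water depth h by bisection.
    Hypothetical helper for illustration; real video-based estimators
    fit many (omega, k) observations per grid cell."""
    def f(h):
        # f decreases monotonically in h, so a simple bisection works
        return omega**2 - G * k * math.tanh(k * h)
    lo, hi = 1e-6, h_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: synthesize omega for a known 4 m depth, then invert
h_true = 4.0
k = 0.12  # wavenumber (rad/m), e.g. from an observed wavelength
omega = math.sqrt(G * k * math.tanh(k * h_true))
print(depth_from_dispersion(omega, k))  # ≈ 4.0
```

In intermediate water depths the tanh term makes the inversion sensitive to depth, which is what allows bathymetry to be inferred; in very deep water tanh(kh) → 1 and the sensitivity is lost.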
Method for rapid estimation of scour at highway bridges based on limited site data
Holnbeck, S.R.; Parrett, Charles
1997-01-01
Limited site data were used to develop a method for rapid estimation of scour at highway bridges. The estimates can be obtained in a matter of hours rather than several days as required by more-detailed methods. Such a method is important because scour assessments are needed to identify scour-critical bridges throughout the United States. Using detailed scour-analysis methods and scour-prediction equations recommended by the Federal Highway Administration, the U.S. Geological Survey, in cooperation with the Montana Department of Transportation, obtained contraction, pier, and abutment scour-depth data for sites from 10 States. The data were used to develop relations between scour depth and hydraulic variables that can be rapidly measured in the field. Relations between scour depth and hydraulic variables, in the form of envelope curves, were based on simpler forms of detailed scour-prediction equations. To apply the rapid-estimation method, a 100-year recurrence interval peak discharge is determined, and bridge-length data are used in the field with graphs relating unit discharge to velocity and velocity to bridge backwater as a basis for estimating flow depths and other hydraulic variables that can then be applied using the envelope curves. The method was tested in the field. Results showed good agreement among individuals involved and with results from more-detailed methods. Although useful for identifying potentially scour-critical bridges, the method does not replace more-detailed methods used for design purposes. Use of the rapid-estimation method should be limited to individuals having experience in bridge scour, hydraulics, and flood hydrology, and some training in use of the method.
Underwater image enhancement through depth estimation based on random forest
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
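The ensemble-regression step can be sketched with scikit-learn: per-pixel features (RGB, luminance, colour difference, blurriness, dark channel) feed a random forest that predicts transmission in [0, 1]. The features and toy target below are assumptions for illustration, not the paper's training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-pixel feature vectors: R, G, B, luminance,
# colour difference, blurriness, dark channel (7 values each),
# with transmission t in [0, 1] as the regression target.
rng = np.random.default_rng(0)
X_train = rng.random((500, 7))
# Toy target loosely tied to the dark-channel feature (column 6)
t_train = np.clip(1.0 - 0.8 * X_train[:, 6]
                  + 0.05 * rng.standard_normal(500), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, t_train)

# Estimate transmission for new pixels; forest averages keep
# predictions within the training target range [0, 1]
X_new = rng.random((10, 7))
t_hat = model.predict(X_new)
print(t_hat.min(), t_hat.max())
```

The estimated transmission map would then drive the depth-dependent colour compensation and contrast enhancement described above.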
Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M. (Inventor)
2013-01-01
A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until a change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C_D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
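The alternating estimation loop of steps (e)-(f) can be sketched generically: depth and diameter estimates feed each other until the depth change falls below a tolerance. The estimator callables and the toy fixed-point example are hypothetical stand-ins, not the patented thermography model.

```python
def characterize_anomaly(est_depth, est_diameter, tol=1e-6, max_iter=100):
    """Alternate depth and diameter estimation until the change in the
    depth estimate is below `tol`. `est_depth` and `est_diameter` are
    placeholder callables for the flash-thermography model."""
    c_d = 1.0                      # diameter constant: unity on first pass
    depth = est_depth(c_d)
    diameter = est_diameter(depth)
    for _ in range(max_iter):
        new_depth = est_depth(diameter)
        diameter = est_diameter(new_depth)
        if abs(new_depth - depth) < tol:   # convergence test of step (f)
            return diameter, new_depth
        depth = new_depth
    return diameter, depth

# Toy estimators with a stable fixed point, for illustration only:
# depth = 0.5 + 0.1*diameter, diameter = 2*depth  =>  depth -> 0.625
diam, depth = characterize_anomaly(lambda d: 0.5 + 0.1 * d,
                                   lambda z: 2.0 * z)
print(round(depth, 3))  # → 0.625
```

Convergence is guaranteed here because the composed update is a contraction; the real model's convergence would depend on its sensitivity coefficients.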
Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework
NASA Astrophysics Data System (ADS)
Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy
2014-09-01
We propose a hybrid method for stereo disparity estimation by combining block and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18 % of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, then determining boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from boundaries' disparities. We consider an application of our method for depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6 % and some recent methods by up to 6.1 %. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
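The SAD cost function at the heart of the matching step can be illustrated on a single scanline: for each candidate shift, sum the absolute intensity differences over a small block and keep the shift with minimum cost. This is a simplified 1-D sketch; the paper applies SAD at segment boundaries of full images.

```python
import numpy as np

def sad_disparity(left, right, x, block=5, max_disp=16):
    """Disparity at column x of a scanline pair by minimizing the sum
    of absolute differences (SAD) over candidate shifts."""
    half = block // 2
    ref = left[x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:          # candidate block would leave the image
            break
        cand = right[x - d - half:x - d + half + 1].astype(float)
        cost = np.abs(ref - cand).sum()   # SAD cost for this shift
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic scanline pair: the left view sees the pattern shifted by 6 px
rng = np.random.default_rng(1)
right = rng.integers(0, 255, 128)
left = np.roll(right, 6)
print(sad_disparity(left, right, x=60))  # → 6
```

With the correct shift the cost drops to zero on this noise-free example; real images need the segmentation and refinement stages described above to resolve ambiguities.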
Sampling strategies to improve passive optical remote sensing of river bathymetry
Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.
2018-01-01
Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
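The progressive-truncation idea can be sketched numerically: regress depth against an image-derived quantity (e.g. a log band ratio), repeatedly excluding calibration points deeper than a series of cutoffs, and watch the fit quality. The synthetic data below, in which the signal saturates beyond an assumed d_max of about 3 m, are an illustration, not the paper's field measurements.

```python
import numpy as np

def obra_truncation(X, d, cutoffs):
    """Ordinary-least-squares fit of depth d against image quantity X,
    progressively excluding calibration points deeper than each cutoff.
    Returns (cutoff, R^2) pairs; a sketch of the truncation idea, not
    the full OBRA band-pair search."""
    results = []
    for cut in cutoffs:
        keep = d <= cut
        coef = np.polyfit(X[keep], d[keep], 1)
        pred = np.polyval(coef, X[keep])
        ss_res = np.sum((d[keep] - pred) ** 2)
        ss_tot = np.sum((d[keep] - d[keep].mean()) ** 2)
        results.append((cut, 1.0 - ss_res / ss_tot))
    return results

# Synthetic calibration set: the image quantity saturates beyond ~3 m,
# mimicking the loss of optical sensitivity past d_max
rng = np.random.default_rng(2)
d = rng.uniform(0.2, 5.0, 300)
X = np.where(d < 3.0, 0.5 * d, 1.5) + 0.02 * rng.standard_normal(300)
for cut, r2 in obra_truncation(X, d, [2.0, 3.0, 4.0, 5.0]):
    print(cut, round(r2, 3))  # R^2 degrades once depths beyond d_max enter
```

The cutoff at which R² begins to collapse gives an image-based estimate of the maximum detectable depth, mirroring the inference of d_max described above.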
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.
2014-12-01
Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.
Bolduc, F.; Afton, A.D.
2008-01-01
Wetland use by waterbirds is highly dependent on water depth, and depth requirements generally vary among species. Furthermore, water depth within wetlands often varies greatly over time due to unpredictable hydrological events, making comparisons of waterbird abundance among wetlands difficult as effects of habitat variables and water depth are confounded. Species-specific relationships between bird abundance and water depth necessarily are non-linear; thus, we developed a methodology to correct waterbird abundance for variation in water depth, based on the non-parametric regression of these two variables. Accordingly, we used the difference between observed and predicted abundances from non-parametric regression (analogous to parametric residuals) as an estimate of bird abundance at equivalent water depths. We scaled this difference to levels of observed and predicted abundances using the formula: ((observed - predicted abundance)/(observed + predicted abundance)) × 100. This estimate also corresponds to the observed:predicted abundance ratio, which allows easy interpretation of results. We illustrated this methodology using two hypothetical species that differed in water depth and wetland preferences. Comparisons of wetlands, using both observed and relative corrected abundances, indicated that relative corrected abundance adequately separates the effect of water depth from the effect of wetlands. © 2008 Elsevier B.V.
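The scaled residual is a one-line computation; the sketch below applies the formula to an illustrative case where a wetland holds twice the birds expected at its water depth.

```python
def relative_corrected_abundance(observed, predicted):
    """Depth-corrected abundance index, in percent:
    ((observed - predicted) / (observed + predicted)) * 100.
    `predicted` comes from the non-parametric regression of
    abundance on water depth."""
    return (observed - predicted) / (observed + predicted) * 100.0

# 40 birds observed where the depth-abundance curve predicts 20
print(relative_corrected_abundance(40, 20))  # → 33.33...
```

The index is bounded between -100 and +100, which is what makes wetlands with very different absolute abundances comparable.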
NASA Astrophysics Data System (ADS)
Gierens, Rosa T.; Henriksson, Svante; Josipovic, Micky; Vakkari, Ville; van Zyl, Pieter G.; Beukes, Johan P.; Wood, Curtis R.; O'Connor, Ewan J.
2018-05-01
The atmospheric boundary layer (BL) is the atmospheric layer coupled to the Earth's surface at relatively short timescales. A key quantity is the BL depth, which is important in many applied areas of weather and climate such as air-quality forecasting. Studying BLs in climates and biomes across the globe is important, particularly in the under-sampled southern hemisphere. The present study is based on a grazed grassland-savannah area in northwestern South Africa during October 2012-August 2014. Ceilometers are probably the cheapest method for measuring continuous aerosol profiles up to several kilometers above ground and are thus an ideal tool for long-term studies of BLs. A ceilometer-estimated BL depth is based on profiles of attenuated backscattering coefficients from atmospheric aerosols; the sharpest drop often occurs at BL top. Based on this, we developed a new method for layer detection that we call the signal-limited layer method. The new algorithm was applied to ceilometer profiles which thus classified BL into classic regime types: daytime convective mixing, and a double layer at night of surface-based stable with a residual layer above it. We employed wavelet fitting to increase successful BL estimation for noisy profiles. The layer-detection algorithm was supported by an eddy-flux station, rain gauges, and manual inspection. Diurnal cycles were often clear, with BL depth detected for 50% of the daytime typically being 1-3 km, and for 80% of the night-time typically being a few hundred meters. Variability was also analyzed with respect to seasons and years. Finally, BL depths were compared with ERA-Interim estimates of BL depth to show reassuring agreement.
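The core gradient idea, locating the BL top at the sharpest decrease in attenuated backscatter, can be sketched on a synthetic profile. This is a minimal illustration; the signal-limited layer method described above adds wavelet fitting and noise handling that this sketch omits.

```python
import numpy as np

def bl_depth_sharpest_drop(heights, backscatter):
    """Estimate boundary-layer depth as the height of the steepest
    decrease in the attenuated backscatter profile."""
    grad = np.gradient(backscatter, heights)
    return heights[np.argmin(grad)]       # most negative gradient

# Synthetic profile: aerosol-laden air up to ~1.2 km, clean air above,
# with a smooth transition (logistic drop) at the BL top
z = np.linspace(0.0, 3000.0, 301)                  # height (m)
beta = 1.0 / (1.0 + np.exp((z - 1200.0) / 50.0))   # backscatter proxy
print(bl_depth_sharpest_drop(z, beta))  # → 1200.0
```

On noisy ceilometer profiles the raw gradient minimum can jump between layers, which is why the paper's algorithm constrains the search and fits the transition shape rather than taking the pointwise minimum.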
Model based estimation of image depth and displacement
NASA Technical Reports Server (NTRS)
Damour, Kevin T.
1992-01-01
Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. 
The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.
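The filtering principle behind the ROMKF can be illustrated with the scalar Kalman measurement update: a predicted field value with some variance is fused with a noisy observation, and the posterior variance shrinks. This sketch shows only the generic update, not the reduced-order image formulation of the paper.

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update: fuse a predicted value
    x_prior (variance p_prior) with a noisy observation z (variance r).
    Returns the posterior estimate and its variance."""
    k = p_prior / (p_prior + r)            # Kalman gain
    x_post = x_prior + k * (z - x_prior)   # blend prediction and measurement
    p_post = (1.0 - k) * p_prior           # variance shrinks after fusion
    return x_post, p_post

# Fuse a predicted depth of 2.0 (variance 0.5) with a noisy
# observation of 2.4 (variance 0.5): equal variances, so split evenly
print(kalman_update(2.0, 0.5, 2.4, 0.5))  # → (2.2, 0.25)
```

In the 2-D ROMKF this update runs over a reduced local support region of the field, and the 3-D extension described above adds the previously restored field as a deterministic input.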
A comparison of hydrographically and optically derived mixed layer depths
Zawada, D.G.; Zaneveld, J.R.V.; Boss, E.; Gardner, W.D.; Richardson, M.J.; Mishonov, A.V.
2005-01-01
Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988-1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found approximately 70% agreement between the best hydrographical-optical algorithm pairings. Copyright 2005 by the American Geophysical Union.
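One common hydrographic MLD criterion, against which optical estimates are typically compared, takes the shallowest depth where density exceeds its near-surface value by a small threshold. The sketch below uses a conventional threshold of 0.03 kg/m³ at a 10 m reference depth; these parameter values are a common convention in the literature, not necessarily those of the six algorithms in the study.

```python
import numpy as np

def mld_density_threshold(depths, density, d_ref=10.0, thresh=0.03):
    """Mixed layer depth: shallowest depth below d_ref where density
    exceeds its reference-depth value by `thresh` (kg/m^3)."""
    rho_ref = np.interp(d_ref, depths, density)
    below = depths > d_ref
    exceeds = density[below] > rho_ref + thresh
    if not exceeds.any():
        return depths[-1]          # no pycnocline found in the profile
    return depths[below][exceeds][0]

# Synthetic profile: uniform density above a pycnocline starting at 60 m
z = np.arange(0.0, 200.0, 2.0)
rho = 1025.0 + np.where(z < 60.0, 0.0, 0.01 * (z - 60.0))
print(mld_density_threshold(z, rho))  # → 64.0
```

Because the threshold is crossed slightly below the top of the pycnocline, different thresholds (or gradient-maximum criteria, as used for the c660 comparison) yield systematically different MLDs, which is one source of the disparities noted above.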
Landform partitioning and estimates of deep storage of soil organic matter in Zackenberg, Greenland
NASA Astrophysics Data System (ADS)
Palmtag, Juri; Cable, Stefanie; Christiansen, Hanne H.; Hugelius, Gustaf; Kuhry, Peter
2018-05-01
Soils in the northern high latitudes are a key component in the global carbon cycle, with potential feedback on climate. This study aims to improve the previous soil organic carbon (SOC) and total nitrogen (TN) storage estimates for the Zackenberg area (NE Greenland) that were based on a land cover classification (LCC) approach, by using geomorphological upscaling. In addition, novel organic carbon (OC) estimates for deeper alluvial and deltaic deposits (down to 300 cm depth) are presented. We hypothesise that landforms will better represent the long-term slope and depositional processes that result in deep SOC burial in this type of mountain permafrost environment. The updated mean SOC storage for the 0-100 cm soil depth is 4.8 kg C m-2, which is 42 % lower than the previous estimate of 8.3 kg C m-2 based on land cover upscaling. Similarly, the mean soil TN storage in the 0-100 cm depth decreased by 44 %, from 0.50 (±0.1 CI) to 0.28 (±0.1 CI) kg TN m-2. We ascribe the differences to a previous areal overestimate of SOC- and TN-rich vegetated land cover classes. The landform-based approach more correctly constrains the depositional areas in alluvial fans and deltas with high SOC and TN storage. These are also areas of deep carbon storage with an additional 2.4 kg C m-2 in the 100-300 cm depth interval. This research emphasises the need to consider geomorphology when assessing SOC pools in mountain permafrost landscapes.
NASA Technical Reports Server (NTRS)
Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane
2016-01-01
Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km²) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth retrieval models, which are either based on absolute reflectance (single-channel model) or a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated by using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error below 1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent from in situ measurements.
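A dual-channel retrieval of this kind is often calibrated as a linear model in the log band ratio, d = a + b·ln(R1/R2), with coefficients found by least squares against known depths (here, from the pre/post-drainage DEM). The exponential-decay synthetic data below are an assumption for illustration, not the WV-2 measurements.

```python
import numpy as np

def fit_dual_channel(R1, R2, depth):
    """Calibrate a dual-channel depth model d = a + b*ln(R1/R2) by
    ordinary least squares against reference depths."""
    X = np.log(R1 / R2)
    b, a = np.polyfit(X, depth, 1)   # slope, intercept
    return a, b

# Synthetic calibration: the band ratio decays exponentially with
# depth, so ln(R1/R2) = -(d - 1)/2 and the model should recover
# a = 1.0, b = -2.0 exactly
rng = np.random.default_rng(3)
d = rng.uniform(0.5, 8.0, 200)
R2 = np.full_like(d, 0.2)
R1 = 0.2 * np.exp(-(d - 1.0) / 2.0)
a, b = fit_dual_channel(R1, R2, d)
print(round(a, 2), round(b, 2))  # → 1.0 -2.0
```

Using a band ratio rather than a single band's absolute reflectance suppresses the effect of variable bottom albedo, which is the usual motivation for the dual-channel form.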
Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State
Frans, Lonna M.
2008-01-01
Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
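The probability surface behind such maps comes from the standard logistic form p = 1/(1 + exp(-(b0 + b1·x1 + …))). The coefficient values and the two-variable model below are illustrative assumptions, not the report's fitted values.

```python
import math

def nitrate_exceedance_probability(coeffs, x):
    """Probability that nitrate exceeds 2 mg/L from a fitted logistic
    model. `coeffs[0]` is the intercept; the rest pair with the
    predictor values in `x`."""
    z = coeffs[0] + sum(b * v for b, v in zip(coeffs[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical model: intercept, % agricultural land within 4 km,
# well depth (feet); deeper wells lower the exceedance probability
coeffs = [-3.0, 0.04, -0.002]
p = nitrate_exceedance_probability(coeffs, [60.0, 145.0])
print(round(p, 3))  # → 0.291
```

Inverting the same expression for the depth at which p drops to 0.10 is how the "drill-to" depth maps described above can be derived from the fitted model.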
NASA Technical Reports Server (NTRS)
Markus, Thorsten; Maksym, Ted
2007-01-01
Passive microwave snow depth, ice concentration, and ice motion estimates are combined with snowfall from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-40) from 1979-2001 to estimate the prevalence of snow-to-ice conversion (snow-ice formation) on level sea ice in the Antarctic for April-October. Snow ice is ubiquitous in all regions throughout the growth season. Calculated snow-ice thicknesses fall within the range of estimates from ice core analysis for most regions. However, uncertainties in both this analysis and in situ data limit the usefulness of snow depth and snow-ice production to evaluate the accuracy of ERA-40 snowfall. The East Antarctic is an exception, where calculated snow-ice production exceeds observed ice thickness over wide areas, suggesting that ERA-40 precipitation is too high there. Snow-ice thickness variability is strongly controlled not just by snow accumulation rates, but also by ice divergence. Surprisingly, snow-ice production is largely independent of snow depth, indicating that the latter may be a poor indicator of total snow accumulation. Using the presence of snow-ice formation as a proxy indicator for near-zero freeboard, we examine the possibility of estimating level ice thickness from satellite snow depths. A best estimate for the mean level ice thickness in September is 53 cm, comparing well with 51 cm from ship-based observations. The error is estimated to be 10-20 cm, which is similar to the observed interannual and regional variability. Nevertheless, this is comparable to expected errors for ice thickness determined by satellite altimeters. Improvement in satellite snow depth retrievals would benefit both of these methods.
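The near-zero-freeboard proxy rests on Archimedes' principle: snow loading depresses the ice surface, and when the freeboard reaches zero the surface floods and snow ice can form. The sketch below computes freeboard from typical density values, which are assumptions for illustration rather than the paper's parameters.

```python
def freeboard(h_ice, h_snow, rho_i=900.0, rho_s=330.0, rho_w=1024.0):
    """Sea-ice freeboard (m) from isostatic balance. Snow-ice formation
    becomes possible when the freeboard reaches zero or below (flooded
    surface). Densities (kg/m^3) are typical literature values."""
    return h_ice * (rho_w - rho_i) / rho_w - h_snow * rho_s / rho_w

# 53 cm of level ice (the September best estimate): bare ice floats
# with positive freeboard, but a heavy snow load sinks it to sea level
print(round(freeboard(0.53, 0.0), 3))    # positive freeboard, no snow
print(round(freeboard(0.53, 0.25), 3))   # negative: flooding likely
```

Solving freeboard = 0 for h_ice given a satellite snow depth is the inversion that yields the level ice thickness estimate discussed above.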
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning's "n") were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. 
Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet number was related to the two-dimensional model's assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature. Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. 
The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
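The calibrated Peclet number above is used to assign eddy-viscosity coefficients dynamically. A minimal sketch of that conversion, assuming the RMA2-style relation P = ρ·u·Δx/E (the exact formulation in RMA2 may differ in detail; the velocity and element-size values here are purely illustrative):

```python
def eddy_viscosity(u, dx, peclet, rho=1000.0):
    """Eddy-viscosity coefficient E (Pa*s) from a Peclet number.

    Assumes the RMA2-style definition P = rho * u * dx / E, so
    E = rho * u * dx / P, with u in m/s and element size dx in m.
    """
    return rho * u * dx / peclet

# e.g. a 1 m/s depth-averaged flow on 100 m elements with the
# calibrated Peclet number of 38.3 reported in the abstract
E = eddy_viscosity(1.0, 100.0, 38.3)
```

Larger elements or faster flow thus receive proportionally larger eddy viscosity, which is why a single Peclet number can parameterize mixing across the whole mesh.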
Seismic Source Scaling and Characteristics of Six North Korean Underground Nuclear Explosions
NASA Astrophysics Data System (ADS)
Park, J.; Stump, B. W.; Che, I. Y.; Hayward, C.
2017-12-01
We estimate the range of yields and source depths for the six North Korean underground nuclear explosions in 2006, 2009, 2013, 2016 (January and September), and 2017, based on regional seismic observations in South Korea and China. Seismic data used in this study are from three seismo-acoustic stations, BRDAR, CHNAR, and KSGAR, cooperatively operated by SMU and KIGAM, the KSRS seismic array operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, and MDJ, a station in the Global Seismographic Network. We calculate spectral ratios for event pairs using seismograms from the six explosions observed along the same paths and at the same receivers. These relative seismic source scaling spectra for Pn, Pg, Sn, and surface wave windows provide a basis for a grid search source solution that estimates source yield and depth for each event based on both the modified Mueller and Murphy (1971; MM71) and Denny and Johnson (1991; DJ91) source models. The grid search is used to identify the best-fit empirical spectral ratios subject to the source models by minimizing the goodness-of-fit (GOF) in the frequency range of 0.5-15 Hz. For all cases, the DJ91 model produces higher ratios of depth and yield than MM71. These initial results include significant trade-offs between depth and yield in all cases. In order to better take the effect of source depth into account, a modified grid search was implemented that includes the propagation effects for different source depths by including reflectivity Green's functions in the grid search procedure. This revision reduces the trade-offs between depth and yield, results in better model fits to frequencies as high as 15 Hz, and GOF values smaller than those where the depth effects on the Green's functions were ignored. The depth and yield estimates for all six explosions using this new procedure will be presented.
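The spectral-ratio grid search described above can be sketched as follows. This is a schematic, not the authors' implementation: `model_ratio` is a placeholder standing in for an MM71- or DJ91-style theoretical source-spectral ratio, and the log-L2 misfit is one plausible choice of GOF measure.

```python
import numpy as np

def spectral_ratio(spec_a, spec_b, eps=1e-12):
    # empirical source spectral ratio for an event pair recorded on the
    # same path and receiver (propagation terms approximately cancel)
    return spec_a / (spec_b + eps)

def grid_search(freqs, obs_ratio, model_ratio, yields, depths):
    # model_ratio(f, W, h) is a placeholder for a theoretical ratio;
    # returns the (yield, depth) pair minimizing a log-domain L2 misfit
    best, best_misfit = None, np.inf
    for W in yields:
        for h in depths:
            pred = model_ratio(freqs, W, h)
            misfit = np.sqrt(np.mean((np.log(obs_ratio) - np.log(pred)) ** 2))
            if misfit < best_misfit:
                best, best_misfit = (W, h), misfit
    return best, best_misfit
```

With a sufficiently fine grid, the depth-yield trade-off discussed in the abstract shows up as a valley of near-equal misfit in the (W, h) plane.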
NASA Astrophysics Data System (ADS)
Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert
2018-04-01
Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
The effect of S-wave arrival times on the accuracy of hypocenter estimation
Gomberg, J.S.; Shedlock, K.M.; Roecker, S.W.
1990-01-01
We have examined the theoretical basis behind some of the widely accepted "rules of thumb" for obtaining accurate hypocenter estimates that pertain to the use of S phases and illustrate, in a variety of ways, why and when these "rules" are applicable. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. We conclude that a correctly timed S phase recorded within about 1.4 focal depths of the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect.
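The extra depth constraint that an S arrival provides can be illustrated with a toy straight-ray model (not the layered models HYPOINVERSE actually uses; the velocities are illustrative assumptions):

```python
import numpy as np

vp, vs = 6.0, 3.5  # illustrative crustal P and S velocities, km/s

def travel_time(d, z, v):
    # straight-ray travel time in a homogeneous half-space from a source
    # at depth z (km) to a surface station at epicentral distance d (km)
    return np.hypot(d, z) / v

def depth_sensitivity(d, z, v):
    # partial derivative of arrival time with respect to focal depth,
    # the quantity a linearized least-squares location sees for depth
    return z / (v * np.hypot(d, z))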
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan; Ekinci, Yunus Levent; Göktürkler, Gökhan; Turan, Seçil
2017-01-01
3D non-linear inversion of total field magnetic anomalies caused by vertical-sided prismatic bodies has been achieved by differential evolution (DE), which is one of the population-based evolutionary algorithms. We have demonstrated the efficiency of the algorithm on both synthetic and field magnetic anomalies by estimating horizontal distances from the origin in both north and east directions, depths to the top and bottom of the bodies, inclination and declination angles of the magnetization, and intensity of magnetization of the causative bodies. In the synthetic anomaly case, we have considered both noise-free and noisy data sets due to two vertical-sided prismatic bodies in a non-magnetic medium. For the field case, airborne magnetic anomalies originated from intrusive granitoids at the eastern part of the Biga Peninsula (NW Turkey) which is composed of various kinds of sedimentary, metamorphic and igneous rocks, have been inverted and interpreted. Since the granitoids are the outcropped rocks in the field, the estimations for the top depths of two prisms representing the magnetic bodies were excluded during inversion studies. Estimated bottom depths are in good agreement with the ones obtained by a different approach based on 3D modelling of pseudogravity anomalies. Accuracy of the estimated parameters from both cases has been also investigated via probability density functions. Based on the tests in the present study, it can be concluded that DE is a useful tool for the parameter estimation of source bodies using magnetic anomalies.
NASA Astrophysics Data System (ADS)
Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.
2014-12-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016
NASA Astrophysics Data System (ADS)
Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry
2016-04-01
Seismic event depth determination is critical for the event screening process at the International Data Center, CTBTO. A thorough determination of the event depth can be conducted mostly through additional special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have depth constrained to the surface, making the depth screening criterion not applicable. Further, it may result in a heavier workload to manually distinguish between subsurface and deeper crustal events. Since the shape of the first few seconds of signal of very shallow events is very sensitive to the depth phases, cross correlation between observed and theoretical seismograms can provide a basis for the event depth estimation, and thus an extension of the screening process. We applied this approach mostly to events at teleseismic and partially regional distances. The approach was found efficient for the seismic event screening process, with certain caveats related mostly to poorly defined source and receiver crustal models, which can shift the depth estimate. An adjustable teleseismic attenuation model (t*) was used for the synthetics, since this characteristic is not known for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNE) with presumably known depths, and recent DPRK nuclear tests. The teleseismic synthetic approach is based on the stationary phase approximation with the hudson96 program, and the regional modelling was done with the generalized ray technique by Vlastislav Cerveny, modified to account for complex source topography. The software prototype is designed to be used for the Expert Technical Analysis at the IDC. With this, the design effectively reuses the NDC-in-a-Box code and can be comfortably utilized by the NDC users.
The package uses Geotool as a front-end for data retrieval and pre-processing. After the event database is compiled, control is passed to the driver software, running the external processing and plotting toolboxes, which controls the final stage and produces the final result. The modules are mostly Python-coded, with C-coded synthetics (Raysynth3D, regional synthetics with complex topography) and FORTRAN-coded synthetics from the CPS330 software package by Robert Herrmann of Saint Louis University. An extension of this single-station depth determination method is under development and uses joint information from all stations participating in processing. It is based on simultaneous depth and moment tensor determination for both short- and long-period seismic phases. A novel approach recently developed for microseismic event location utilizing only phase waveform information was migrated to a global scale. It should provide faster computation, as it does not require intensive synthetic modelling, and might benefit processing of noisy signals. A consistent depth estimate for all recent nuclear tests was produced for the vast number of IMS stations (primary and auxiliary) used in processing.
Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Braun, Marius; Leiner, Ulrich; Ruschin, Detlef
2011-03-01
The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D-displays raises the question which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supporting observers with depth information. We set up a carefully designed test situation, widely excluding undesired other distance hints. Thereafter we conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted utilizing our autostereoscopic "Free2C"-displays, which are capable of detecting the user's eye position and steering the image lobes dynamically into that direction. At the same time, eye position was used to update the virtual camera's location and thereby offer motion parallax to the observer. As far as we know, this was the first time that such a test has been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is by an order of magnitude more relevant. Combining both cues improved the precision of distance estimation by another 30-40%.
Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada
NASA Astrophysics Data System (ADS)
Ernanda; Primasty, A. Q. T.; Akbar, K. A.
2018-03-01
Overpressure is an abnormally high subsurface pressure of any fluid which exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and Equivalent Depth methods, based on well data and normal compaction trend analysis. The Equivalent Depth method uses the effective vertical stress principle, whereas the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was only applicable to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations with the Eaton method (7241.3 psi) and the Equivalent Depth method (6619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be the possible overpressure zone, with hydrocarbon generation as the likely mechanism.
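The velocity form of Eaton's relation mentioned above can be written as a one-line function. This is a generic sketch (the exponent n = 3 is the customary value for velocity data; all input numbers in the usage note are hypothetical, not the Penobscot L-30 values):

```python
def eaton_pore_pressure(s_v, p_hyd, v_obs, v_norm, n=3.0):
    """Eaton's method, velocity form: Pp = Sv - (Sv - Phyd) * (Vobs/Vnorm)**n.

    s_v    : overburden (vertical) stress
    p_hyd  : hydrostatic (normal) pressure at the same depth
    v_obs  : observed interval velocity
    v_norm : velocity predicted by the normal compaction trend
    n = 3 is the customary Eaton exponent for velocity data.
    """
    return s_v - (s_v - p_hyd) * (v_obs / v_norm) ** n
```

On-trend velocities (v_obs = v_norm) return the hydrostatic pressure, while slower-than-trend velocities return an elevated pore pressure, which is exactly how undercompacted, possibly overpressured shale intervals are flagged.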
NASA Astrophysics Data System (ADS)
Frisbee, Marty D.; Tolley, Douglas G.; Wilson, John L.
2017-04-01
Estimates of groundwater circulation depths based on field data are lacking. These data are critical to inform and refine hydrogeologic models of mountainous watersheds, and to quantify depth and time dependencies of weathering processes in watersheds. Here we test two competing hypotheses on the role of geology and geologic setting in deep groundwater circulation and the role of deep groundwater in the geochemical evolution of streams and springs. We test these hypotheses in two mountainous watersheds that have distinctly different geologic settings (one crystalline, metamorphic bedrock and the other volcanic bedrock). Estimated circulation depths for springs in both watersheds range from 0.6 to 1.6 km and may be as great as 2.5 km. These estimated groundwater circulation depths are much deeper than commonly modeled depths suggesting that we may be forcing groundwater flow paths too shallow in models. In addition, the spatial relationships of groundwater circulation depths are different between the two watersheds. Groundwater circulation depths in the crystalline bedrock watershed increase with decreasing elevation indicative of topography-driven groundwater flow. This relationship is not present in the volcanic bedrock watershed suggesting that both the source of fracturing (tectonic versus volcanic) and increased primary porosity in the volcanic bedrock play a role in deep groundwater circulation. The results from the crystalline bedrock watershed also indicate that relatively deep groundwater circulation can occur at local scales in headwater drainages less than 9.0 km2 and at larger fractions than commonly perceived. Deep groundwater is a primary control on streamflow processes and solute concentrations in both watersheds.
Hugelius, Gustaf; Strauss, J.; Zubrzycki, S.; ...
2014-12-01
Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but SOC stock estimates were poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of permafrost SOC stocks, including quantitative uncertainty estimates, in the 0–3 m depth range in soils as well as for sediments deeper than 3 m in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. Revised estimates are based on significantly larger databases compared to previous studies. Despite this, there is evidence of significant remaining regional data gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for deposits below 3 m depth in deltas and the Yedoma region. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are substantial differences in other components, including the fraction of perennially frozen SOC. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 12 and 472 ± 27 Pg for the 0–0.3 and 0–1 m soil depths, respectively (±95% confidence intervals). Storage of SOC in 0–3 m of soils is estimated at 1035 ± 150 Pg. Of this, 34 ± 16 Pg C is stored in poorly developed soils of the High Arctic. Based on generalized calculations, storage of SOC below 3 m of surface soils in deltaic alluvium of major Arctic rivers is estimated as 91 ± 52 Pg. In the Yedoma region, estimated SOC stocks below 3 m depth are 181 ± 54 Pg, of which 74 ± 20 Pg is stored in intact Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits.
Total estimated SOC storage for the permafrost region is ∼1300 Pg with an uncertainty range of ∼1100 to 1500 Pg. Of this, ∼500 Pg is in non-permafrost soils, seasonally thawed in the active layer or in deeper taliks, while ∼800 Pg is perennially frozen. In conclusion, this represents a substantial ∼300 Pg lowering of the estimated perennially frozen SOC stock compared to previous estimates.
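The quoted total can be checked against the component stocks. The sketch below combines the 95% confidence intervals in quadrature, assuming the components are independent (an assumption made here for illustration; the study's own error propagation may differ):

```python
import math

# component SOC stocks from the abstract, in Pg C, with 95% CIs
components = {
    "soils, 0-3 m":            (1035, 150),
    "deltaic alluvium, >3 m":  (91, 52),
    "Yedoma region, >3 m":     (181, 54),
}

total = sum(value for value, _ in components.values())
# combine CIs in quadrature, assuming independent errors
ci = math.sqrt(sum(err ** 2 for _, err in components.values()))
```

This gives roughly 1307 ± 168 Pg, consistent with the ∼1300 Pg total and the ∼1100 to 1500 Pg range quoted above.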
NASA Astrophysics Data System (ADS)
Basu, Biswajit
2017-12-01
Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though there is only one available lower bound on the wave height in the case of the speed of current greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.
Ion penetration depth in the plant cell wall
NASA Astrophysics Data System (ADS)
Yu, L. D.; Vilaithong, T.; Phanchaisri, B.; Apavatjrut, P.; Anuntalabhochai, S.; Evans, P.; Brown, I. G.
2003-05-01
This study investigates the depth of ion penetration in plant cell wall material. Based on the biological structure of the plant cell wall, a physical model is proposed which assumes that the wall is composed of randomly orientated layers of cylindrical microfibrils made from cellulose molecules of C6H12O6. With this model, we have determined numerical factors for ion implantation in the plant cell wall to correct values calculated from conventional ion implantation programs. Using these correction factors, it is possible to apply common ion implantation programs to estimate the ion penetration depth in the cell for bioengineering purposes. These estimates are compared with measured data from experiments and good agreement is achieved.
Airborne radar surveys of snow depth over Antarctic sea ice during Operation IceBridge
NASA Astrophysics Data System (ADS)
Panzer, B.; Gomez-Garcia, D.; Leuschen, C.; Paden, J. D.; Gogineni, P. S.
2012-12-01
Over the last decade, multiple satellite-based laser and radar altimeters, optimized for polar observations, have been launched with one of the major objectives being the determination of global sea ice thickness and distribution [5, 6]. Estimation of sea-ice thickness from these altimeters relies on freeboard measurements and the presence of snow cover on sea ice affects this estimate. Current means of estimating the snow depth rely on daily precipitation products and/or data from passive microwave sensors [2, 7]. Even a small uncertainty in the snow depth leads to a large uncertainty in the sea-ice thickness estimate. To improve the accuracy of the sea-ice thickness estimates and provide validation for measurements from satellite-based sensors, the Center for Remote Sensing of Ice Sheets deploys the Snow Radar as a part of NASA Operation IceBridge. The Snow Radar is an ultra-wideband, frequency-modulated, continuous-wave radar capable of resolving snow depth on sea ice from 5 cm to more than 2 meters from long-range, airborne platforms [4]. This paper will discuss the algorithm used to directly extract snow depth estimates exclusively using the Snow Radar data set by tracking both the air-snow and snow-ice interfaces. Prior work in this regard used data from a laser altimeter for tracking the air-snow interface or worked under the assumption that the return from the snow-ice interface was greater than that from the air-snow interface due to a larger dielectric contrast, which is not true for thick or higher loss snow cover [1, 3]. This paper will also present snow depth estimates from Snow Radar data during the NASA Operation IceBridge 2010-2011 Antarctic campaigns. In 2010, three sea ice flights were flown, two in the Weddell Sea and one in the Amundsen and Bellingshausen Seas. All three flight lines were repeated in 2011, allowing an annual comparison of snow depth. 
In 2011, a repeat pass of an earlier flight in the Weddell Sea was flown, allowing for a comparison of snow depths with two weeks elapsed between passes. [1] Farrell, S.L., et al., "A First Assessment of IceBridge Snow and Ice Thickness Data Over Arctic Sea Ice," IEEE Tran. Geoscience and Remote Sensing, Vol. 50, No. 6, pp. 2098-2111, June 2012. [2] Kwok, R., and G. F. Cunningham, "ICESat over Arctic sea ice: Estimation of snow depth and ice thickness," J. Geophys. Res., 113, C08010, 2008. [3] Kwok, R., et al., "Airborne surveys of snow depth over Arctic sea ice," J. Geophys. Res., 116, C11018, 2011. [4] Panzer, B., et al., "An ultra-wideband, microwave radar for measuring snow thickness on sea ice and mapping near-surface internal layers in polar firn," Submitted to J. Glaciology, July 23, 2012. [5] Wingham, D.J., et al., "CryoSat: A Mission to Determine the Fluctuations in Earth's Land and Marine Ice Fields," Advances in Space Research, Vol. 37, No. 4, pp. 841-871, 2006. [6] Zwally, H. J., et al., "ICESat's laser measurements of polar ice, atmosphere, ocean, and land," J. Geodynamics, Vol. 34, No. 3-4, pp. 405-445, Oct-Nov 2002. [7] Zwally, H. J., et al., "ICESat measurements of sea ice freeboard and estimates of sea ice thickness in the Weddell Sea," J. Geophys. Res., 113, C02S15, 2008.
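The interface-tracking idea behind the snow depth retrieval can be sketched with a toy two-peak tracker. This is a deliberately simplified stand-in for the Snow Radar algorithm: it assumes the two strongest returns are the air-snow and snow-ice interfaces (which, as noted above, is not guaranteed for thick or lossy snow), and the snow permittivity and guard-window values are illustrative:

```python
import numpy as np

def snow_depth(trace, dt, eps_snow=1.53, guard=10):
    """Toy snow-depth estimate from a radar power trace.

    trace    : 1-D array of return power per fast-time sample
    dt       : fast-time sample interval in seconds
    eps_snow : assumed relative permittivity of dry snow (illustrative)
    guard    : samples masked around the first peak before the second search
    """
    c = 2.998e8  # free-space speed of light, m/s
    i1 = int(np.argmax(trace))                 # strongest interface return
    masked = trace.copy()
    masked[max(i1 - guard, 0):i1 + guard] = 0.0
    i2 = int(np.argmax(masked))                # second interface return
    two_way_delay = abs(i2 - i1) * dt
    # divide by the in-snow wave speed (c / sqrt(eps)) and by 2 (two-way)
    return two_way_delay * c / (2.0 * np.sqrt(eps_snow))
```

A real tracker must decide which interface each peak belongs to and handle side lobes, but the depth conversion from two-way delay is the same.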
Regionalization of precipitation characteristics in Montana using L-moments
Parrett, C.
1998-01-01
Dimensionless precipitation-frequency curves for estimating precipitation depths having small exceedance probabilities were developed for 2-, 6-, and 24-hour storm durations for three homogeneous regions in Montana. L-moment statistics were used to help define the homogeneous regions. The generalized extreme value distribution was used to construct the frequency curves for each duration within each region. The effective record length for each duration in each region was estimated using a graphical method and was found to range from 500 years for 6-hour duration data in Region 2 to 5,100 years for 24-hour duration data in Region 3. The temporal characteristics of storms were analyzed, and methods for estimating synthetic storm hyetographs were developed. Dimensionless depth-duration data were grouped by independent duration (2, 6, and 24 hours) and by region, and the beta distribution was fit to dimensionless depth data for various incremental time intervals. Ordinary least-squares regression was used to develop relations between dimensionless depths for a key, short duration - termed the kernel duration - and dimensionless depths for other durations. The regression relations were used, together with the probabilistic dimensionless depth data for the kernel duration, to calculate dimensionless depth-duration curves for exceedance probabilities from 0.1 to 0.9. Dimensionless storm hyetographs for each independent duration in each region were constructed for median value conditions based on an exceedance probability of 0.5.
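Fitting a beta distribution to dimensionless depth data, as described above, can be done several ways; the sketch below uses simple method-of-moments estimates for data on [0, 1] (the report does not state its fitting procedure, so this is one plausible approach, not necessarily the one used):

```python
import numpy as np

def beta_moments(x):
    """Method-of-moments estimates of beta(a, b) shape parameters.

    x : array of values on [0, 1], e.g. dimensionless incremental
        storm depths for one time interval.
    """
    m, v = x.mean(), x.var()
    # for a beta distribution: m = a/(a+b), v = ab/((a+b)^2 (a+b+1))
    k = m * (1.0 - m) / v - 1.0   # k = a + b
    return m * k, (1.0 - m) * k
```

Once shape parameters are in hand for each incremental interval, quantiles of the fitted beta give the dimensionless depths at chosen exceedance probabilities.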
NASA Astrophysics Data System (ADS)
Simeonov, J.; Holland, K. T.
2016-12-01
We investigated the fidelity of a hierarchy of inverse models that estimate river bathymetry and discharge using measurements of surface currents and water surface elevation. Our most comprehensive depth inversion was based on the Shiono and Knight (1991) model that considers the depth-averaged along-channel momentum balance between the downstream pressure gradient due to gravity, the bottom drag and the lateral stresses induced by turbulence. The discharge was determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The bottom friction coefficient was assumed to be known or determined by alternative means. We also considered simplifications of the comprehensive inversion model that exclude the lateral mixing term from the momentum balance and assessed the effect of neglecting this term on the depth and discharge estimates for idealized in-bank flow in symmetric trapezoidal channels with width/depth ratio of 40 and different side-wall slopes. For these simple gravity-friction models, we used two different bottom friction parameterizations - a constant Darcy-Weisbach local friction and a depth-dependent friction related to the local depth and a constant Manning (roughness) coefficient. Our results indicated that the Manning gravity-friction model provides accurate estimates of the depth and the discharge that are within 1% of the assumed values for channels with side-wall slopes between 1/2 and 1/17. On the other hand, the constant Darcy-Weisbach friction model underpredicted the true depth and discharge by 7% and 9%, respectively, for the channel with side-wall slope of 1/17. These idealized modeling results suggest that a depth-dependent parameterization of the bottom friction is important for accurate inversion of depth and discharge and that the lateral turbulent mixing is not important. 
We also tested the comprehensive and the simplified inversion models for the Kootenai River near Bonners Ferry (Idaho) using in situ and remote sensing measurements of surface currents and water surface elevation obtained during a 2010 field experiment.
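The Manning gravity-friction simplification described above makes the depth inversion a closed form for a wide channel. A minimal sketch, assuming hydraulic radius ≈ depth and a known roughness coefficient (this is the simplified model, not the full Shiono and Knight inversion; n = 0.03 is illustrative):

```python
def manning_depth(v, slope, n=0.03):
    """Invert Manning's equation v = (1/n) * h**(2/3) * slope**0.5 for h.

    v     : depth-averaged along-channel velocity, m/s
    slope : water-surface (energy) slope, dimensionless
    n     : Manning roughness coefficient (illustrative default)
    Assumes a wide channel, so hydraulic radius is approximated by depth h.
    """
    return (v * n / slope ** 0.5) ** 1.5
```

Because the friction is depth-dependent here, a measured surface-derived velocity and slope directly yield a local depth estimate, consistent with the finding above that depth-dependent friction parameterization matters for accurate inversion.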
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
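The weight-factor idea above can be reduced to a few lines of numerics: differentiate the magnetic (force-proportional) waveform twice, fit a scalar relating it to the measured acceleration, and scale the magnetic waveform by that factor. This is a minimal sketch of the principle with a least-squares scalar fit, not the paper's implementation:

```python
import numpy as np

def estimate_displacement(mag, accel, dt):
    """Estimate a compression-displacement waveform.

    mag   : magnetic-coil waveform, proportional to compression force
            (and hence, for a spring, to displacement)
    accel : accelerometer waveform measured simultaneously
    dt    : sample interval in seconds
    """
    # second time derivative of the magnetic waveform
    d2m = np.gradient(np.gradient(mag, dt), dt)
    # scalar least-squares weight factor relating d2m to acceleration
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)
    # estimated displacement (depth) waveform
    return w * mag
```

For a spring-coupled gauge the magnetic waveform is proportional to displacement, so its second derivative is proportional to acceleration, and the fitted factor recovers the unknown proportionality constant.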
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods by the sampling of surface flow velocity and existing low flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution and the entropic relation between mean velocity and maximum velocity. The methodology unfolds a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which the methodology is based, i.e., surface flow velocity and flow depth, might be potentially sensed by new sensors operating aboard an aircraft or satellite.
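The entropic relation between mean and maximum velocity invoked above is Chiu's classical result, which can be stated in a few lines (the M value in the usage note is a commonly quoted illustrative figure, not one from this study):

```python
import numpy as np

def entropy_mean_velocity(u_max, M):
    """Chiu's entropic mean/max velocity relation.

    u_mean / u_max = exp(M) / (exp(M) - 1) - 1 / M,
    where M is the entropic parameter of the cross-section.
    """
    phi = np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M
    return phi * u_max
```

For an often-cited value of M ≈ 2.1, the ratio is about 0.66, so a sensed maximum (e.g., surface) velocity converts directly to a mean velocity; coupling this with the entropy-derived flow-depth distribution yields discharge from surface observations alone, as the abstract describes.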
Metric Calibration of a Focused Plenoptic Camera Based on a 3D Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.
2013-01-01
Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.
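The basic apportionment step being compared, splitting acoustic total density among species using trawl catch composition within each depth stratum, can be sketched as follows. The numbers in the test are hypothetical, not the simulated Lakes Superior/Michigan/Ontario communities.

```python
import numpy as np

def apportion(total_density, catch_counts):
    """Apportion acoustic fish density to species per depth stratum.

    total_density: (n_strata,) acoustic-based total density per stratum.
    catch_counts:  (n_strata, n_species) midwater trawl catches.
    Returns an (n_strata, n_species) array of species-specific densities,
    scaling each stratum's total by the stratum's catch proportions.
    """
    comp = catch_counts / catch_counts.sum(axis=1, keepdims=True)
    return total_density[:, None] * comp
```

Apportioning within depth strata rather than lake-wide is exactly the "account for species-specific depth differences" recommendation above: the composition matrix differs by stratum.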
Burns, W. Matthew; Hayba, Daniel O.; Rowan, Elisabeth L.; Houseknecht, David W.
2007-01-01
The reconstruction of burial and thermal histories of partially exhumed basins requires an estimation of the amount of erosion that has occurred since the time of maximum burial. We have developed a method for estimating eroded thickness by using porosity-depth trends derived from borehole sonic logs of wells in the Colville Basin of northern Alaska. Porosity-depth functions defined from sonic-porosity logs in wells drilled in minimally eroded parts of the basin provide a baseline for comparison with the porosity-depth trends observed in other wells across the basin. Calculated porosities, based on porosity-depth functions, were fitted to the observed data in each well by varying the amount of section assumed to have been eroded from the top of the sedimentary column. The result is an estimate of denudation at the wellsite since the time of maximum sediment accumulation. Alternative methods of estimating exhumation include fission-track analysis and projection of trendlines through vitrinite-reflectance profiles. In the Colville Basin, the methodology described here provides results generally similar to those from fission-track analysis and vitrinite-reflectance profiles, but with greatly improved spatial resolution relative to the published fission-track data and with improved reliability relative to the vitrinite-reflectance data. In addition, the exhumation estimates derived from sonic-porosity logs are independent of the thermal evolution of the basin, allowing these estimates to be used as independent variables in thermal-history modeling.
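The fitting idea, shifting a baseline porosity-depth curve downward by a trial eroded thickness until it matches the observed well data, can be sketched with a grid search. An Athy-type exponential baseline is assumed here for illustration; the paper derives its baseline empirically from sonic logs in minimally eroded wells.

```python
import numpy as np

def estimate_erosion(z_obs, phi_obs, phi0=0.5, c=2000.0,
                     e_grid=np.arange(0.0, 3000.0, 10.0)):
    """Grid-search the eroded thickness E (same units as depth) that best
    maps observed porosities onto the assumed baseline
    phi(z) = phi0 * exp(-(z + E) / c), by least squares."""
    sse = [np.sum((phi_obs - phi0 * np.exp(-(z_obs + E) / c)) ** 2)
           for E in e_grid]
    return e_grid[int(np.argmin(sse))]
```

With synthetic data generated from the same baseline, the known eroded thickness is recovered exactly; with real sonic-porosity data the residual surface is flatter and the estimate carries the scatter of the log.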
Validation of Pooled Whole-Genome Re-Sequencing in Arabidopsis lyrata.
Fracassetti, Marco; Griffin, Philippa C; Willi, Yvonne
2015-01-01
Sequencing pooled DNA of multiple individuals from a population instead of sequencing individuals separately has become popular due to its cost-effectiveness and simple wet-lab protocol, although some criticism of this approach remains. Here we validated a protocol for pooled whole-genome re-sequencing (Pool-seq) of Arabidopsis lyrata libraries prepared with low amounts of DNA (1.6 ng per individual). The validation was based on comparing single nucleotide polymorphism (SNP) frequencies obtained by pooling with those obtained by individual-based Genotyping By Sequencing (GBS). Furthermore, we investigated the effect of sample number, sequencing depth per individual and variant caller on population SNP frequency estimates. For Pool-seq data, we compared frequency estimates from two SNP callers, VarScan and Snape; the former employs a frequentist SNP calling approach while the latter uses a Bayesian approach. Results revealed concordance correlation coefficients well above 0.8, confirming that Pool-seq is a valid method for acquiring population-level SNP frequency data. Higher accuracy was achieved by pooling more samples (25 compared to 14) and working with higher sequencing depth (4.1× per individual compared to 1.4× per individual), which increased the concordance correlation coefficient to 0.955. The Bayesian-based SNP caller produced somewhat higher concordance correlation coefficients, particularly at low sequencing depth. We recommend pooling at least 25 individuals combined with sequencing at a depth of 100× to produce satisfactory frequency estimates for common SNPs (minor allele frequency above 0.05).
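The concordance correlation coefficient used above is Lin's CCC, which penalizes both poor correlation and systematic offset between the two sets of frequency estimates. A minimal implementation:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two vectors of
    SNP frequency estimates (e.g. Pool-seq vs. individual-based GBS):
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) variances."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Identical estimates give CCC = 1; a constant bias between methods drags the coefficient below the Pearson correlation, which is the property that makes it a stricter validation metric here.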
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS, and Landsat. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lack any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: bed albedo, the water attenuation coefficient, and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Reference: Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
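The single-band Bouguer-Lambert-Beer inversion of Sneed and Hamilton recovers depth from reflectance given the three physical parameters named above, and the Monte Carlo uncertainty analysis amounts to propagating perturbations of those parameters through the same formula. The sketch below uses placeholder parameter values and uncertainties, not the calibrated per-wavelength ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def lake_depth(R, Ad=0.45, Rinf=0.05, g=0.8):
    """Water depth (m) from band reflectance R via the
    Bouguer-Lambert-Beer form z = [ln(Ad - Rinf) - ln(R - Rinf)] / g,
    with Ad = bed albedo, Rinf = optically-deep-water reflectance,
    g = effective two-way attenuation coefficient (values are placeholders)."""
    return (np.log(Ad - Rinf) - np.log(R - Rinf)) / g

def depth_uncertainty(R, n=2000):
    """Monte Carlo sketch: perturb the three physical parameters with
    assumed Gaussian errors and report mean and spread of the depth."""
    Ad = rng.normal(0.45, 0.02, n)
    Rinf = rng.normal(0.05, 0.005, n)
    g = rng.normal(0.8, 0.05, n)
    z = (np.log(Ad - Rinf) - np.log(R - Rinf)) / g
    return z.mean(), z.std()
```

When the observed reflectance equals the bed albedo the retrieved depth is zero, and darker water (reflectance approaching the optically deep value) maps to greater depth; the standard deviation returned by the Monte Carlo loop is the per-pixel depth uncertainty that feeds the volume uncertainty.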
Nondestructive estimation of depth of surface opening cracks in concrete beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arne, Kevin; In, Chiwon; Kurtis, Kimberly
Concrete is one of the most widely used construction materials, and thus assessment of damage in concrete structures is of the utmost importance from both safety and financial points of view. Of particular interest are surface opening cracks that extend through the concrete cover, as these can expose the steel reinforcement bars underneath and induce corrosion in them. This corrosion can lead to significant subsequent damage in concrete, such as cracking and delamination of the cover concrete, as well as rust staining on the surface. Concrete beams are designed and constructed in such a way as to provide crack depths up to around 13 cm. Two different types of measurements are made in situ to estimate the depths of real surface cracks (as opposed to saw-cut notches) after unloading: one based on the impact-echo method and the other based on the diffuse ultrasonic method. These measurements are compared to the crack depth visually observed on the sides of the beams. The advantages and disadvantages of each method are discussed.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness of the approach on face images captured by a light field camera in different poses.
Measurement and Estimation of Riverbed Scour in a Mountain River
NASA Astrophysics Data System (ADS)
Song, L. A.; Chan, H. C.; Chen, B. A.
2016-12-01
Mountain rivers in Taiwan are steep, with rapid flows. After a structure is installed in a mountain river, scour usually occurs around it because of the high energy gradient. Excessive scouring has been reported as one of the main causes of failure of river structures. The scouring disaster associated with floods can be reduced if riverbed variation can be properly evaluated from the flow conditions. This study measures riverbed scour using an improved "float-out device". Scouring and hydrodynamic data were simultaneously collected in the Mei River, Nantou County, in central Taiwan. Semi-empirical models proposed by previous researchers were used to estimate scour depths from the measured flow characteristics, and the differences between measured and estimated scour depths are discussed. Attempts were then made to improve the estimates by developing a semi-empirical model that predicts riverbed scour from the local field data. The goal is to set up a warning system for river structure safety based on flow conditions. Keywords: scour, model, float-out device
Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization
Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun
2015-01-01
A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data from the mechanical properties of a tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. These results indicate that the proposed method has potential as a useful screening and diagnostic method for breast cancer. PMID:25785306
Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters
NASA Technical Reports Server (NTRS)
Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.
2013-01-01
This paper discusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstrated focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter, providing good imagery at 14 to 30 meters of standoff.
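The core of such an autofocus step, converting an estimated target range into a translation of the focusing optic, can be sketched with the thin-lens equation. The focal length below is a hypothetical placeholder; the actual JPL radar optics are not modelled here.

```python
def focus_translation(target_range, f=0.4):
    """Translation of the focusing element (m), relative to its
    infinity-focus position, needed to focus at a finite standoff range.
    Thin-lens sketch: image distance i solves 1/f = 1/range + 1/i;
    at infinite range i = f, so the required translation is i - f.
    f is an assumed effective focal length, not the real system's."""
    image_dist = 1.0 / (1.0 / f - 1.0 / target_range)
    return image_dist - f
```

Closer targets require a larger translation, which is why the coarse range estimate from the reduced-bandwidth chirp is sufficient to drive the focus before the full-resolution image is formed.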
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
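The binned-count variant of the likelihood model described above can be sketched with a Poisson template match: each candidate DOI has an expected photoelectron count per time bin, and the DOI maximizing the Poisson log-likelihood of the observed bins is selected. The templates in the test are invented, not the LFS/LYSO phosphor-coated detector response.

```python
import numpy as np

def ml_doi(waveform_counts, templates):
    """Maximum-likelihood DOI from binned photoelectron counts.

    waveform_counts: observed counts per time bin.
    templates: dict mapping DOI (mm) -> expected counts per time bin.
    Returns the DOI whose template maximizes the Poisson log-likelihood
    sum(k * log(lambda) - lambda); the k! term is DOI-independent."""
    def loglik(k, lam):
        lam = np.maximum(lam, 1e-12)   # guard against log(0)
        return float(np.sum(k * np.log(lam) - lam))
    return max(templates, key=lambda d: loglik(waveform_counts, templates[d]))
</n```

The same likelihood could be extended with energy and interaction-time parameters, which is how the paper jointly estimates all three quantities from one waveform.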
Rock Cutting Depth Model Based on Kinetic Energy of Abrasive Waterjet
NASA Astrophysics Data System (ADS)
Oh, Tae-Min; Cho, Gye-Chun
2016-03-01
Abrasive waterjets are widely used in the fields of civil and mechanical engineering for cutting a great variety of hard materials, including rocks and metals. Cutting depth is an important index for estimating operating time and cost, but it is very difficult to predict because there are a number of influential variables (e.g., energy, geometry, material, and nozzle system parameters). In this study, the cutting depth is correlated to the maximum kinetic energy expressed in terms of energy (i.e., water pressure, water flow rate, abrasive feed rate, and traverse speed), geometry (i.e., standoff distance), material (i.e., α and β), and nozzle system parameters (i.e., nozzle size, shape, and jet diffusion level). The maximum kinetic energy cutting depth model is verified with experimental data obtained using one type of hard granite specimen for various parameters. The results show a unique curve for a specific rock type, in the form of a power function relating cutting depth to maximum kinetic energy. The cutting depth model developed here can be very useful for estimating the process time when cutting rock with an abrasive waterjet.
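Fitting a power function depth = a * E^b to (kinetic energy, cutting depth) pairs reduces to linear regression in log-log space. The data in the test are synthetic, not the granite measurements.

```python
import numpy as np

def fit_power_law(energy, depth):
    """Fit depth = a * energy**b by least squares on the logarithms.
    np.polyfit on (log E, log d) returns [b, log a]."""
    b, log_a = np.polyfit(np.log(energy), np.log(depth), 1)
    return np.exp(log_a), b
```

Once a and b are fitted for a rock type, the required traverse speed (and hence process time) for a target cutting depth follows by inverting the same power law.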
Nolan, B.T.; Campbell, D.L.; Senterfit, R.M.
1998-01-01
A geophysical survey was conducted to determine the depth of the base of the water-table aquifer in the southern part of Jackson Hole, Wyoming, USA. Audio-magnetotellurics (AMT) measurements at 77 sites in the study area yielded electrical-resistivity logs of the subsurface, and these were used to infer lithologic changes with depth. A 100-600 ohm-m geoelectric layer, designated the Jackson aquifer, was used to represent surficial saturated, unconsolidated deposits of Quaternary age. The median depth of the base of the Jackson aquifer is estimated to be 200 ft (61 m), based on 62 sites that had sufficient resistivity data. AMT-measured values were kriged to predict the depth to the base of the aquifer throughout the southern part of Jackson Hole. Contour maps of the kriging predictions indicate that the depth of the base of the Jackson aquifer is shallow in the central part of the study area near the East and West Gros Ventre Buttes, deeper in the west near the Teton fault system, and shallow at the southern edge of Jackson Hole. Predicted, contoured depths range from 100 ft (30 m) in the south, near the confluences of Spring Creek and Flat Creek with the Snake River, to 700 ft (210 m) in the west, near the town of Wilson, Wyoming.
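The interpolation step, kriging point depth estimates to a continuous surface, can be sketched with minimal ordinary kriging. The exponential covariance and its sill/range values below are assumptions for illustration, not a variogram fitted to the AMT data.

```python
import numpy as np

def ordinary_krige(xy, z, xy0, sill=1.0, rng_=3000.0):
    """Ordinary kriging of depth values z at sites xy (n, 2) to one
    prediction point xy0 (2,), with an assumed exponential covariance
    C(d) = sill * exp(-d / rng_). Solves the standard (n+1) system with
    a Lagrange multiplier enforcing weights that sum to one."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / rng_)
    n = len(z)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xy, xy)
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(xy, xy0[None, :])[:, 0]
    w = np.linalg.solve(A, rhs)[:n]
    return float(w @ z)
```

Ordinary kriging is an exact interpolator: predicting at a measurement site returns the measured depth, and contouring predictions on a grid yields maps like those described above.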
NASA Astrophysics Data System (ADS)
Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.
2017-06-01
Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
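The look-up-table step can be sketched as a least-squares match between the observed multiband radiance and precomputed simulations indexed by (optical depth, effective diameter), followed by the TIC1/TIC2 split at 30 µm. The table in the test is synthetic, standing in for the radiative-transfer LUT; a full optimal-estimation retrieval would also weight the residual by measurement and prior covariances.

```python
import numpy as np

def retrieve_cod(obs_radiance, lut):
    """Grid-search LUT retrieval sketch.

    obs_radiance: observed multiband radiance vector.
    lut: dict mapping (optical_depth, effective_diameter_um) ->
         simulated radiance vector for that state.
    Returns (optical_depth, effective_diameter, tic_class), where
    TIC1 means small crystals (diameter <= 30 um), TIC2 large ones."""
    keys = list(lut)
    costs = [np.sum((obs_radiance - lut[k]) ** 2) for k in keys]
    tau, deff = keys[int(np.argmin(costs))]
    tic_class = 1 if deff <= 30.0 else 2
    return tau, deff, tic_class
```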
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.
1989-05-01
A storm transposition approach is investigated as a possible tool for assessing the frequency of extreme precipitation depths, that is, depths with return periods much greater than 100 years. This paper focuses on estimation of the annual exceedance probability of extreme average precipitation depths over a catchment. The probabilistic storm transposition methodology is presented, and the several conceptual and methodological difficulties arising in this approach are identified. The method is implemented and partially evaluated by means of a semihypothetical example involving extreme midwestern storms and two hypothetical catchments (of 100 and 1000 mi² (~260 and 2600 km²)) located in central Iowa. The results point out the need for further research to fully explore the potential of this approach as a tool for assessing the probabilities of rare storms, and eventually floods, a necessary element of risk-based analysis and design of large hydraulic structures.
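The transposition idea, placing an observed storm's depth pattern at random locations over a meteorologically homogeneous region and recording how often the catchment-average depth exceeds a threshold, can be sketched with a Monte Carlo loop. The Gaussian storm pattern and all numbers below are illustrative only, not the midwestern storm data.

```python
import numpy as np

rng = np.random.default_rng(1)

def exceedance_prob(threshold, n=20000, region=100.0, catch=(50.0, 50.0),
                    peak=300.0, scale=15.0):
    """Monte Carlo storm-transposition sketch.

    Storm centres are drawn uniformly over a region x region square (km);
    the storm is an isotropic Gaussian depth pattern with centre depth
    `peak` (mm) and spatial scale `scale` (km). Returns the fraction of
    transpositions whose depth at the catchment exceeds `threshold`."""
    cx = rng.uniform(0.0, region, n)
    cy = rng.uniform(0.0, region, n)
    d2 = (cx - catch[0]) ** 2 + (cy - catch[1]) ** 2
    depth = peak * np.exp(-d2 / (2.0 * scale ** 2))
    return float(np.mean(depth > threshold))
```

Multiplying such conditional exceedance probabilities by the regional storm occurrence rate is what yields the annual exceedance probability the paper targets.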
Bultman, Mark W.; Page, William R.
2016-10-31
The upper Santa Cruz Basin is an important groundwater basin containing the regional aquifer for the city of Nogales, Arizona. This report provides data and interpretations of data aimed at better understanding the bedrock morphology and structure of the upper Santa Cruz Basin study area, which encompasses the Rio Rico and Nogales 1:24,000-scale U.S. Geological Survey quadrangles. Data used in this report include the Arizona Aeromagnetic and Gravity Maps and Data, referred to here as the 1996 Patagonia aeromagnetic survey, Bouguer gravity anomaly data, and conductivity-depth transforms (CDTs) from the 1998 Santa Cruz transient electromagnetic survey (whose data are included in appendixes 1 and 2 of this report). Analyses based on magnetic gradients worked well to identify the range-front faults along the Mt. Benedict horst block, the location of possibly fault-controlled canyons to the west of Mt. Benedict, the edges of buried lava flows, and numerous other concealed faults and contacts. Applying the horizontal gradient method to the 1996 Patagonia aeromagnetic survey data produced results that were most closely correlated with the observed geology. The 1996 Patagonia aeromagnetic survey was used to estimate depth to bedrock in the upper Santa Cruz Basin study area. Three different depth estimation methods were applied to the data: Euler deconvolution, horizontal gradient magnitude, and analytic signal. The final depth to bedrock map was produced by choosing the maximum depth from each of the three methods at a given location and combining all maximum depths. In locations of rocks with a known reversed natural remanent magnetic field, gravity-based depth estimates from Gettings and Houser (1997) were used. The depth to bedrock map was supported by modeling aeromagnetic anomaly data along six profiles.
These cross-sectional models demonstrated that, by using the depth to bedrock map generated in this study, known and concealed faults, measured and estimated magnetic susceptibilities of rocks found in the study area, and estimated natural remanent magnetic intensities and directions, reasonable geologic models can be built. This indicates that the depth to bedrock map is reasonable and geologically possible. Finally, CDTs derived from the 1998 Santa Cruz Basin transient electromagnetic survey were used to help identify basin structure and some physical properties of the basin fill in the study area. The CDTs also helped to confirm depth to bedrock estimates in the Santa Cruz Basin, in particular a region of elevated bedrock in the area of Potrero Canyon and a deep basin at the location of the Arizona State Highway 82 microbasin. The CDTs identified many concealed faults in the study area and possibly indicate deep water-saturated clay-rich sediments in the west-central portion of the study area. These sediments grade to more sand-rich saturated sediments to the south, with relatively thick, possibly unsaturated, sediments at the surface. The CDTs may also indicate deep saturated clay-rich sediments in the Highway 82 microbasin and in the Mount Benedict horst block from Proto Canyon south to the international border.
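The combination rule for the final depth-to-bedrock map, taking the per-location maximum of the three magnetic depth estimates and substituting gravity-based depths where remanent magnetization is reversed, is simple enough to state directly in code (the arrays in the test are invented grids, not the survey data):

```python
import numpy as np

def combine_depths(euler, hgm, analytic, gravity=None, reversed_mask=None):
    """Per-location maximum of the Euler-deconvolution, horizontal-
    gradient-magnitude, and analytic-signal depth grids; where
    reversed_mask is True, substitute the gravity-based estimate."""
    depth = np.nanmax(np.stack([euler, hgm, analytic]), axis=0)
    if gravity is not None and reversed_mask is not None:
        depth = np.where(reversed_mask, gravity, depth)
    return depth
```

Taking the maximum is conservative for basin-fill thickness: each individual method tends to underestimate depth where its assumptions fail.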
Bouligand, C.; Glen, J.M.G.; Blakely, R.J.
2009-01-01
We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. 
Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.
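A simpler spectral estimator in the same family, the classical centroid method for random (non-fractal) magnetization, illustrates how depth to the bottom of magnetic sources falls out of a radial power spectrum. Note this is a deliberately simplified stand-in, not the fractal three-parameter model of the paper: depth to top comes from the high-wavenumber slope, the centroid depth from the low-wavenumber k-scaled slope, and the bottom depth from z_b = 2*z_0 - z_t.

```python
import numpy as np

def curie_depth(k_lo, P_lo, k_hi, P_hi):
    """Centroid-method estimate of depth to the bottom of magnetic
    sources from two wavenumber bands of a radial power spectrum P(k).
    k in rad/km, depths in km. Assumes uncorrelated magnetization."""
    # depth to top: slope of ln(amplitude spectrum) at high wavenumbers
    z_t = -np.polyfit(k_hi, np.log(np.sqrt(P_hi)), 1)[0]
    # centroid depth: slope of ln(amplitude / k) at low wavenumbers
    z_0 = -np.polyfit(k_lo, np.log(np.sqrt(P_lo) / k_lo), 1)[0]
    return 2.0 * z_0 - z_t
```

On a synthetic spectrum generated from a layer between 2 and 10 km depth the estimator recovers the bottom depth to within a few percent; the fractal parameter in the paper's model exists precisely because real crustal magnetization violates the uncorrelated assumption and biases such slopes.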
A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light
Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning
2017-01-01
Depth information has been used in many fields because of its low cost and easy availability since the Microsoft Kinect was released. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect and two infrared cameras located on either side of the laser projector, to obtain higher spatial resolution depth information. We apply a block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks; however, smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method obtains higher spatial resolution depth without loss of range-image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform; the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
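The binocular mode's core operation, sum-of-absolute-differences (SAD) block matching along a rectified row, can be sketched for a single pixel as follows (the monocular speckle-pattern mode is omitted, and the synthetic rows in the test are not the system's infrared images):

```python
import numpy as np

def block_match_row(left, right, x, block=5, max_disp=20):
    """SAD block matching for pixel x of a rectified row pair: slide a
    1-D block from the left image across candidate disparities in the
    right image and return the disparity with minimum SAD cost."""
    h = block // 2
    ref = left[x - h:x + h + 1]
    costs = [np.abs(ref - right[x - d - h:x - d + h + 1]).sum()
             for d in range(max_disp)]
    return int(np.argmin(costs))
```

Shrinking `block` raises spatial resolution but flattens the cost curve, which is exactly the precision loss the paper's combined binocular/monocular scheme compensates for.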
NASA Astrophysics Data System (ADS)
Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina
2018-01-01
Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and, indirectly, water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternative method for accurately estimating the composition of fragmented assemblages: the cumulated area per morphotype method, i.e., the sum of the areas of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on the counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, the counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
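The cumulated area per morphotype method reduces to a simple sum: the percentage of a morphotype is its summed test/fragment area over the total area. A toy sketch with made-up data, alongside the traditional counting percentages it is compared against:

```python
import numpy as np

# Hypothetical sample: each specimen/fragment has a morphotype label and a
# measured test (shell) area in mm^2. Values are made up for illustration.
labels = np.array(["tubular", "tubular", "globular", "flat", "tubular", "flat"])
areas  = np.array([0.8, 1.2, 0.5, 0.3, 1.0, 0.2])

def morphotype_percentages(labels, areas):
    """Cumulated-area method: % of each morphotype by summed area."""
    return {m: 100.0 * areas[labels == m].sum() / areas.sum()
            for m in np.unique(labels)}

def counting_percentages(labels):
    """Traditional counting method: % of each morphotype by specimen count."""
    u, c = np.unique(labels, return_counts=True)
    return dict(zip(u, 100.0 * c / c.sum()))

print(morphotype_percentages(labels, areas))  # tubular 75%, others 12.5% each
print(counting_percentages(labels))           # tubular 50%, flat ~33%, globular ~17%
```

The divergence between the two dictionaries for the same sample shows how fragment size can shift area-based percentages away from count-based ones.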
NASA Astrophysics Data System (ADS)
Smyth, Robyn L.; Akan, Cigdem; Tejada-Martínez, Andrés; Neale, Patrick J.
2017-07-01
Southern Ocean phytoplankton assemblages acclimated to low-light environments that result from deep mixing are often sensitive to ultraviolet and high photosynthetically available radiation. In such assemblages, exposures to inhibitory irradiance near the surface result in a loss of photosynthetic capacity that is not rapidly recovered and can depress photosynthesis after transport below the depths penetrated by inhibitory irradiance. We used a coupled biophysical modeling approach to quantify the reduction in primary productivity due to photoinhibition, based upon experiments and observations made during the spring bloom in the Ross Sea Polynya (RSP). Large eddy simulation (LES) was used to generate depth trajectories representative of observed Langmuir circulation, which were passed through an underwater light field to yield time series of spectral irradiance representative of what phytoplankton would have experienced in situ. These were used to drive an assemblage-specific photosynthesis-irradiance model with inhibition determined from a biological weighting function and a repair rate estimated from shipboard experiments on the local assemblage. We estimate that the daily depth-integrated productivity was 230 mmol C m-2. This estimate includes a 6-7% reduction in daily depth-integrated productivity relative to potential productivity (i.e., with effects of photoinhibition excluded). When trajectory depths were fixed (no vertical transport), the reduction in productivity was nearly double. Relative to LES estimates, there was slightly less depth-integrated photoinhibition with random walk trajectories and nearly twice as much with circular rotations. This suggests it is important to account for turbulence when simulating the effects of vertical mixing on photoinhibition, due to the kinetics of photodamage and repair.
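The photosynthesis-irradiance response with photoinhibition can be illustrated with the classic Platt-type formulation; the study's actual model additionally uses a spectral biological weighting function and explicit repair kinetics, which are omitted in this sketch, and all parameter values are illustrative.

```python
import math

def pi_curve(E, Ps=2.0, alpha=0.05, beta=0.005):
    """Platt-type photosynthesis-irradiance curve with photoinhibition:
    P = Ps * (1 - exp(-alpha*E/Ps)) * exp(-beta*E/Ps).
    E is irradiance; Ps, alpha (light-limited slope) and beta (inhibition)
    are illustrative placeholders, not the RSP assemblage parameters."""
    return Ps * (1.0 - math.exp(-alpha * E / Ps)) * math.exp(-beta * E / Ps)

# Production rises with irradiance, then declines as it becomes inhibitory:
print(round(pi_curve(20), 3), round(pi_curve(200), 3), round(pi_curve(2000), 3))
# 0.749 1.205 0.013
```

Driving such a curve with the fluctuating irradiance along an LES-derived depth trajectory, rather than a fixed depth, is what produces the reduced photoinhibition reported above.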
Shipboard Acoustic Current Profiling during the Coastal Ocean Dynamics Experiment,
1985-05-01
average profile based on the bottom depth estimated from the ship's position in the CODE region. An efficient computer routine was developed for... forward and portward components of V, at constant z, the depth in ship coordinates (Chap. 2). The data come from 1-minute
Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey
2012-01-01
The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km2 resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km2 scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R2 = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R2 of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.
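The R2 and RMSE comparisons reported above follow the standard definitions; a small sketch with made-up SWE values:

```python
import numpy as np

def r2_and_rmse(measured, modeled):
    """Coefficient of determination (R2) and RMSE, as used to compare
    SNODAS output with snow-survey measurements."""
    measured = np.asarray(measured, float)
    modeled = np.asarray(modeled, float)
    resid = measured - modeled
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))

# Illustrative SWE values (cm); not the study's data.
meas = [10., 22., 35., 41., 55., 63.]
model = [12., 20., 30., 45., 50., 66.]
r2, rmse = r2_and_rmse(meas, model)
print(round(r2, 3), round(rmse, 2))  # 0.958 3.72
```

The alpine-area adjustment in the study amounts to correcting `model` for wind redistribution before recomputing these two statistics.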
Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P
2018-01-01
Advancements in medical research and intelligent modeling techniques have led to developments in anaesthesia management. The present study aims to estimate the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive effect of anaesthetic drugs is the electroencephalogram. The information available in electroencephalogram signals during anaesthesia is extracted as relative wave energy features. A discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy quantifies the relative importance of the different electroencephalogram frequency bands associated with the anaesthetic phases awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between the awake, light anaesthesia, moderate anaesthesia and deep anaesthesia states. A novel depth of anaesthesia index is generated by an adaptive neuro-fuzzy inference system with fuzzy c-means clustering, which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
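Relative wave energy is the fraction of total signal energy falling in each wavelet sub-band. A minimal sketch using a four-level Haar decomposition (the paper does not necessarily use the Haar wavelet) on a synthetic signal:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform."""
    s = np.asarray(signal, float)
    if len(s) % 2:
        s = s[:-1]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def relative_wave_energy(signal, levels=4):
    """Relative energy of each sub-band after a 4-level decomposition:
    four detail bands plus the final approximation band."""
    energies = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))  # final approximation band
    e = np.array(energies)
    return e / e.sum()               # fractions summing to 1

# Synthetic EEG-like signal: slow 8 Hz component plus fast 60 Hz component.
t = np.linspace(0, 1, 512, endpoint=False)
eeg_like = np.sin(2 * np.pi * 8 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
rwe = relative_wave_energy(eeg_like)
print(len(rwe), round(float(rwe.sum()), 6))  # 5 1.0
```

The five resulting fractions are the kind of feature vector the fuzzy c-means / ANFIS stage would consume when forming the depth of anaesthesia index.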
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2016-12-01
We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
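The uncertainty-driven placement step can be sketched as a greedy loop: fit a Gaussian process to the current sensor set, evaluate the posterior variance at candidate sites, and add the highest-variance candidate. This is a stand-in sketch (RBF kernel, random candidate sites); the paper's GMM initialization and real terrain covariates (elevation, slope, aspect) are omitted.

```python
import numpy as np

def rbf_kernel(a, b, length=0.2, var=1.0):
    """Squared-exponential covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_variance(train_x, cand_x, noise=1e-4):
    """Posterior predictive variance of a zero-mean GP at candidate sites."""
    K = rbf_kernel(train_x, train_x) + noise * np.eye(len(train_x))
    Ks = rbf_kernel(cand_x, train_x)
    Kss = rbf_kernel(cand_x, cand_x)
    sol = np.linalg.solve(K, Ks.T)
    return np.diag(Kss) - np.sum(Ks * sol.T, axis=1)

# Greedy placement: repeatedly add the candidate with maximum variance.
rng = np.random.default_rng(2)
candidates = rng.uniform(size=(200, 2))   # normalized covariate space
sensors = candidates[:1]
for _ in range(4):
    var = gp_variance(sensors, candidates)
    sensors = np.vstack([sensors, candidates[np.argmax(var)]])
print(sensors.shape)  # (5, 2)
```

Each added sensor collapses the variance in its neighborhood, so the loop naturally spreads the network across the covariate space, mirroring the uncertainty-raster strategy described above.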
Wang, Min Zheng; Zhou, Guang Sheng
2016-06-01
Soil moisture is an important component of the soil-plant-atmosphere continuum (SPAC). It is a key factor determining the water status of terrestrial ecosystems and the main source of water supply for crops. In order to estimate soil moisture at different soil depths at the station scale, a soil moisture estimation model based on the energy balance equation and the water deficit index (WDI) was established in terms of remote sensing data (the normalized difference vegetation index and surface temperature) and air temperature. The model was validated against data from a drought experiment on summer maize (Zea mays) under different irrigation treatments, carried out during 2014 at the Gucheng eco-agrometeorological experimental station of the China Meteorological Administration. The results indicated that the model was able to estimate soil relative humidity at different soil depths in the summer maize field, and that the hypothesis that the evapotranspiration deficit ratio (i.e., WDI) depends linearly on soil relative humidity is reasonable. The estimation accuracy was highest for 0-10 cm surface soil moisture (R2 = 0.90). The relative mean absolute errors between estimated and measured soil relative humidity in deeper soil layers (down to 50 cm) were less than 15% and the RMSEs were less than 20%. The research could provide a reference for drought monitoring and irrigation management.
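The working hypothesis above, that the evapotranspiration deficit ratio (WDI) is linearly related to soil relative humidity, can be used in reverse to estimate moisture from a remotely sensed WDI. The coefficients below are illustrative placeholders, not the fitted Gucheng values:

```python
import numpy as np

def rh_from_wdi(wdi, a=95.0, b=80.0):
    """Invert the assumed linear relation RH ≈ a - b * WDI (RH in %).
    a and b are hypothetical; in practice they would be fitted per
    soil layer against measured soil relative humidity."""
    return a - b * np.asarray(wdi, float)

# WDI of 0 means well-watered, 1 means maximal water stress.
wdi_obs = np.array([0.1, 0.3, 0.5, 0.8])
print(rh_from_wdi(wdi_obs))  # [87. 71. 55. 31.]
```

With layer-specific coefficients, the same one-line inversion yields the depth-resolved soil relative humidity estimates evaluated in the study.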
Geophysical mapping of palsa peatland permafrost
NASA Astrophysics Data System (ADS)
Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.
2014-10-01
Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer the possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing the spatial distribution of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a simple thought experiment for the site considered here, we estimated that the thickest permafrost could thaw out completely within the next two centuries. There is thus a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.
Geophysical mapping of palsa peatland permafrost
NASA Astrophysics Data System (ADS)
Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.
2015-03-01
Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, which is indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing spatial distributions of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a back-of-the-envelope calculation for the site considered here, we estimated that the permafrost could thaw completely within the next three centuries. There is thus a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.
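For the ground-penetrating-radar part of such surveys, reflector depth follows directly from the two-way travel time and an assumed wave velocity. A minimal sketch with illustrative velocity values (not the site-specific calibration):

```python
def gpr_depth(twt_ns, v_m_per_ns):
    """Reflector depth (m) from GPR two-way travel time: d = v * t / 2.
    twt_ns is the two-way travel time in ns; v_m_per_ns is the assumed
    radar-wave velocity in m/ns (illustrative, not a calibrated value)."""
    return v_m_per_ns * twt_ns / 2.0

# A reflector at 25 ns with v = 0.04 m/ns (wet peat) lies at ~0.5 m,
# comparable to the seasonal frost table depths reported above.
print(gpr_depth(25.0, 0.04))  # 0.5
```

Velocity is the main uncertainty here: frozen and thawed peat differ markedly, which is one reason combining GPR with electrical resistivity tomography adds value.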
RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information
Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun
2016-01-01
In studies of the SLAM problem using an RGB-D camera, depth information and visual information, as the two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimate. In this paper, a new RGB-D camera SLAM method is proposed, based on extended bundle adjustment with integrated 2D and 3D information built on a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method performs notably better than the traditional method, demonstrating its effectiveness in improving localization accuracy. PMID:27529256
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
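An attenuation model of this kind typically assumes exponential count-rate decay with burial depth, so depth can be recovered by inverting the exponential. The coefficients below are illustrative placeholders, not the paper's calibrated values:

```python
import math

def depth_from_count_rate(c_obs, c0=100.0, mu=0.25):
    """Burial depth (cm) from an observed count rate (cps), assuming
    C(d) = C0 * exp(-mu * d), so d = ln(C0 / C) / mu.
    c0 (surface count rate) and mu (effective attenuation coefficient
    of the medium) are hypothetical, not the calibrated values."""
    return math.log(c0 / c_obs) / mu

# With these placeholder coefficients, 14 cps maps to ~7.9 cm of sand.
print(round(depth_from_count_rate(14.0), 1))  # 7.9
```

The practical limit of such a method is where `c_obs` falls to the background level, which is why a more sensitive detector (CZT here) extends the maximum detectable depth.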
Hemsley, Victoria S; Smyth, Timothy J; Martin, Adrian P; Frajka-Williams, Eleanor; Thompson, Andrew F; Damerell, Gillian; Painter, Stuart C
2015-10-06
An autonomous underwater vehicle (Seaglider) has been used to estimate marine primary production (PP) using a combination of irradiance and fluorescence vertical profiles. This method provides estimates for depth-resolved and temporally evolving PP on fine spatial scales in the absence of ship-based calibrations. We describe techniques to correct for known issues associated with long autonomous deployments such as sensor calibration drift and fluorescence quenching. Comparisons were made between the Seaglider, stable isotope (13C), and satellite estimates of PP. The Seaglider-based PP estimates were comparable to both satellite estimates and stable isotope measurements.
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km2) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, at any location in the study area for storm durations from 1 to 24 hours and recurrence intervals up to 100 years. The reliability of the proposed RDDF equations is the main concern of the study and is assessed at two levels. The first level considers the gauged sites and compares design storm estimates obtained with the RDDF equations with at-site estimates based on the observed annual maximum series of rainfall depth, and with design storm estimates from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level assesses the reliability of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location in the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent a practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
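A depth-duration-frequency relation of the simple form h(d, T) = a(T) · d^n can be evaluated directly once its coefficients are known; here a(T) is made to grow with the Gumbel reduced variate. All coefficients are illustrative, not the regional estimates derived in the study:

```python
import numpy as np

def design_storm_depth(d_hours, T_years, a=20.0, n=0.35, c=0.25):
    """Design storm depth h(d, T) = a * (1 + c * y_T) * d**n  (mm),
    where y_T = -ln(-ln(1 - 1/T)) is the Gumbel reduced variate for
    return period T. Coefficients a, n, c are illustrative placeholders."""
    y_t = -np.log(-np.log(1.0 - 1.0 / T_years))
    return a * (1.0 + c * y_t) * d_hours ** n

# Depths for a 1 h and a 24 h storm at a 100-year return period:
print(round(design_storm_depth(1, 100), 1),
      round(design_storm_depth(24, 100), 1))  # 43.0 130.8
```

In a regional scheme of the RDDF type, the coefficients would themselves be mapped over the region so the formula can be evaluated at ungauged locations.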
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
Magnetic Basement Depth Inversion in the Space Domain
NASA Astrophysics Data System (ADS)
Nunes, Tiago Mane; Barbosa, Valéria Cristina F.; Silva, João Batista C.
2008-10-01
We present a total-field anomaly inversion method to determine both the basement relief and the magnetization direction (inclination and declination) of a 2D sedimentary basin, presuming negligible sediment magnetization. Our method assumes that the magnetization intensity contrast is constant and known. We use a nonspectral approach based on approximating the vertical cross section of the sedimentary basin by a polygon whose uppermost vertices are forced to coincide with the basin outcrop, which is presumed known. For fixed values of the x coordinates, our method estimates the z coordinates of the unknown polygon vertices. To obtain the magnetization direction we assume that, besides the total-field anomaly, information about the basement outcrops at the basin borders and the basement depths at a few points is available. To obtain stable depth-to-basement estimates we impose overall smoothness and positivity constraints on the parameter estimates. Tests on synthetic data showed that the simultaneous estimation of the irregular basement relief and the magnetization direction yields good estimates of the relief, despite a mild instability in the magnetization direction. The inversion of aeromagnetic data from the onshore Almada Basin, Brazil, revealed a shallow basin with an eastward-dipping basement.
Siberia snow depth climatology derived from SSM/I data using a combined dynamic and static algorithm
Grippa, M.; Mognard, N.; Le, Toan T.; Josberger, E.G.
2004-01-01
One of the major challenges in determining snow depth (SD) from passive microwave measurements is taking into account the spatiotemporal variations of snow grain size. Static algorithms based on a constant snow grain size cannot provide accurate estimates of snowpack thickness, particularly over large regions where the snowpack is subject to large spatial temperature variations. A recent dynamic algorithm that accounts for the dependence of microwave scattering on snow grain size has been developed to estimate snow depth from the Special Sensor Microwave/Imager (SSM/I) over the Northern Great Plains (NGP) in the US. In this paper, we develop a combined dynamic and static algorithm to estimate snow depth from 13 years of SSM/I observations over Central Siberia. This region is characterised by extremely cold surface air temperatures and by the presence of permafrost, which significantly affects the ground temperature. The dynamic algorithm is implemented to take these effects into account, and it yields accurate snow depths early in the winter, when thin snowpacks combine with cold air temperatures to generate rapid crystal growth. However, it is not applicable later in the winter, when grain size growth slows. Combining the dynamic algorithm with a static algorithm that has a temporally constant but spatially varying coefficient, we obtain reasonable snow depth estimates throughout the entire snow season. Validation is carried out by comparing monthly averages of the satellite snow depths to monthly climatological data. We show that the locations of the snow depth maxima and minima are improved when applying the combined algorithm, since its dynamic portion explicitly incorporates the thermal gradient through the snowpack. The results obtained are presented and evaluated for five different vegetation zones of Central Siberia. Comparison with in situ measurements is also shown and discussed. © 2004 Elsevier Inc. All rights reserved.
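Static algorithms of this family relate snow depth linearly to the 19-37 GHz brightness-temperature spectral difference. The sketch below uses the classic constant coefficient of 1.59 cm/K as an illustration; the combined algorithm described above instead uses a spatially varying coefficient initialized from the dynamic early-winter estimates:

```python
def static_snow_depth(tb19h, tb37h, coeff=1.59):
    """Static spectral-difference snow depth (cm): SD = c * (Tb19H - Tb37H).
    The classic constant c = 1.59 cm/K is shown for illustration; the
    paper's combined scheme fits c per location from dynamic estimates."""
    return max(0.0, coeff * (tb19h - tb37h))

# A 25 K spectral difference maps to ~40 cm of snow with the classic c.
print(static_snow_depth(245.0, 220.0))  # 39.75
```

The constant-coefficient form fails where grain size departs from the assumed value, which is exactly the situation the dynamic early-winter algorithm is designed to correct.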
NASA Astrophysics Data System (ADS)
Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried
2017-09-01
Understanding the 3D structure of the environment is advantageous for many tasks in the fields of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised-learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Time-series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used to estimate the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, to estimate source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes
Sampaio, Renato Coral; Vargas, José A. R.
2018-01-01
The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, which allows weld bead geometry estimation with open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract features from the infrared images, a laser profilometer implemented to measure the bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut in the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments. PMID:29570698
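The estimator core is a multilayer perceptron mapping fused inputs (a thermographic feature, welding current, welding voltage) to bead depth and width. A forward-pass sketch with random placeholder weights, standing in for the trained FPGA-deployed network:

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, linear output. The weights
    below are random placeholders, not the trained estimator."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# 3 inputs (thermal feature, normalized current, normalized voltage)
# -> 8 hidden units -> 2 outputs (bead depth, bead width).
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
x = np.array([[0.6, 180.0 / 300.0, 22.0 / 40.0]])  # illustrative sample
print(mlp_forward(x, w1, b1, w2, b2).shape)  # (1, 2)
```

Because the forward pass is just two matrix multiplications and a tanh, it maps cleanly onto FPGA fixed-point pipelines for the real-time operation described above.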
Estimation of effective soil hydraulic properties at field scale via ground albedo neutron sensing
NASA Astrophysics Data System (ADS)
Rivera Villarreyes, C. A.; Baroni, G.; Oswald, S. E.
2012-04-01
Upscaling of soil hydraulic parameters is a big challenge in hydrological research, especially in model applications of water and solute transport processes. In this context, numerous attempts have been made to optimize soil hydraulic properties using observations of state variables such as soil moisture. However, in most cases the observations are limited to the point scale and then transferred to the model scale. In this way, inherent small-scale soil heterogeneities and the non-linearity of dominant processes introduce sources of error that can produce significant misinterpretation of hydrological scenarios and unrealistic predictions. On the other hand, remotely sensed soil moisture over large areas is a promising approach for deriving effective soil hydraulic properties over its observation footprint, but it is still limited to the soil surface. In this study we present a new methodology to derive soil moisture at the intermediate scale between point-scale observations and remotely sensed estimates. The data are then used for the estimation of effective soil hydraulic parameters. In particular, ground albedo neutron sensing (GANS) was used to derive, non-invasively, soil water content in a footprint of ca. 600 m diameter and a depth of a few decimeters. This approach is based on the crucial role of hydrogen, compared to other landscape materials, as a neutron moderator. As the natural neutron flux measured aboveground depends on soil water content, so does the vertical footprint of the GANS method, i.e. its penetration depth. Firstly, this study was designed to evaluate the dynamics of the GANS vertical footprint and to derive a mathematical model for its prediction. To test GANS soil moisture and its penetration depth, the method was accompanied by other soil moisture measurements (FDR) located at 5, 20 and 40 cm depths over the GANS horizontal footprint in a sunflower field (Brandenburg, Germany).
Secondly, a HYDRUS-1D model was set up with monitored values of crop height and meteorological variables as input during a four-month period. Parameter estimation (PEST) software was coupled to HYDRUS-1D in order to calibrate soil hydraulic properties based on soil water content data. Thirdly, effective soil hydraulic properties were derived from GANS soil moisture. Our observations show the potential of GANS to compensate for the lack of information at the intermediate scale, both for soil water content estimation and for effective soil properties. Despite the very different measurement volumes, GANS-derived soil water content compared well quantitatively with the FDR measurements at several depths. For one-hour estimates, the root mean square error was 0.019, 0.029 and 0.036 m3/m3 for the 5 cm, 20 cm and 40 cm depths, respectively. In the context of soil hydraulic properties, this first application of the GANS method succeeded, and its estimates were comparable to those derived by other approaches.
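The reported accuracy figures are root-mean-square errors between GANS-derived and FDR-measured water contents. A minimal sketch of that comparison follows; the sample values are hypothetical, not the field data:

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between paired estimates and reference values."""
    assert len(estimates) == len(references)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references))
                     / len(estimates))

# Hypothetical hourly soil-moisture pairs (m3/m3): GANS vs. FDR at one depth
gans = [0.21, 0.23, 0.25, 0.24]
fdr = [0.20, 0.25, 0.24, 0.26]
err = rmse(gans, fdr)
```

In the study this statistic would be computed per sensor depth (5, 20 and 40 cm) over the four-month record.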
NASA Technical Reports Server (NTRS)
Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.;
2014-01-01
We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
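The abstract does not spell out how spectral AOD sets are filled in between retrieval wavelengths; a standard assumption for such interpolation is the Ångström power law, AOD(λ) = AOD(λ₀)(λ/λ₀)^(−α). A sketch under that assumption, with hypothetical retrieval values:

```python
import math

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent alpha from AOD retrievals at two wavelengths."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

def aod_at(lam, aod_ref, lam_ref, alpha):
    """Extrapolate AOD to wavelength lam via the Angstrom power law."""
    return aod_ref * (lam / lam_ref) ** (-alpha)

# Hypothetical MODIS-like retrievals at 470 nm and 660 nm
alpha = angstrom_exponent(0.30, 470.0, 0.20, 660.0)
aod_550 = aod_at(550.0, 0.30, 470.0, alpha)
```

This is only an illustration of the spectral-extension step, not the authors' multi-sensor combination algorithm.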
Depth interval estimates from motion parallax and binocular disparity beyond interaction space.
Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G
2011-01-01
Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector
2018-01-01
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
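An attenuation model of this kind rests on exponential gamma attenuation in the overburden, I = I₀ e^(−μd), so the burial depth follows as d = ln(I₀/I)/μ. A hedged sketch; the effective attenuation coefficient and count rates below are hypothetical, not the paper's calibration:

```python
import math

def depth_from_count_rate(rate_surface, rate_buried, mu_eff):
    """Estimate burial depth (cm) from the attenuated count rate,
    assuming I = I0 * exp(-mu_eff * d) with an effective linear
    attenuation coefficient mu_eff (1/cm) for the overburden."""
    return math.log(rate_surface / rate_buried) / mu_eff

# Hypothetical values: 100 cps unattenuated, 14 cps measured,
# mu_eff = 0.11 /cm assumed for 662 keV gammas in sand
d = depth_from_count_rate(100.0, 14.0, 0.11)
```

With these assumed numbers the estimate lands near 18 cm, the order of magnitude reported for the CZT measurements.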
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Yamamoto, Tomonori
2017-10-01
Bathymetry in shallow water, especially water shallower than 15 m, is important for environmental monitoring and national defense. Because the depth of shallow water changes with sediment deposition and ocean waves, periodic monitoring of the shore area is needed. Satellite images are well suited to wide-area, repeated monitoring at sea. Sea-bottom terrain models derived from remote sensing data have been developed; these methods are based on the radiative transfer model of solar irradiance as affected by the atmosphere, the water column, and the sea bottom. We applied this general method of sea-depth extraction to WorldView-2 satellite imagery, which has very fine spatial resolution (50 cm/pix) and eight bands at visible to near-infrared wavelengths. From high-spatial-resolution satellite images there is the possibility of recovering a detailed terrain model of coral reefs and rock areas, which offers important information for amphibious landing. In addition, the WorldView-2 sensor has a band near the ultraviolet wavelengths that is transmitted through water. On the other hand, a previous study showed that the estimation error from satellite imagery is related to sea-bottom materials such as sand, coral reef, sea algae, and rocks. Therefore, in this study, we focused on sea-bottom materials and tried to improve the depth estimation accuracy. First, we classified the sea-bottom materials by the SVM method, using depth data acquired by multi-beam sonar as supervised data. Then correction values in the depth estimation equation were calculated by applying the classification results. As a result, the classification accuracy of sea-bottom materials was 93%, and the depth estimation error using the correction based on the classification result was within 1.2 m.
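The paper's exact depth-estimation equation is not given in the abstract; a common linearization of the radiative-transfer approach to multispectral bathymetry is the Stumpf band-ratio model, sketched below. The coefficients, which the study calibrates per bottom class against multi-beam sonar depths, are hypothetical here:

```python
import math

def stumpf_depth(r_blue, r_green, m1, m0, n=1000.0):
    """Stumpf-style band-ratio depth: z = m1 * ln(n*Rb)/ln(n*Rg) - m0.
    m1, m0 are calibrated against known depths (e.g. multibeam sonar);
    the constant n keeps both logarithms positive."""
    return m1 * math.log(n * r_blue) / math.log(n * r_green) - m0

# Hypothetical calibration and water-leaving reflectances
z = stumpf_depth(r_blue=0.12, r_green=0.10, m1=60.0, m0=55.0)
```

Applying different (m1, m0) pairs per SVM bottom class is one way the described correction could enter this kind of model.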
NASA Astrophysics Data System (ADS)
Takeda, T.; Yano, T. E.; Shiomi, K.
2013-12-01
Highly developed active-fault evaluation is necessary particularly in the Kanto metropolitan area, where multiple major active fault zones exist. The cutoff depth of active faults is one of the important parameters, since it is a good indicator for defining fault dimensions and hence the maximum expected magnitude. The depth is normally estimated from microseismicity, thermal structure, and the depths of the Curie point and the Conrad discontinuity. For instance, Omuralieva et al. (2012) estimated cutoff depths for the whole of Japan by creating a 3-D relocated hypocenter catalog. However, its spatial resolution could be insufficient for robust active-fault evaluation, since a precision within 15 km, comparable to the minimum evaluated fault size, is preferred. Therefore the spatial resolution of the earthquake catalog used to estimate the cutoff depth must be finer than 15 km. This year we launched the Japan Unified hIgh-resolution relocated Catalog for Earthquakes (JUICE) Project (Yano et al., this fall meeting), whose objective is to create a precise and reliable earthquake catalog for all of Japan, using waveform cross-correlation data and the Double-Difference relocation method (Waldhauser and Ellsworth, 2000). This catalog has higher precision of hypocenter determination than the routine one. In this study we estimate high-resolution cutoff depths of the seismogenic layer using this catalog for the Kanto region, where preliminary JUICE analysis has already been done. D90, the cutoff depth above which 90% of earthquakes occur, is often used as a reference for understanding the seismogenic layer; the choice of 90% reflects the uncertainties arising from hypocenter depth errors. In this study we estimate D95, because a more precise and reliable catalog is now available from the JUICE project. First, we generate a 10-km equally spaced grid over our study area. 
Second, we pick hypocenters within a radius of 10 km from each grid point and arrange them into hypocenter groups. Finally, we estimate D95 from the hypocenter group at each grid point. During the analysis we use three conditions: (1) the depths of the hypocenters used are less than 25 km; (2) the minimum number of hypocenters in a group is 25; and (3) low-frequency earthquakes are excluded. Our estimate of D95 shows undulating, fine-scale features, such as a different profile along the same fault. This can be seen at two major fault zones: (1) the Tachikawa fault zone, and (2) the northwest marginal fault zone of the Kanto basin. D95 deepens from northwest to southwest along these fault zones, suggesting that a constant cutoff depth cannot be used even along the same fault zone. One pattern in our D95 results is a deepening in the south Kanto region. The reason could be that the hypocenters used in this study are contaminated by seismicity near the plate boundary between the Philippine Sea plate and the Eurasian plate. Therefore D95 in the south Kanto region should be interpreted carefully.
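The per-grid-point procedure described above reduces to a depth percentile over nearby hypocenters, subject to the three stated conditions. A minimal sketch, using a nearest-rank percentile and a hypothetical catalog of (x_km, y_km, depth_km) triples:

```python
import math

def cutoff_depth(hypocenters, grid_point, radius_km=10.0,
                 percentile=95, max_depth_km=25.0, min_events=25):
    """Estimate D95 at one grid point: collect hypocenters within
    radius_km of the point, keep those shallower than max_depth_km,
    and return the depth above which `percentile` % of events occur.
    Returns None when fewer than min_events qualify."""
    gx, gy = grid_point
    depths = sorted(z for (x, y, z) in hypocenters
                    if z < max_depth_km and math.hypot(x - gx, y - gy) <= radius_km)
    if len(depths) < min_events:
        return None
    k = max(0, math.ceil(percentile / 100.0 * len(depths)) - 1)  # nearest rank
    return depths[k]

# Hypothetical catalog: 100 events along a line, depths 5.0-14.9 km
catalog = [(i * 0.1, 0.0, 5.0 + 0.1 * i) for i in range(100)]
d95 = cutoff_depth(catalog, (0.0, 0.0))
```

The exclusion of low-frequency earthquakes (condition 3) would be applied when building the input catalog, before this function is called.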
Tectonic history of the Syria Planum province of Mars
Tanaka, K.L.; Davis, P.A.
1988-01-01
We attribute most of the development of extensive fractures in the Tharsis region to discrete tectonic provinces within the region, rather than to Tharsis as a single entity. One of these provinces is in Syria Planum. Faults and collapse structures in the Syria Planum tectonic province on Mars are grouped into 13 sets based on relative age, areal distribution, and morphology. According to superposition and fault crosscutting relations and crater counts, we designate six distinct episodes of tectonic activity. Photoclinometric topographic profiles across 132 grabens and fault scarps show that Syria Planum grabens have widths (average of 2.5 km; most range from 1 to 6 km) similar to lunar grabens, but the Martian grabens have slightly higher side walls (average about 132 m) and gentler wall slopes (average of 9° and range of 2°-25°) than lunar grabens (93 m high and 18° slopes). Estimates of the amount of extension for individual grabens range from 20 to 350 m; most estimates of the thickness of the faulted layer range from 0.5 to 4.5 km (average is 1.5 km). This thickness range corresponds closely to the 0.8- to 3.6-km range in depth for pits, troughs, and canyons in Noctis Labyrinthus and along the walls of Valles Marineris. We propose that the predominant 1- to 1.5-km values obtained for both the thickness of the faulted layer and the depths of the pits, troughs, and theater heads of the canyons reflect the initial depth to the water table in this region, as governed by the depth to the base of ground ice. Maximum depths for these features may indicate lowered groundwater table depths and the base of ejecta material. -from Authors
Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R
2012-10-21
In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust in predicting the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable for DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors.
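As a simplified stand-in for the multi-class LDA on Laguerre coefficients, the sketch below trains a two-class Fisher discriminant on 2-D pulse-shape features (the feature vectors are hypothetical; the paper uses full Laguerre expansions and five depth classes):

```python
def fisher_lda_train(class0, class1):
    """Two-class Fisher discriminant on 2-D feature vectors.
    Returns projection weights w and a midpoint decision threshold."""
    def mean(v):
        return [sum(p[0] for p in v) / len(v), sum(p[1] for p in v) / len(v)]
    def scatter(v, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in v:
            d = (p[0] - m[0], p[1] - m[1])
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    dm = (m1[0] - m0[0], m1[1] - m0[1])
    # w = Sw^-1 (m1 - m0), via the closed-form 2x2 inverse
    w = [(sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
         (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det]
    thresh = sum(w[i] * (m0[i] + m1[i]) / 2.0 for i in range(2))
    return w, thresh

def fisher_lda_predict(w, thresh, x):
    """Assign class 1 when the projected feature exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] > thresh else 0

# Hypothetical 2-D pulse-shape features for two irradiation depths
shallow = [(1.00, 0.20), (1.10, 0.30), (0.90, 0.25), (1.05, 0.22)]
deep = [(0.40, 0.80), (0.50, 0.90), (0.45, 0.85), (0.55, 0.75)]
w, t = fisher_lda_train(shallow, deep)
```

The multi-class case used in the paper generalizes this by maximizing between-class over within-class scatter across all depth classes simultaneously.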
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian
2015-11-01
We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation for different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor, observed or calculated at different height planes, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data upward-continued to different heights, which effectively reduces the influence of noise. Theoretical simulations of gravity source models with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to gravity data acquired over the Humble Salt Dome in the USA as an example. The results show good correspondence with previous drilling and seismic interpretation results.
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground-based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good-quality propulsion estimates.
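The paper's filter carries a rich state (mass overboard, burn depth, aerodynamic and plume uncertainties); the sketch below shows only the predict/update core of the Kalman recursion, reduced to a scalar filter that estimates a constant performance parameter from noisy measurements. All numbers are hypothetical:

```python
import random

def kalman_constant(measurements, r, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a constant parameter (e.g. a
    specific-impulse value) observed through noise of variance r.
    The state model is x_k = x_{k-1}, so the predict step is trivial."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # measurement update
        p = (1 - k) * p        # covariance update
    return x, p

random.seed(0)
true_isp = 452.0  # hypothetical specific impulse, seconds
zs = [true_isp + random.gauss(0.0, 2.0) for _ in range(200)]
est, var = kalman_constant(zs, r=4.0)
```

In the extended (nonlinear) case, the gain and update use Jacobians of the propulsion model linearized about the current estimate; the recursion structure is the same.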
NASA Astrophysics Data System (ADS)
Riedel, M.; Collett, T. S.
2017-07-01
A global inventory of data from gas hydrate drilling expeditions is used to develop relationships between the base of structure I gas hydrate stability, top of gas hydrate occurrence, sulfate-methane transition depth, pressure (water depth), and geothermal gradients. The motivation of this study is to provide first-order estimates of the top of gas hydrate occurrence and associated thickness of the gas hydrate occurrence zone for climate-change scenarios, global carbon budget analyses, or gas hydrate resource assessments. Results from publicly available drilling campaigns (21 expeditions and 52 drill sites) off Cascadia, Blake Ridge, India, Korea, South China Sea, Japan, Chile, Peru, Costa Rica, Gulf of Mexico, and Borneo reveal a first-order linear relationship between the depth to the top and base of gas hydrate occurrence. The reason for these nearly linear relationships is believed to be the strong pressure and temperature dependence of methane solubility, in the absence of large differences in thermal gradients between the various sites assessed. In addition, a statistically robust relationship was defined between the thickness of the gas hydrate occurrence zone and the base of gas hydrate stability (in meters below seafloor). The relationship developed is able to predict the depth of the top of the gas hydrate occurrence zone from observed depths of the base of gas hydrate stability to within 50 m at most locations examined in this study. No clear correlation of the depth to the top and base of gas hydrate occurrences with geothermal gradient and sulfate-methane transition depth was identified.
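A first-order linear relationship of the kind described is an ordinary least-squares fit between drill-site depth pairs. A sketch with hypothetical depths (meters below seafloor), not the expedition data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical drill-site pairs: base of gas hydrate stability (x)
# vs. top of gas hydrate occurrence (y), in meters below seafloor
bghs = [120.0, 200.0, 280.0, 360.0, 450.0]
top = [30.0, 55.0, 80.0, 100.0, 130.0]
a, b = linear_fit(bghs, top)
predicted_top = a * 300.0 + b  # prediction at a new site
```

The study's reported skill (within 50 m at most sites) would correspond to the residuals of such a fit over the 52-site inventory.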
Inverse geothermal modelling applied to Danish sedimentary basins
NASA Astrophysics Data System (ADS)
Poulsen, Søren E.; Balling, Niels; Bording, Thue S.; Mathiesen, Anders; Nielsen, Søren B.
2017-10-01
This paper presents a numerical procedure for predicting subsurface temperatures and heat-flow distribution in 3-D using inverse calibration methodology. The procedure is based on a modified version of the groundwater code MODFLOW, taking advantage of the mathematical similarity between confined groundwater flow (Darcy's law) and heat conduction (Fourier's law). Thermal conductivity, heat production and exponential porosity-depth relations are specified separately for the individual geological units of the model domain. The steady-state temperature model includes a model-based transient correction for the long-term palaeoclimatic thermal disturbance of the subsurface temperature regime. Variable model parameters are estimated by inversion of measured borehole temperatures with uncertainties reflecting their quality. The procedure facilitates uncertainty estimation for temperature predictions. The modelling procedure is applied to Danish onshore areas containing deep sedimentary basins. A 3-D voxel-based model, with 14 lithological units from the surface to 5000 m depth, was built from digital geological maps derived from combined analyses of reflection seismic lines and borehole information. Matrix thermal conductivity of the model lithologies was estimated by inversion of all available deep borehole temperature data and applied, together with a prescribed background heat flow, to derive the 3-D subsurface temperature distribution. Modelled temperatures are found to agree very well with observations. The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature gradients to depths of 2000-3000 m are generally around 25-30 °C km-1, locally up to about 35 °C km-1. 
Large regions have geothermal reservoirs with characteristic temperatures ranging from ca. 40-50 °C, at 1000-1500 m depth, to ca. 80-110 °C, at 2500-3500 m; at the deeper parts, however, the permeability is most likely too low for non-stimulated production.
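The Darcy/Fourier analogy exploited above rests on steady-state conduction: for a layered column with basal heat flow q and layer conductivities k_i, the temperature at a layer base is T = T₀ + q Σ Δz_i/k_i. A one-dimensional sketch with a hypothetical two-layer column (the full model is 3-D with 14 units):

```python
def temperature_profile(t_surface, heat_flow, layers):
    """Steady-state 1-D conductive temperatures at layer bases.
    layers: list of (thickness_m, conductivity_W_per_mK) tuples;
    heat_flow in W/m^2. Returns temperature (deg C) at each base."""
    temps = []
    t = t_surface
    for dz, k in layers:
        t += heat_flow * dz / k  # Fourier's law: dT = q * dz / k
        temps.append(t)
    return temps

# Hypothetical column: 2000 m of sediment at 2.0 W/mK over
# 1000 m at 2.5 W/mK, surface at 8 deg C, heat flow 65 mW/m^2
profile = temperature_profile(8.0, 0.065, [(2000.0, 2.0), (1000.0, 2.5)])
```

These assumed values give gradients of roughly 26-33 °C km-1, the same order as the Danish observations; the paper's inversion adjusts the conductivities until modelled and measured borehole temperatures agree.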
Observations of Strong Surface Radar Ducts over the Persian Gulf.
NASA Astrophysics Data System (ADS)
Brooks, Ian M.; Goroch, Andreas K.; Rogers, David P.
1999-09-01
Ducting of microwave radiation is a common phenomenon over the oceans. The height and strength of the duct are controlling factors for radar propagation and must be determined accurately to assess propagation ranges. A surface evaporation duct commonly forms due to the large gradient in specific humidity just above the sea surface; a deeper surface-based or elevated duct frequently is associated with the sudden change in temperature and humidity across the boundary layer inversion. In April 1996 the U.K. Meteorological Office C-130 Hercules research aircraft took part in the U.S. Navy Ship Antisubmarine Warfare Readiness/Effectiveness Measuring exercise (SHAREM-115) in the Persian Gulf by providing meteorological support and making measurements for the study of electromagnetic and electro-optical propagation. The boundary layer structure over the Gulf is influenced strongly by the surrounding desert landmass. Warm dry air flows from the desert over the cooler waters of the Gulf. Heat loss to the surface results in the formation of a stable internal boundary layer. The layer evolves continuously along wind, eventually forming a new marine atmospheric boundary layer. The stable stratification suppresses vertical mixing, trapping moisture within the layer and leading to an increase in refractive index and the formation of a strong boundary layer duct. A surface evaporation duct coexists with the boundary layer duct. In this paper the authors present aircraft- and ship-based observations of both the surface evaporation and boundary layer ducts. A series of sawtooth aircraft profiles map the boundary layer structure and provide spatially distributed estimates of the duct depth. The boundary layer duct is found to have considerable spatial variability in both depth and strength, and to evolve along wind over distances significant to naval operations (100 km). 
The depth of the evaporation duct is derived from a bulk parameterization based on Monin-Obukhov similarity theory using near-surface data taken by the C-130 during low-level (30 m) flight legs and by ship-based instrumentation. Good agreement is found between the two datasets. The estimated evaporation ducts are found to be generally uniform in depth; however, localized regions of greatly increased depth are observed on one day, and a marked change in boundary layer structure resulting in merging of the surface evaporation duct with the deeper boundary layer duct was observed on another. Both of these cases occurred within exceptionally shallow boundary layers (100 m), where the mean evaporation duct depths were estimated to be between 12 and 17 m. On the remaining three days the boundary layer depth was between 200 and 300 m, and evaporation duct depths were estimated to be between 20 and 35 m, varying by just a few meters over ranges of up to 200 km. The one-way radar propagation factor is modeled for a case with a pronounced change in duct depth. The case is modeled first with a series of measured profiles to define as accurately as possible the refractivity structure of the boundary layer, then with a single profile collocated with the radar antenna and assuming homogeneity. The results reveal large errors in the propagation factor when derived from a single profile.
Learning-based saliency model with depth information.
Ma, Chih-Yao; Hang, Hsueh-Ming
2015-01-01
Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
Duncan C. Lutes; Robert E. Keane
2006-01-01
The Fuel Load method (FL) is used to sample dead and down woody debris, determine depth of the duff/litter profile, estimate the proportion of litter in the profile, and estimate total vegetative cover and dead vegetative cover. Down woody debris (DWD) is sampled using the planar intercept technique based on the methodology developed by Brown (1974). Pieces of dead...
An observationally constrained estimate of global dust aerosol optical depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, David A.; Heald, Colette L.; Kok, Jasper F.
Here, the role of mineral dust in climate and ecosystems has been largely quantified using global climate and chemistry model simulations of dust emission, transport, and deposition. However, differences between these model simulations are substantial, with estimates of global dust aerosol optical depth (AOD) that vary by over a factor of 5. Here we develop an observationally based estimate of the global dust AOD, using multiple satellite platforms, in situ AOD observations and four state-of-the-science global models over 2004–2008. We estimate that the global dust AOD at 550 nm is 0.030 ± 0.005 (1σ), higher than the AeroCom model median (0.023) and substantially narrowing the uncertainty. The methodology used provides regional, seasonal dust AOD and the associated statistical uncertainty for key dust regions around the globe with which model dust schemes can be evaluated. Exploring the regional and seasonal differences in dust AOD between our observationally based estimate and the four models in this study, we find that emissions in Africa are often overrepresented at the expense of Asian and Middle Eastern emissions and that dust removal appears to be too rapid in most models.
NASA Astrophysics Data System (ADS)
Didenko, A. N.; Nosyrev, M. Yu.; Shevchenko, B. F.; Gilmanova, G. Z.
2017-11-01
The depth of the base of the magnetoactive layer and the geothermal gradient in the Sikhote Alin crust are estimated using a method that determines the Curie point depth of magnetoactive masses through spectral analysis of the anomalous magnetic field. A detailed map of the geothermal gradient is constructed for the first time for the Sikhote Alin and adjacent areas of the Central Asian belt. Analysis of this map shows that the zones with a higher geothermal gradient geographically fit the areas with a higher level of seismicity.
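The Curie-point-depth idea can be sketched with the standard spectral-slope estimators; the Spector-Grant and centroid slope relations below, and the synthetic spectra, are illustrative assumptions rather than the authors' exact processing chain:

```python
import math

def _slope(x, y):
    # ordinary least-squares slope of y against x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def top_depth(k, power):
    # Spector-Grant: ln P(k) ~ A - 2*Zt*k, so Zt = -slope/2
    return -_slope(k, [math.log(p) for p in power]) / 2.0

def centroid_depth(k, power):
    # centroid method: ln(P(k)/k^2) ~ B - 2*Z0*k, so Z0 = -slope/2
    return -_slope(k, [math.log(p / ki ** 2) for ki, p in zip(k, power)]) / 2.0

# synthetic radially averaged power spectra built with Zt = 2 km, Z0 = 10 km
k = [0.05 + 0.01 * i for i in range(30)]                   # wavenumber, rad/km
p_top = [math.exp(-2.0 * 2.0 * ki) for ki in k]
p_cen = [ki ** 2 * math.exp(-2.0 * 10.0 * ki) for ki in k]
zt = top_depth(k, p_top)        # top of the magnetic layer, 2.0 km
z0 = centroid_depth(k, p_cen)   # centroid depth, 10.0 km
zb = 2.0 * z0 - zt              # Curie (basal) depth, 18.0 km
```

In practice the two slopes are fitted over different wavenumber bands of one measured spectrum; the synthetic check here feeds each estimator a spectrum that satisfies its own relation exactly.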
Depth detection in interactive projection system based on one-shot black-and-white stripe pattern.
Zhou, Qian; Qiao, Xiaorui; Ni, Kai; Li, Xinghui; Wang, Xiaohao
2017-03-06
A novel method was proposed in this research that estimates not only the screen surface, as conventional methods do, but also depth information from two-dimensional coordinates in an interactive projection system. In this method, a one-shot black-and-white stripe pattern from a projector is projected onto a screen plane, and the deformed pattern is captured by a charge-coupled device camera. An algorithm based on simultaneous object/shadow detection is proposed to establish the correspondence. The depth information of the object is then calculated using the triangulation principle. This technology provides a more direct feeling of virtual interaction in three dimensions without auxiliary equipment or a special screen as interaction proxies. Simulations and experiments were carried out, and the results verify the effectiveness of this method in depth detection.
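The triangulation step can be illustrated with a similar-triangles sketch; the geometry and all numbers below are hypothetical, not the system's calibrated model:

```python
def height_from_stripe_shift(shift, baseline, projector_height):
    """Object height above the screen plane from the observed shift of a
    projected stripe, by similar triangles: h / H = s / (s + B), so
    h = H * s / (s + B). All lengths must be in the same unit."""
    return projector_height * shift / (shift + baseline)

# a 20 mm stripe shift, 500 mm projector-camera baseline, 400 mm source height
h = height_from_stripe_shift(20.0, 500.0, 400.0)   # ~15.4 mm
```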
Comparative biomass structure and estimated carbon flow in food webs in the deep Gulf of Mexico
NASA Astrophysics Data System (ADS)
Rowe, Gilbert T.; Wei, Chihlin; Nunnally, Clifton; Haedrich, Richard; Montagna, Paul; Baguley, Jeffrey G.; Bernhard, Joan M.; Wicksten, Mary; Ammons, Archie; Briones, Elva Escobar; Soliman, Yousra; Deming, Jody W.
2008-12-01
A budget of the standing stocks and cycling of organic carbon associated with the sea floor has been generated for seven sites across a 3-km depth gradient in the NE Gulf of Mexico, based on a series of reports by co-authors on specific biotic groups or processes. The standing stocks measured at each site were bacteria, Foraminifera, metazoan meiofauna, macrofauna, invertebrate megafauna, and demersal fishes. Sediment community oxygen consumption (SCOC) by the sediment-dwelling organisms was measured at each site using a remotely deployed benthic lander, profiles of oxygen concentration in the sediment pore water of recovered cores and ship-board core incubations. The long-term incorporation and burial of organic carbon into the sediments has been estimated using profiles of a combination of stable and radiocarbon isotopes. The total stock estimates, carbon burial, and the SCOC allowed estimates of living and detrital carbon residence time within the sediments, illustrating that the total biota turns over on time scales of months on the upper continental slope but this is extended to years on the abyssal plain at 3.6 km depth. The detrital carbon turnover is many times longer, however, over the same depths. A composite carbon budget illustrates that total carbon biomass and associated fluxes declined precipitously with increasing depth. Imbalances in the carbon budgets suggest that organic detritus is exported from the upper continental slope to greater depths offshore. The respiration of each individual "size" or functional group within the community has been estimated from allometric models, supplemented by direct measurements in the laboratory. The respiration and standing stocks were incorporated into budgets of carbon flow through and between the different size groups in hypothetical food webs. 
The decline in stocks and respiration with depth were more abrupt in the larger forms (fishes and megafauna), resulting in an increase in the relative predominance of smaller sizes (bacteria and meiofauna) at depth. Rates and stocks in the deep northern GoM appeared to be comparable to other continental margins where similar comparisons have been made.
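The residence times quoted above follow from the simple ratio of standing stock to carbon throughput; a sketch with hypothetical stock and flux values chosen only to echo the months-versus-years contrast:

```python
def turnover_days(stock_mgC_m2, flux_mgC_m2_day):
    # residence (turnover) time = standing stock / carbon flux through it
    return stock_mgC_m2 / flux_mgC_m2_day

# hypothetical numbers, not the paper's measured budget
t_slope = turnover_days(3000.0, 30.0)   # upper continental slope: 100 d (~months)
t_abyss = turnover_days(1500.0, 1.0)    # abyssal plain: 1500 d (~years)
```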
Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona
Bultman, Mark W.
2015-01-01
Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
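Euler deconvolution solves the homogeneity equation for source position over a data window. The profile sketch below uses a point-pole source (structural index 1) so the synthetic round trip is exact; the survey work described above used structural indices 0 and 2:

```python
import math

def euler_deconv(xs, T, Tx, Tz, N):
    """Profile Euler deconvolution on the z = 0 observation plane.
    Rearranged homogeneity equation, with base level B unknown:
        x0*Tx + z0*Tz + N*B = x*Tx + N*T
    Solved in least squares via the 3x3 normal equations."""
    rows = [[tx, tz, float(N)] for tx, tz in zip(Tx, Tz)]
    rhs = [x * tx + N * t for x, tx, t in zip(xs, Tx, T)]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(3)]
    for i in range(3):                       # Gaussian elimination (SPD matrix)
        for j in range(i + 1, 3):
            f = ata[j][i] / ata[i][i]
            ata[j] = [a - f * b for a, b in zip(ata[j], ata[i])]
            atb[j] -= f * atb[i]
    m = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        m[i] = (atb[i] - sum(ata[i][j] * m[j] for j in range(i + 1, 3))) / ata[i][i]
    return m                                 # [x0, z0, B]

# synthetic pole source at x0 = 10, depth z0 = 5 (field T = C/r, N = 1)
x0t, z0t, C = 10.0, 5.0, 100.0
xs = [float(i) for i in range(21)]
T, Tx, Tz = [], [], []
for x in xs:
    dx, dz = x - x0t, 0.0 - z0t
    r = math.hypot(dx, dz)
    T.append(C / r)
    Tx.append(-C * dx / r ** 3)
    Tz.append(-C * dz / r ** 3)
x0, z0, B = euler_deconv(xs, T, Tx, Tz, N=1)   # -> ~10, ~5, ~0
```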
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
Gohier, Francis; Dellimore, Kiran; Scheffer, Cornie
2013-01-01
The quality of cardiopulmonary resuscitation (CPR) is often inconsistent and frequently fails to meet recommended guidelines. One promising approach to address this problem is for clinicians to use an active feedback device during CPR. However, one major deficiency of existing feedback systems is that they fail to account for the displacement of the back support surface during chest compression (CC), which can be important when CPR is performed on a soft surface. In this study we present the development of a real-time CPR feedback system based on an algorithm which uses force and dual-accelerometer measurements to provide accurate estimation of the CC depth on a soft surface, without assuming full chest decompression. Based on adult CPR manikin tests it was found that the accuracy of the estimated CC depth for a dual accelerometer feedback system is significantly better (7.3% vs. 24.4%) than for a single accelerometer system on soft back support surfaces, in the absence or presence of a backboard. In conclusion, the algorithm used was found to be suitable for a real-time, dual accelerometer CPR feedback application since it yielded reasonable accuracy in terms of CC depth estimation, even when used on a soft back support surface.
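The depth estimate rests on doubly integrating the relative (chest minus back-support) acceleration. The sketch below is a minimal version of that idea with a toy constant-acceleration check, not the paper's feedback algorithm:

```python
def compression_depth(a_chest, a_back, dt):
    """Displacement of the chest relative to the back support by double
    trapezoidal integration of the two accelerometer signals (m/s2, s -> m).
    Subtracting the back-support signal removes soft-surface motion."""
    rel = [c - b for c, b in zip(a_chest, a_back)]
    v = x = 0.0
    for i in range(1, len(rel)):
        v_new = v + 0.5 * (rel[i - 1] + rel[i]) * dt   # accel -> velocity
        x += 0.5 * (v + v_new) * dt                    # velocity -> displacement
        v = v_new
    return x

# toy check: constant 2 m/s2 relative acceleration for 1 s -> 0.5*a*t^2 = 1.0 m
n, dt = 1001, 0.001
depth = compression_depth([2.0] * n, [0.0] * n, dt)
```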
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
Depth inpainting by tensor voting.
Kulkarni, Mandar; Rajagopalan, Ambasamudram N
2013-06-01
Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
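For the less complex missing regions, depths are filled from local plane models. A minimal least-squares plane fill, without the tensor-voting machinery, might look like:

```python
def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) samples,
    via the 3x3 normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in pts:
        row = (x, y, 1.0)
        for i in range(3):
            rhs[i] += row[i] * z
            for j in range(3):
                A[i][j] += row[i] * row[j]
    for i in range(3):                       # Gaussian elimination (SPD matrix)
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [p - f * q for p, q in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    sol = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        sol[i] = (rhs[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    return sol

# known depth samples ringing a hole, all lying on the plane z = 2x - y + 3
ring = [(x, y, 2 * x - y + 3) for x in (4.0, 6.0) for y in (4.0, 6.0)]
a, b, c = fit_plane(ring)
filled = a * 5.0 + b * 5.0 + c   # inpainted depth at the hole centre (5, 5) -> 8.0
```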
NASA Astrophysics Data System (ADS)
Abrehdary, M.; Sjöberg, L. E.; Bagherbandi, M.; Sampietro, D.
2017-12-01
We present a combined method for estimating a new global Moho model, named KTH15C, containing Moho depth and Moho density contrast (or shortly, Moho parameters), from a combination of global models of gravity (GOCO05S), topography (DTM2006) and seismic information (CRUST1.0 and MDN07) to a resolution of 1° × 1°, based on a solution of Vening Meinesz-Moritz' inverse problem of isostasy. This paper also aims at modelling the observation standard errors propagated from the Vening Meinesz-Moritz and CRUST1.0 models in estimating the uncertainty of the final Moho model. The numerical results yield Moho depths ranging from 6.5 to 70.3 km and Moho density contrasts ranging from 21 to 650 kg/m3. Moreover, test computations show that in most areas the estimated uncertainties in the parameters are less than 3 km and 50 kg/m3, respectively, but they reach larger values under the Gulf of Mexico, Chile, the Eastern Mediterranean, the Timor Sea and parts of the polar regions. Comparing the Moho depths estimated by KTH15C with those derived from the KTH11C, GEMMA2012C, CRUST1.0, KTH14C, CRUST14 and GEMMA1.0 models shows that KTH15C agrees fairly well with CRUST1.0 but rather poorly with the other models. The Moho density contrasts estimated by KTH15C and those of the KTH11C, KTH14C and VMM models agree to 112, 31 and 61 kg/m3 in RMS. The regional numerical studies show that the RMS differences between KTH15C and Moho depths from seismic information yield fits of 2 to 4 km in South and North America, Africa, Europe, Asia, Australia and Antarctica, respectively.
NASA Astrophysics Data System (ADS)
Comsa, Daria Craita
2008-10-01
There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. 
Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. 
Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
NASA Astrophysics Data System (ADS)
Beyer, Matthias; Gaj, Marcel; Königer, Paul; Tulimeveva Hamutoko, Josefina; Wanke, Heike; Wallner, Markus; Himmelsbach, Thomas
2018-03-01
The estimation of groundwater recharge in water-limited environments is challenging due to climatic conditions, the occurrence of deep unsaturated zones, and specialized vegetation. We critically examined two methods based on stable isotopes of soil water: (i) the interpretation of natural isotope depth-profiles and subsequent approximation of recharge using empirical relationships and (ii) the use of deuterium-enriched water (2H2O) as tracer. Numerous depth-profiles were measured directly in the field in semiarid Namibia using a novel in-situ technique. Additionally, 2H2O was injected into the soil and its displacement over a complete rainy season monitored. Estimated recharge ranges between 0 and 29 mm/y for three rainy seasons experiencing seasonal rainfall of 660 mm (2013/14), 313 mm (2014/15) and 535 mm (2015/16). The results of this study fortify the suitability of water stable isotope-based approaches for recharge estimation and highlight enormous potential for future studies of water vapor transport and ecohydrological processes.
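Method (ii) can be sketched with the peak-displacement formula R = Δz·θ; the function and all numbers below are illustrative assumptions, not the study's measured values:

```python
def recharge_mm_per_year(z0_cm, z1_cm, theta, days):
    """Peak-displacement estimate of recharge from the labelled-water (2H2O)
    tracer: R = dz * theta, scaled to a year. theta is the mean volumetric
    water content over the displacement interval."""
    dz_mm = (z1_cm - z0_cm) * 10.0          # cm -> mm of downward displacement
    return dz_mm * theta * 365.0 / days

# hypothetical: tracer peak moved from 30 to 55 cm depth over a 210-day
# rainy season in soil with theta = 0.08
r = recharge_mm_per_year(30.0, 55.0, 0.08, 210.0)   # ~34.8 mm/yr
```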
Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.
2003-01-01
This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements via dense media radiative transfer modeling results with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent, at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained based on pattern matching by using the local snow temperature history. This algorithm is applied to produce spatial snow maps of Upper Rio Grande River basin in Colorado. The simulation results are compared with that of the snow accumulation and melt model and a linear regression method. The quantitative comparison with the ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that this algorithm is able to improve the estimation of snow parameters.
NASA Astrophysics Data System (ADS)
Houpert, Loïc; Testor, Pierre; Durrieu de Madron, Xavier; Estournel, Claude; D'Ortenzio, Fabrizio
2013-04-01
Heat fluxes across the ocean-atmosphere interface play a crucial role in the upper turbulent mixing. The depth reached by this turbulent mixing is indicated by a homogenization of seawater properties in the surface layer, and is defined as the Mixed Layer Depth (MLD). The thickness of the mixed layer also determines the heat content of the layer that directly interacts with the atmosphere. The seasonal variability of these air-sea fluxes is crucial in the calculation of the heat budget. An improvement in the estimate of these fluxes is needed for a better understanding of the Mediterranean ocean circulation and climate, in particular in Regional Climate Models. There are few estimations of surface heat fluxes based on oceanic observations in the Mediterranean, and none of them are based on mixed layer observations. We therefore propose new estimations of these upper-ocean heat fluxes based on the mixed layer. We present a high-resolution (0.5°) Mediterranean climatology of the mean MLD based on a comprehensive collection of temperature profiles from the last 43 years (1969-2012). The database includes more than 150,000 profiles, merging CTD, XBT, ARGO profiling float, and glider observations. This dataset is first used to describe the seasonal cycle of the mixed layer depth over the whole Mediterranean on a monthly climatological basis. Our analysis discriminates several regions with coherent behaviors, in particular the deep water formation sites, characterized by significant differences in the winter mixing intensity. Heat storage rates (HSR) were calculated as the time rate of change of the heat content integrated from the surface down to a specific depth, defined as the MLD plus an integration constant. The monthly climatology of net heat flux (NHF) from the ERA-Interim reanalysis was balanced against the 1°×1° resolution heat storage rate climatology.
Local heat budget balance and seasonal variability in the horizontal heat flux are then discussed by taking into account uncertainties, due to errors in monthly value estimation and to intra-annual and inter-annual variability.
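The heat storage rate is the time derivative of the heat content integrated over the upper layer; a minimal numeric sketch, assuming constant seawater density and heat capacity:

```python
RHO, CP = 1025.0, 3985.0   # seawater density (kg/m3) and heat capacity (J/kg/K)

def heat_content(temps_degC, dz_m):
    # heat content (J/m2) of a layer sampled every dz metres, relative to 0 degC
    return RHO * CP * dz_m * sum(temps_degC)

def heat_storage_rate(temps_t0, temps_t1, dz_m, seconds):
    # HSR (W/m2): rate of change of heat content integrated from the surface
    # down to the MLD plus an integration constant
    return (heat_content(temps_t1, dz_m) - heat_content(temps_t0, dz_m)) / seconds

# a 100 m layer warming uniformly by 1 degC over 30 days
hsr = heat_storage_rate([15.0] * 100, [16.0] * 100, 1.0, 30 * 86400)   # ~157.6 W/m2
```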
Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods
NASA Astrophysics Data System (ADS)
Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.
2017-12-01
The Malombe Basin is part of the Malawi Rift which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70 km from north to south and 35 km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was acquired across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data is used to understand the topographic expression of the fault scarps. Estimates for minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3 km for the western border fault; the minimum displacement for the eastern border fault is 740 m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement layer. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; we therefore expect to constrain basin depth more accurately by integrating the gravity profiles. 
Through more accurate basement depth estimates we also gain more accurate displacement estimates for the Basin's faults. Not only do the improved depth estimates serve as a proxy to the viability of hydrocarbon exploration efforts in the region, but the improved displacement estimates also provide a better understanding of extension accommodation within the Malawi Rift.
Technique for estimating depth of 100-year floods in Tennessee
Gamble, Charles R.; Lewis, James G.
1977-01-01
Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.
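The regression approach can be sketched as a log-linear fit; the single-predictor power law and station values below are hypothetical, whereas the report regresses depth on several basin characteristics:

```python
import math

def fit_power_law(areas, depths):
    """Log-linear least squares: ln d = ln a + b*ln A, i.e. d = a * A**b."""
    xs = [math.log(A) for A in areas]
    ys = [math.log(d) for d in depths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - b * mx), b

# hypothetical stations following depth = 1.8 * area**0.3 exactly
areas = [5.0, 12.0, 40.0, 150.0, 600.0]        # drainage area, mi2
depths = [1.8 * A ** 0.3 for A in areas]       # 100-yr flood depth, ft
a, b = fit_power_law(areas, depths)            # -> (1.8, 0.3)
d100 = a * 250.0 ** b                          # estimate for a 250 mi2 basin
```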
A global reference model of Moho depths based on WGM2012
NASA Astrophysics Data System (ADS)
Zhou, D.; Li, C.
2017-12-01
The crust-mantle boundary (Moho discontinuity) represents the largest density contrast in the lithosphere, which can be detected by the Bouguer gravity anomaly. We present our recent inversion of global Moho depths from the World Gravity Map 2012. Because oceanic lithospheres increase in density as they cool, we perform a thermal correction based on the plate cooling model. We adopt a temperature Tm=1300°C at the bottom of the lithosphere. The plate thickness is tested in 5 km increments from 90 to 140 km, and taken as 130 km, which gives a best-fit crustal thickness constrained by seismic crustal thickness profiles. We obtain the residual Bouguer gravity anomalies by subtracting the thermal correction from WGM2012, and then estimate Moho depths based on the Parker-Oldenburg algorithm. Taking the global model Crust1.0 as a priori constraint, we adopt Moho density contrasts of 0.43 and 0.4 g/cm3, and initial mean Moho depths of 37 and 20 km in the continental and oceanic domains, respectively. The number of iterations in the inversion is set to 150, which is large enough to obtain an error lower than a pre-assigned convergence criterion. The estimated Moho depths range between 0 and 76 km, and average 36 and 15 km in the continental and oceanic domains, respectively. Our results correlate very well with Crust1.0, with differences mostly within ±5.0 km. Compared to the low resolution of Crust1.0 in the oceanic domain, our results have a much larger depth range, reflecting diverse structures such as ridges, seamounts, volcanic chains and subduction zones. Based on this model, we find that young (<5 Ma) oceanic crust thicknesses depend on spreading rates: (1) from ultraslow (<4 mm/yr) to slow (4-45 mm/yr) spreading ridges, the thicknesses increase dramatically; (2) from slow to fast (45-95 mm/yr) spreading ridges, the thickness decreases slightly; (3) for super-fast ridges (>95 mm/yr) we observe relatively thicker crust. 
Conductive cooling of lithosphere may constrain the melting of the mantle at ultraslow spreading centers. Lower mantle temperatures indicated by deeper Curie depths at slow and fast spreading ridges may decrease the volume of magmatism and crustal thickness. This new global model of gravity-derived Moho depth, combined with geochemical and Curie point depth, can be used to investigate thermal evolution of lithosphere.
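The first-order term of the Parker-Oldenburg scheme reduces to the Bouguer-slab relation. The sketch below applies only that term, using the abstract's continental density contrast (0.43 g/cm3 = 430 kg/m3) and 37 km mean depth; the higher-order terms and the iterative FFT refinement of the full algorithm are omitted:

```python
import math

G = 6.674e-11   # gravitational constant, m3 kg-1 s-2

def moho_depth_first_order(dg_mgal, drho, mean_depth_km):
    """Bouguer-slab approximation: a residual anomaly dg maps to a Moho
    undulation dh = -dg / (2*pi*G*drho) about the mean Moho depth."""
    dg = dg_mgal * 1e-5                             # mGal -> m/s2
    dh_km = -dg / (2.0 * math.pi * G * drho) / 1000.0
    return mean_depth_km + dh_km

# a -200 mGal residual low over a continent: crustal root (~48 km Moho)
d_cont = moho_depth_first_order(-200.0, 430.0, 37.0)
# a +100 mGal high: crustal thinning (~31.5 km Moho)
d_thin = moho_depth_first_order(100.0, 430.0, 37.0)
```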
Asquith, William H.; Roussel, Meghan C.; Cleveland, Theodore G.; Fang, Xing; Thompson, David B.
2006-01-01
The design of small runoff-control structures, from simple floodwater-detention basins to sophisticated best-management practices, requires the statistical characterization of rainfall as a basis for cost-effective, risk-mitigated, hydrologic engineering design. The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a framework to estimate storm statistics including storm interevent times, distributions of storm depths, and distributions of storm durations for eastern New Mexico, Oklahoma, and Texas. The analysis is based on hourly rainfall recorded by the National Weather Service. The database contains more than 155 million hourly values from 774 stations in the study area. Seven sets of maps depicting ranges of mean storm interevent time, mean storm depth, and mean storm duration, by county, as well as tables listing each of those statistics, by county, were developed. The mean storm interevent time is used in probabilistic models to assess the frequency distribution of storms. The Poisson distribution is suggested to model the distribution of storm occurrence, and the exponential distribution is suggested to model the distribution of storm interevent times. The four-parameter kappa distribution is judged as an appropriate distribution for modeling the distribution of both storm depth and storm duration. Preference for the kappa distribution is based on interpretation of L-moment diagrams. Parameter estimates for the kappa distributions are provided. Separate dimensionless frequency curves for storm depth and duration are defined for eastern New Mexico, Oklahoma, and Texas. Dimension is restored by multiplying curve ordinates by the mean storm depth or mean storm duration to produce quantile functions of storm depth and duration. Minimum interevent time and location have slight influence on the scale and shape of the dimensionless frequency curves. 
Ten example problems and solutions to possible applications are provided.
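The suggested Poisson/exponential occurrence model can be sketched directly; the 100-hour mean interevent time is an arbitrary example, and the kappa-distributed storm depths and durations of the report are omitted:

```python
import random

def storm_arrival_times(mean_interevent_hr, horizon_hr, rng):
    """Homogeneous Poisson storm-occurrence process: interevent times are
    exponential with the given mean, so counts per window are Poisson."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interevent_hr)
        if t > horizon_hr:
            return times
        times.append(t)

rng = random.Random(42)                                 # reproducible draw
storms = storm_arrival_times(100.0, 365.0 * 24.0, rng)  # one year of storms
n = len(storms)                                         # Poisson, mean ~87.6
```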
3D depth-to-basement and density contrast estimates using gravity and borehole data
NASA Astrophysics Data System (ADS)
Barbosa, V. C.; Martins, C. M.; Silva, J. B.
2009-05-01
We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. 
These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.
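A parabolic decay of density contrast with depth integrates to a closed-form slab gravity, which a single-prism sketch can invert by bisection. The decay law shown and all constants are illustrative assumptions, not the basin's estimated parameters, and the full method couples many prisms with smoothness constraints:

```python
import math

G = 6.674e-11   # gravitational constant, m3 kg-1 s-2

def slab_gravity(h, drho0, alpha):
    """Bouguer-slab gravity (m/s2) of a sediment column of thickness h (m)
    whose density-contrast magnitude decays parabolically with depth as
    drho(z) = drho0**3 / (drho0 - alpha*z)**2; integrating over z gives
    g = 2*pi*G * drho0**2 * h / (drho0 - alpha*h)."""
    return 2.0 * math.pi * G * drho0 ** 2 * h / (drho0 - alpha * h)

def depth_from_gravity(g_obs, drho0, alpha, h_max=10000.0):
    # the forward model is monotonic in h, so invert one prism by bisection
    lo, hi = 0.0, h_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if slab_gravity(mid, drho0, alpha) < g_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round-trip check: 3 km of sediment, 400 kg/m3 surface contrast magnitude,
# alpha = 0.02 kg/m3 per metre
g = slab_gravity(3000.0, 400.0, 0.02)    # ~5.9e-4 m/s2 (~59 mGal)
h = depth_from_gravity(g, 400.0, 0.02)   # recovers ~3000 m
```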
Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
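OBRA can be sketched as an exhaustive search over band-pair log-ratios, keeping the pair whose ratio best predicts depth; the Beer-Lambert synthetic bands below are illustrative assumptions:

```python
import math

def obra(bands, depths):
    """Optimal band ratio analysis: for every ordered band pair (i, j),
    regress depth on X = ln(R_i / R_j) and keep the pair with the highest
    R^2. Returns (R2, i, j, intercept, slope)."""
    best = None
    for i in range(len(bands)):
        for j in range(len(bands)):
            if i == j:
                continue
            X = [math.log(ri / rj) for ri, rj in zip(bands[i], bands[j])]
            n = len(X)
            mx, md = sum(X) / n, sum(depths) / n
            sxx = sum((x - mx) ** 2 for x in X)
            if sxx == 0.0:
                continue
            sdd = sum((d - md) ** 2 for d in depths)
            sxd = sum((x - mx) * (d - md) for x, d in zip(X, depths))
            slope = sxd / sxx
            r2 = sxd * sxd / (sxx * sdd)
            if best is None or r2 > best[0]:
                best = (r2, i, j, md - slope * mx, slope)
    return best

# synthetic pixels: band 1 is band 0 attenuated exponentially with depth
# (Beer-Lambert), band 2 is unrelated, so OBRA should pick the 0/1 ratio
depths = [0.2, 0.4, 0.6, 0.8, 1.0]                             # m
band0 = [0.5, 0.6, 0.7, 0.8, 0.9]
band1 = [b * math.exp(-1.5 * d) for b, d in zip(band0, depths)]
band2 = [0.30, 0.50, 0.31, 0.49, 0.32]
r2, bi, bj, a, b = obra([band0, band1, band2], depths)
# predict depth for a new pixel from its reflectances on the chosen pair
pixel = [0.7, 0.7 * math.exp(-1.5 * 0.55), 0.4]
d_hat = a + b * math.log(pixel[bi] / pixel[bj])                # ~0.55 m
```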
Flynn, Robert H.
1997-01-01
year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.; Medalie, Laura
1997-01-01
year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Shape-from-focus by tensor voting.
Hariharan, R; Rajagopalan, A N
2012-07-01
In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.
Haines, S.S.; Pidlisecky, Adam; Knight, R.
2009-01-01
With the goal of improving the understanding of the subsurface structure beneath the Harkins Slough recharge pond in Pajaro Valley, California, USA, we have undertaken a multimodal approach to develop a robust velocity model to yield an accurate seismic reflection section. Our shear-wave reflection section helps us identify and map an important and previously unknown flow barrier at depth; it also helps us map other relevant structure within the surficial aquifer. Development of an accurate velocity model is essential for depth conversion and interpretation of the reflection section. We incorporate information provided by shear-wave seismic methods along with cone penetrometer testing and seismic cone penetrometer testing measurements. One velocity model is based on reflected and refracted arrivals and provides reliable velocity estimates for the full depth range of interest when anchored on interface depths determined from cone data and borehole drillers' logs. A second velocity model is based on seismic cone penetrometer testing data that provide higher-resolution 1D velocity columns with error estimates within the depth range of the cone penetrometer testing. Comparison of the reflection/refraction model with the seismic cone penetrometer testing model also suggests that the mass of the cone truck can influence velocity with the equivalent effect of approximately one metre of extra overburden stress. Together, these velocity models and the depth-converted reflection section result in a better constrained hydrologic model of the subsurface and illustrate the pivotal role that cone data can provide in the reflection processing workflow. © 2009 European Association of Geoscientists & Engineers.
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, Jill S.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km²), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
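Once the kriged depth grid, the regression-modeled density grid, and the snow-covered-area mask are in hand, combining them into SWE is a per-cell product. A minimal sketch with hypothetical 2×2 rasters (all values illustrative, not from the study):

```python
import numpy as np

RHO_WATER = 1000.0  # kg/m^3

def swe_map(depth_m, density_kgm3, sca_mask):
    """SWE in meters of water: depth * (snow density / water density),
    zeroed outside the snow-covered area."""
    return depth_m * (density_kgm3 / RHO_WATER) * sca_mask

depth = np.array([[2.0, 1.5], [0.8, 1.2]])        # kriged snow depth, m
density = np.array([[350.0, 400.0], [300.0, 380.0]])  # modeled density, kg/m^3
sca = np.array([[1, 1], [0, 1]])                  # snow-covered area mask

swe = swe_map(depth, density, sca)
```

For example, 2.0 m of snow at 350 kg/m³ yields 0.70 m of water equivalent, and cells outside the SCA mask are zero.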
NASA Astrophysics Data System (ADS)
Webster, C.; Bühler, Y.; Schirmer, M.; Stoffel, A.; Giulia, M.; Jonas, T.
2017-12-01
Snow depth distribution in forests exhibits strong spatial heterogeneity compared to adjacent open sites. Measurement of snow depths in forests is currently limited to a) manual point measurements, which are sparse and time-intensive, b) ground-penetrating radar surveys, which have limited spatial coverage, or c) airborne LiDAR acquisitions, which are expensive and may deteriorate in denser forests. We present the application of unmanned aerial vehicles in combination with structure-from-motion (SfM) methods to photogrammetrically map snow depth distribution in forested terrain. Two separate flights were carried out 10 days apart across a heterogeneous forested area of 900 × 500 m. Corresponding snow depth maps were derived using both LiDAR-based and SfM-based DTM data obtained during snow-off conditions. Manual measurements collected following each flight were used to validate the snow depth maps. Snow depths were resolved at 5 cm resolution, and forest snow depth distribution structures such as tree wells and other areas of preferential melt were represented well. Differential snow depth maps showed maximum ablation on the exposed south sides of trees and smaller differences in the centre of gaps and on the north side of trees. This new application of SfM to map snow depth distribution in forests demonstrates a straightforward method for obtaining information that was previously only available through manual, spatially limited ground-based measurements. These methods could therefore be extended to more frequent observation of snow depths in forests as well as estimating snow accumulation and depletion rates.
Buckwalter, T.F.; Squillace, P.J.
1995-01-01
Hydrologic data were evaluated from four areas of western Pennsylvania to estimate the minimum depth of well surface casing needed to prevent contamination of most of the fresh ground-water resources by oil and gas wells. The areas are representative of the different types of oil and gas activities and of the ground-water hydrology of most sections of the Appalachian Plateaus Physiographic Province in western Pennsylvania. Approximate delineation of the base of the fresh ground-water system was attempted by interpreting the following hydrologic data: (1) reports of freshwater and saltwater in oil and gas well-completion reports, (2) water well-completion reports, (3) geophysical logs, and (4) chemical analyses of well water. Because of the poor quality and scarcity of ground-water data, the altitude of the base of the fresh ground-water system in the four study areas cannot be accurately delineated. Consequently, minimum surface-casing depths for oil and gas wells cannot be estimated with confidence. Conscientious and reliable reporting of freshwater and saltwater during drilling of oil and gas wells would expand the existing data base. Reporting of field specific conductance of ground water would greatly enhance the value of the reports of ground water in oil and gas well-completion records. Water-bearing zones in bedrock are controlled mostly by the presence of secondary openings. The vertical and horizontal discontinuity of secondary openings may be responsible, in part, for large differences in altitudes of freshwater zones noted on completion records of adjacent oil and gas wells. In upland and hilltop topographies, maximum depths of fresh ground water are reported from several hundred feet below land surface to slightly more than 1,000 feet, but the few deep reports are not substantiated by results of laboratory analyses of dissolved-solids concentrations. 
Past and present drillers for shallow oil and gas wells commonly install surface casing to below the base of readily observed fresh ground water. Casing depths are selected generally to maximize drilling efficiency and to stop freshwater from entering the well and subsequently interfering with hydrocarbon recovery. The depths of surface casing generally are not selected with ground-water protection in mind. However, on the basis of existing hydrologic data, most freshwater aquifers generally are protected with current casing depths. Minimum surface-casing depths for deep gas wells are prescribed by Pennsylvania Department of Environmental Resources regulations and appear to be adequate to prevent ground-water contamination, in most respects, for the only study area with deep gas fields examined in Crawford County.
Smart textile plasmonic fiber dew sensors.
Esmaeilzadeh, Hamid; Rivard, Maxime; Arzi, Ezatollah; Légaré, François; Hassani, Alireza
2015-06-01
We propose a novel Surface Plasmon Resonance (SPR)-based sensor that detects dew formation in optical fiber-based smart textiles. The proposed SPR sensor facilitates the observation of two phenomena: condensation of moisture and evaporation of water molecules in air. This sensor detects dew formation in less than 0.25 s, and determines dew point temperature with an accuracy of 4%. It can be used to monitor water layer depth changes during dew formation and evaporation in the range of a plasmon depth probe, i.e., 250 nm, with a resolution of 7 nm. Further, it facilitates estimation of the relative humidity of a medium over a dynamic range of 30% to 70% by measuring the evaporation time via the plasmon depth probe.
Wang, Zhaojun; Cai, Yanan; Liang, Yansheng; Zhou, Xing; Yan, Shaohui; Dan, Dan; Bianco, Piero R.; Lei, Ming; Yao, Baoli
2017-01-01
A wide-field fluorescence microscope with a double-helix point spread function (PSF) is constructed to obtain the specimen's three-dimensional distribution with a single snapshot. Spiral-phase-based computer-generated holograms (CGHs) are adopted to make the depth-of-field of the microscope adjustable. The impact of system aberrations on the double-helix PSF at high numerical aperture is analyzed to reveal the necessity of aberration correction. A modified cepstrum-based reconstruction scheme is proposed in accordance with the properties of the new double-helix PSF. The extended depth-of-field images and the corresponding depth maps for both a simulated sample and a tilted section slice of bovine pulmonary artery endothelial (BPAE) cells are recovered, respectively, verifying that the depth-of-field is properly extended and that the depth of the specimen can be estimated at a precision of 23.4 nm. This three-dimensional fluorescence microscope with video-rate time resolution is suitable for studying the fast developing process of thin and sparsely distributed micron-scale cells in extended depth-of-field. PMID:29296483
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puckett, T.M.
1991-05-01
The presence of abundant and diverse sighted ostracodes in chalk and marl of the Demopolis Chalk (Campanian and Maastrichtian) in Alabama and Mississippi strongly suggests that the Late Cretaceous sea floor was within the photic zone. The maximum depth of deposition is calculated from an equation based on eye morphology and efficiency and estimates of the vertical light attenuation. In this equation, K, the vertical light attenuation coefficient, is the most critical variable because it is the divisor for the rest of the equation. Rates of accumulation of coccoliths during the Cretaceous are estimated and are on the same order as those in modern areas of high phytoplankton production, suggesting similar pigment and coccolith concentrations in the water column. Values of K are known for a wide range of water masses and pigment concentrations, including areas of high phytoplankton production; thus light attenuation through the Cretaceous seas can be estimated reliably. Waters in which attenuation is due only to biogenic matter (conditions that result in deposition of relatively pure chalk) have values of K ranging between 0.2 and 0.3. Waters rich in phytoplankton and mud (conditions that result in deposition of marl) have K values as great as 0.5. Substituting these values for K results in a depth range of 65 to 90 m for deposition of chalk and a depth of 35 m for deposition of marl. These depth values suggest that many Cretaceous chalks and marls around the world were deposited under relatively shallow conditions.
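The role of K as the divisor can be illustrated with simple Beer-Lambert attenuation, I(z) = I0·exp(-K·z), inverted for the depth at which light falls below some visual threshold. The threshold below is chosen so the clearest-water case (K = 0.2) reproduces the ~90 m chalk end-member quoted above; it stands in for the paper's eye-morphology terms, which are not reproduced here.

```python
import math

def photic_limit(K, light_ratio):
    """Depth (m) at which downwelling light falls to `light_ratio` of its
    surface value, from Beer-Lambert attenuation I(z) = I0 * exp(-K * z)."""
    return -math.log(light_ratio) / K

# illustrative threshold calibrated to the K = 0.2 -> ~90 m end-member
ratio = math.exp(-0.2 * 90.0)
depths = {K: photic_limit(K, ratio) for K in (0.2, 0.3, 0.5)}
```

The inverse dependence on K gives about 90 m and 60 m for the chalk values (K = 0.2-0.3) and 36 m for the marl value (K = 0.5), in line with the ranges stated in the abstract.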
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir
2016-01-01
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes random uniform uncorrelated distribution of sources and to overcome this limitation a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south western region. The DBMS values are found as low as 22 km in the south west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most of the places DBMS are much shallower than the Moho depth, earlier found from the seismic study and may be representing the thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
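The conventional centroid method referenced above can be sketched from the radially averaged power spectrum of the magnetic anomaly: the top depth Z_t comes from the high-wavenumber slope of ln√P, the centroid depth Z_0 from the low-wavenumber slope of ln(√P/k), and the bottom depth is Z_b = 2·Z_0 − Z_t. The slab depths, wavenumber bands, and synthetic spectrum below are all assumed for illustration; the paper's scaling-distribution modification is not reproduced.

```python
import numpy as np

def centroid_dbms(k, power, top_band, centroid_band):
    """Classic centroid estimate of depth to bottom of magnetic sources.

    k: wavenumber (rad/km); power: radially averaged power spectrum.
    Returns (Z_t, Z_0, Z_b) in km.
    """
    def slope(x, y):
        return np.polyfit(x, y, 1)[0]
    t = (k >= top_band[0]) & (k <= top_band[1])
    c = (k >= centroid_band[0]) & (k <= centroid_band[1])
    z_t = -slope(k[t], 0.5 * np.log(power[t]))
    z_0 = -slope(k[c], 0.5 * np.log(power[c]) - np.log(k[c]))
    return z_t, z_0, 2.0 * z_0 - z_t

# synthetic slab spectrum: top at 2 km, bottom at 22 km (centroid 12 km)
k = np.linspace(0.01, 1.0, 200)
sqrt_p = np.exp(-2.0 * k) - np.exp(-22.0 * k)
z_t, z_0, z_b = centroid_dbms(k, sqrt_p ** 2,
                              top_band=(0.5, 1.0), centroid_band=(0.01, 0.05))
```

The low-wavenumber approximation biases Z_0 slightly shallow, so the recovered Z_b lands near, but a little below, the true 22 km; real applications tune the fitting bands accordingly.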
Size matters: Perceived depth magnitude varies with stimulus height.
Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S
2016-06-01
Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox, and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Uncertainty in cloud optical depth estimates made from satellite radiance measurements
NASA Technical Reports Server (NTRS)
Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip
1995-01-01
The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques are briefly reviewed for the estimation of optical depth from measurements of radiance, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true for both single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.
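The amplification of calibration error through the nonlinear radiance-to-optical-depth mapping can be shown with a toy two-stream-like reflectance relation, R = τ/(τ + γ), which saturates at large τ. The relation and the 5% calibration error are assumptions for illustration, not the paper's retrieval scheme; the point is only that σ_τ = σ_R / |dR/dτ| grows rapidly as R flattens.

```python
import numpy as np

def reflectance(tau, g=7.7):
    """Assumed two-stream-like cloud reflectance stand-in."""
    return tau / (tau + g)

def tau_uncertainty(tau, sigma_r, g=7.7):
    """Propagate a reflectance error through the inverse mapping:
    sigma_tau = sigma_R / |dR/dtau|."""
    drdtau = g / (tau + g) ** 2
    return sigma_r / drdtau

tau = np.array([5.0, 50.0])
sigma_r = 0.05 * reflectance(tau)        # 5% calibration error
sig_tau = tau_uncertainty(tau, sigma_r)
rel = sig_tau / tau                      # relative optical-depth uncertainty
```

The same 5% radiometric error yields a several-times-larger relative uncertainty at τ = 50 than at τ = 5, matching the abstract's observation that retrievals are more uncertain at greater optical depths.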
Updates to Enhanced Geothermal System Resource Potential Estimate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 °C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.
Update to Enhanced Geothermal System Resource Potential Estimate: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
2016-10-01
The deep EGS electricity generation resource potential estimate maintained by the National Renewable Energy Laboratory was updated using the most recent temperature-at-depth maps available from the Southern Methodist University Geothermal Laboratory. The previous study dates back to 2011 and was developed using the original temperature-at-depth maps showcased in the 2006 MIT Future of Geothermal Energy report. The methodology used to update the deep EGS resource potential is the same as in the previous study and is summarized in the paper. The updated deep EGS resource potential estimate was calculated for depths between 3 and 7 km and is binned in 25 °C increments. The updated deep EGS electricity generation resource potential estimate is 4,349 GWe. A comparison of the estimates from the previous and updated studies shows a net increase of 117 GWe in the 3-7 km depth range, due mainly to increases in the underlying temperature-at-depth estimates from the updated maps.
Estimation of m.w.e (meter water equivalent) depth of the salt mine of Slanic Prahova, Romania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitrica, B.; Margineanu, R.; Stoica, S.
2010-11-24
A new mobile detector was developed at IFIN-HH, Romania, for measuring muon flux at the surface and underground. The measurements were performed in the salt mines of Slanic Prahova, Romania. The muon flux was determined for two different galleries of the Slanic mine at different depths. To test the stability of the method, measurements of the muon flux at the surface at different altitudes were also performed. Based on the results, the depths of the two galleries were established at 610 and 790 m.w.e., respectively.
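Meter-water-equivalent depth expresses the overburden column density relative to water: h_mwe = (ρ_overburden / ρ_water) × physical depth for a uniform column. A minimal sketch, assuming a typical rock-salt density of 2.17 g/cm³ (an assumption; the paper determines the m.w.e. depths from the measured muon flux, not from this conversion):

```python
RHO_WATER = 1.0   # g/cm^3
RHO_SALT = 2.17   # g/cm^3, typical rock-salt density (assumed)

def mwe(depth_m, rho_overburden=RHO_SALT):
    """Meters water equivalent of a uniform overburden column."""
    return depth_m * rho_overburden / RHO_WATER

# physical depths that a uniform salt column would need to reach the
# reported 610 and 790 m.w.e.
d_gallery_1 = 610.0 / RHO_SALT   # ~281 m
d_gallery_2 = 790.0 / RHO_SALT   # ~364 m
```

The conversion is linear, so 100 m of salt corresponds to 217 m.w.e. under these assumptions.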
Terai, C. R.; Klein, S. A.; Zelinka, M. D.
2016-08-26
The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.
High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.
Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S
2018-03-05
A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
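The "standard cross-correlation approach" mentioned above can be sketched directly: the photon-count histogram is correlated with the instrument response function (IRF), the lag of the correlation peak gives the round-trip time of flight, and depth follows as c·Δt/2. The 50 ps bin width, Gaussian IRF, and 2 m target below are assumed illustrative values, not the paper's hardware parameters.

```python
import numpy as np

C = 3.0e8      # speed of light, m/s
BIN = 50e-12   # histogram bin width, 50 ps (assumed)

def depth_from_histogram(counts, irf):
    """Cross-correlate the photon histogram with the IRF; the lag of the
    correlation peak is the round-trip time of flight in bins."""
    corr = np.correlate(counts, irf, mode="full")
    lag = np.argmax(corr) - (len(irf) - 1)
    return 0.5 * C * lag * BIN

# synthetic return: Gaussian IRF delayed by the round trip to a 2 m target
t = np.arange(400)
irf = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)
delay_bins = int(round(2 * 2.0 / C / BIN))     # two-way travel time in bins
counts = np.roll(irf, delay_bins) + 0.01       # small uniform background
est = depth_from_histogram(counts, irf)
```

With 50 ps bins each lag step corresponds to 7.5 mm of depth, so the estimate lands within a few millimeters of the 2 m target, consistent with the millimeter-scale uncertainty reported for high signal-to-noise data.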
Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries (Open Access)
2014-09-05
S. Hussain Raza et al., "Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries" (arXiv:1510.07317v1 [cs.CV], 25 Oct 2015). The approach applies spatio-temporal segmentation using the method proposed by Grundmann et al. [4], with estimation and triangulation to estimate depth maps [17, 27] (see Figure 1).
Shahzad, Muhammad I; Nichol, Janet E; Wang, Jun; Campbell, James R; Chan, Pak W
2013-09-01
Hong Kong's surface visibility has decreased in recent years due to air pollution from rapid social and economic development in the region. In addition to deteriorating health standards, reduced visibility disrupts routine civil and public operations, most notably transportation and aviation. Regional estimates of visibility solved operationally using available ground- and satellite-based estimates of aerosol optical properties and vertical distribution may prove more effective than standard reliance on a few existing surface visibility monitoring stations. Previous studies have demonstrated that such satellite measurements correlate well with near-surface optical properties, even though these sensors lack range-resolved information and rely on indirect parameterizations to solve relevant parameters. By expanding such analysis to include vertically resolved aerosol profile information from an autonomous ground-based lidar instrument, this work develops six models for automated assessment of surface visibility. Regional visibility is estimated using co-incident ground-based lidar, sun photometer, visibility meter and MODerate-resolution Imaging Spectroradiometer (MODIS) aerosol optical depth data sets. Using a 355 nm extinction coefficient profile solved from the lidar, MODIS AOD (aerosol optical depth) is scaled down to the surface to generate a regional composite depiction of surface visibility. These results demonstrate the potential for applying passive satellite depictions of broad-scale aerosol optical properties together with a ground-based surface lidar and zenith-viewing sun photometer for improving quantitative assessments of visibility in a city such as Hong Kong.
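A common link from a surface extinction coefficient to visual range is the Koschmieder relation, V = -ln(ε)/σ, which with the conventional 2% contrast threshold gives V = 3.912/σ. This is a standard relation, shown here as a sketch; it is not necessarily the exact formulation used in the six models above.

```python
import math

def koschmieder_visibility(extinction_per_km, contrast_threshold=0.02):
    """Koschmieder relation: V = -ln(eps) / sigma, V in km for sigma in km^-1.
    The 2% threshold gives the familiar V = 3.912 / sigma."""
    return -math.log(contrast_threshold) / extinction_per_km

# doubling the surface extinction scaled down from the lidar/AOD profile
# halves the estimated visual range
v_clean = koschmieder_visibility(0.5)   # sigma = 0.5 km^-1
v_hazy = koschmieder_visibility(1.0)    # sigma = 1.0 km^-1
```

The inverse proportionality is why accurate near-surface extinction, rather than column AOD alone, matters for visibility estimation.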
A deep learning approach for pose estimation from volumetric OCT data.
Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander
2018-05-01
Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
Linking soil type and rainfall characteristics towards estimation of surface evaporative capacitance
NASA Astrophysics Data System (ADS)
Or, D.; Bickel, S.; Lehmann, P.
2017-12-01
Separation of evapotranspiration (ET) into evaporation (E) and transpiration (T) components for attribution of surface fluxes or for assessment of isotope fractionation in groundwater remains a challenge. Regional estimates of soil evaporation often rely on plant-based (Penman-Monteith) ET estimates where E is obtained as a residual or a fraction of potential evaporation. We propose a novel method for estimating E from soil-specific properties and regional rainfall characteristics, considering concurrent internal drainage that shelters soil water from evaporation. A soil-dependent evaporative characteristic length defines a depth below which soil water cannot be pulled to the surface by capillarity; this depth determines the maximal soil evaporative capacitance (SEC). The SEC is recharged by rainfall and subsequently emptied by competition between drainage and surface evaporation (considering canopy interception evaporation). We show that E is strongly dependent on rainfall characteristics (mean annual amount, number of storms) and soil textural type, with up to 50% of rainfall lost to evaporation in loamy soil. The SEC concept applied to different soil types and climatic regions offers direct bounds on regional surface evaporation independent of plant-based parameterization or energy balance calculations.
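The SEC idea can be caricatured as a bucket model: each storm recharges a surface store of fixed capacity, and between storms evaporation and first-order drainage compete for the stored water, with drained water sheltered from further evaporation. All rates and capacities below are illustrative placeholders, not the paper's parameterization.

```python
def surface_evaporation(storms_mm, sec_mm, e_rate, k_drain, dt=0.1, t_between=10.0):
    """Toy SEC bucket: storms fill a store of capacity sec_mm; between storms,
    evaporation (e_rate, mm/day) and drainage (first-order, k_drain 1/day)
    compete for the stored water. Returns total evaporated depth (mm)."""
    evaporated = 0.0
    store = 0.0
    for rain in storms_mm:
        store = min(store + rain, sec_mm)          # recharge; excess drains past SEC
        t = 0.0
        while t < t_between and store > 1e-9:
            e = min(e_rate * dt, store)            # evaporative withdrawal
            store -= e
            evaporated += e
            d = min(k_drain * store * dt, store)   # concurrent drainage loss
            store -= d
            t += dt
    return evaporated

storms = [10.0, 5.0, 20.0]   # mm per storm, hypothetical
e_frac = surface_evaporation(storms, sec_mm=8.0, e_rate=2.0, k_drain=0.3) / sum(storms)
```

With drainage switched off the evaporated fraction can only grow, which reproduces the paper's qualitative point that drainage shelters soil water from evaporation.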
Brosten, Troy R.; Day-Lewis, Frederick D.; Schultz, Gregory M.; Curtis, Gary P.; Lane, John W.
2011-01-01
Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of − 0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)–ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~ 0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter–receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation.
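The site-specific ln(EC)–ln(K) relation described above is an ordinary least-squares fit in log-log space, applied to co-located inverted EC and measured K values. The numbers below are synthetic, chosen only to show a clean negative log-log correlation like the one reported at Naturita.

```python
import math

def fit_loglog(ec, k):
    """OLS fit of ln(K) = a + b * ln(EC) from co-located EC and K samples."""
    x = [math.log(v) for v in ec]
    y = [math.log(v) for v in k]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def predict_k(ec_value, a, b):
    """Map an inverted EC value to a K estimate via the fitted relation."""
    return math.exp(a + b * math.log(ec_value))

# Synthetic data: K halves as EC doubles, so the log-log slope is exactly -1
ec_obs = [10.0, 20.0, 40.0, 80.0]
k_obs = [8.0, 4.0, 2.0, 1.0]
a, b = fit_loglog(ec_obs, k_obs)
```

Applying `predict_k` cell by cell over an inverted EC grid yields the kind of spatially detailed K map the study produced.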
An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application
NASA Astrophysics Data System (ADS)
Stachnik, J.; Rozhkov, M.; Baker, B.
2016-12-01
According to the Protocol to the CTBT, the International Data Centre is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of a specific event. Determination of a seismic event's source mechanism and depth is part of these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. We show preliminary results using the latter approach from an improved software design, applied on a moderately powered computer. In this development we tried to be compliant with the different modes of the CTBT monitoring regime: cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body and surface wave recordings, be fast enough for both on-demand studies and automatic processing, and properly incorporate observed waveforms and any a priori uncertainties as well as accurately estimate a posteriori uncertainties. The implemented HDF5-based Green's function pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation. Future additions will allow rapid use of Instaseis/AXISEM full-waveform synthetics alongside the pre-computed GF archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for the DPRK 2009, 2013 and 2016 events and for shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Moment tensors for the DPRK events show isotropic percentages greater than 50%. Depth estimates for the DPRK events range from 1.0-1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters provide robustness to the solution.
NASA Astrophysics Data System (ADS)
Yao, H. J.; Chang, P. Y.
2017-12-01
The Minzu Basin is located in central Taiwan, bounded by the Changhua fault in the west and the Chelungpu thrust fault in the east. The Chuoshui river flows through the basin and deposits thick unconsolidated gravel layers over the Pleistocene rocks and gravels. Thus, the area has great potential for groundwater development. However, there are not enough observation wells in the study area for a further investigation of groundwater characteristics. Therefore, we used the electrical resistivity imaging (ERI) method to estimate the depth of the groundwater table and the specific yield of the unconfined aquifer in dry and wet seasons. We deployed 13 survey lines with the Wenner-Schlumberger array in the study area in March and June of 2017. Based on the data from the ERI measurements and the nearby Xinming observation well, we converted resistivity into relative saturation with respect to the saturated background using Archie's law. We found that the depth distribution curve of the relative saturation exhibits a shape similar to the soil-water characteristic curve. Hence we used the van Genuchten model to characterize the depth of the water table, and calculated the specific yield as the difference between the saturated and residual water contents. According to our preliminary results, the depth of the groundwater table ranges from 8 m to 10.7 m and the specific yield is about 0.095 to 0.146 in March. The depth of the groundwater table in June ranges from about 7.6 m to 9.8 m and the estimated specific yield is about 0.1 to 0.157. The average groundwater level in the wet season (June) is about 0.6 m higher than in March. We are now collecting more time-lapse data and making direct comparisons with data from recently completed observation wells, in order to verify our estimates from the resistivity surveys.
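The chain of models in this abstract can be sketched in a few lines: Archie's law turns a resistivity profile into relative saturation, a van Genuchten curve is fitted to that profile, and the best-fitting water-table depth is read off. The parameter values (alpha, n, residual saturation) below are generic textbook-style placeholders, not the values calibrated at the Xinming well.

```python
def relative_saturation(resistivity, rho_saturated, n=2.0):
    """Archie's law: S = (rho_sat / rho)^(1/n)."""
    return (rho_saturated / resistivity) ** (1.0 / n)

def van_genuchten_saturation(depth_to_table, depth, alpha=0.1, n_vg=2.0, s_res=0.1):
    """Saturation at a given depth from the van Genuchten retention model,
    using the suction head h = depth_to_table - depth above the water table.
    alpha (1/m), n_vg and s_res are illustrative parameters."""
    h = max(depth_to_table - depth, 0.0)
    m = 1.0 - 1.0 / n_vg
    se = (1.0 + (alpha * h) ** n_vg) ** (-m)
    return s_res + (1.0 - s_res) * se

def fit_water_table(depths, saturations, candidates):
    """Grid-search the water-table depth whose van Genuchten curve best
    matches the (Archie-derived) saturation profile in the least-squares sense."""
    def sse(wt):
        return sum((van_genuchten_saturation(wt, z) - s) ** 2
                   for z, s in zip(depths, saturations))
    return min(candidates, key=sse)

depths = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
profile = [van_genuchten_saturation(9.0, z) for z in depths]   # synthetic, table at 9 m
wt_est = fit_water_table(depths, profile, [w * 0.5 for w in range(2, 25)])
```

The specific yield would then follow as the difference between saturated and residual water contents, as in the abstract.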
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available, with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events are simultaneously equalized by setting the difference of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions from the source to a receiver are the same for any paired events. Within the frequency limit of the seismic data this is a reasonable assumption, as concluded from a comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.
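The equalization idea rests on the commutativity of convolution: if both seismograms share the same Green's function, then seismogram1 * stf2 equals seismogram2 * stf1 and the difference vanishes. The sketch below demonstrates that identity with synthetic traces; the short arrays stand in for the Mueller-Murphy source time functions and are purely hypothetical.

```python
def convolve(a, b):
    """Discrete linear convolution (pure Python)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def equalization_residual(seis1, stf1, seis2, stf2):
    """Convolve event 1's seismogram with event 2's source time function and
    vice versa; with a shared Green's function the residual is zero."""
    p1 = convolve(seis1, stf2)
    p2 = convolve(seis2, stf1)
    return [x - y for x, y in zip(p1, p2)]

# Synthetic check: both seismograms use the same Green's function
green = [0.0, 1.0, 0.5, -0.3, 0.1]
stf_a = [1.0, 0.6, 0.2]          # placeholder source time functions
stf_b = [2.0, 0.4]
seis_a = convolve(green, stf_a)
seis_b = convolve(green, stf_b)
residual = equalization_residual(seis_a, stf_a, seis_b, stf_b)
```

In the real method the residual is driven to zero by adjusting the MMDSTF parameters (and hence yield) of the unknown event.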
NASA Astrophysics Data System (ADS)
Muralidhara, .; Vasa, Nilesh J.; Singaperumal, M.
2010-02-01
A micro-electro-discharge machine (Micro EDM) was developed incorporating a piezo-actuated direct-drive tool feed mechanism for micromachining of silicon using a copper tool. Both tool and workpiece material are removed during the Micro EDM process, which demands a tool wear compensation technique to reach the specified depth of machining on the workpiece. An in-situ axial tool wear and machining depth measurement system was developed to investigate axial wear ratio variations with machining depth. Stepwise micromachining experiments on a silicon wafer were performed to investigate the variations in silicon removal and tool wear depths with increasing tool feed. Based on these experimental data, a tool wear compensation method is proposed to reach the desired depth of micromachining on silicon using a copper tool. Micromachining experiments were performed with the proposed tool wear compensation method, and a maximum workpiece machining depth variation of 6% was observed.
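The core of any such compensation scheme is the bookkeeping that the total tool feed must cover both the machined workpiece depth and the length lost to tool wear. A minimal sketch, assuming a constant axial wear ratio (the paper measures how this ratio actually varies with depth):

```python
def compensated_feed(target_depth, wear_ratio):
    """Total tool feed needed to machine target_depth on the workpiece when
    the tool wears by wear_ratio units of length per unit machined depth."""
    return target_depth * (1.0 + wear_ratio)

def machine_stepwise(target_depth, wear_ratio, step_feed):
    """Feed the tool toward the compensated target in discrete steps and
    return the workpiece depth actually reached. Numbers are illustrative,
    not the paper's measured wear ratios."""
    feed_needed = compensated_feed(target_depth, wear_ratio)
    fed = 0.0
    while fed < feed_needed:
        fed += min(step_feed, feed_needed - fed)
    return fed / (1.0 + wear_ratio)   # feed splits into machining depth + wear

depth = machine_stepwise(target_depth=100.0, wear_ratio=0.25, step_feed=5.0)
```

With in-situ measurement, `wear_ratio` would be updated between steps rather than held constant.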
NASA Astrophysics Data System (ADS)
Jones, M.; Longenecker, H. E., III
2017-12-01
The 2017 hurricane season brought the unprecedented landfall of three Category 4 hurricanes (Harvey, Irma and Maria). FEMA is responsible for coordinating the federal response and recovery efforts for large disasters such as these. FEMA depends on timely and accurate depth grids to estimate hazard exposure, model damage assessments, plan flight paths for imagery acquisition, and prioritize response efforts. In order to produce riverine or coastal depth grids based on observed flooding, the methodology requires peak crest water levels at stream gauges, tide gauges, high water marks, and best-available elevation data. Because peak crest data isn't available until the apex of a flooding event and high water marks may take up to several weeks for field teams to collect for a large-scale flooding event, final observed depth grids are not available to FEMA until several days after a flood has begun to subside. Within the last decade NOAA's National Weather Service (NWS) has implemented the Advanced Hydrologic Prediction Service (AHPS), a web-based suite of accurate forecast products that provide hydrograph forecasts at over 3,500 stream gauge locations across the United States. These forecasts have been newly implemented into an automated depth grid script tool, using predicted instead of observed water levels, allowing FEMA access to flood hazard information up to 3 days prior to a flooding event. Water depths are calculated from the AHPS predicted flood stages and are interpolated at 100m spacing along NHD hydrolines within the basin of interest. A water surface elevation raster is generated from these water depths using an Inverse Distance Weighted interpolation. Then, elevation (USGS NED 30m) is subtracted from the water surface elevation raster so that the remaining values represent the depth of predicted flooding above the ground surface. 
This automated process requires minimal user input and produced forecasted depth grids that were comparable to post-event observed depth grids and remote sensing-derived flood extents for the 2017 hurricane season. These newly available forecasted models were used for pre-event response planning and early estimated hazard exposure counts, allowing FEMA to plan for and stand up operations several days sooner than previously possible.
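The depth-grid recipe above reduces to: interpolate a water-surface elevation (WSE) from point stages via Inverse Distance Weighting, subtract the ground elevation, and clip dry cells at zero. The gauge coordinates, stages and DEM values below are invented for illustration; the real workflow samples AHPS forecasts along NHD hydrolines and uses the USGS NED.

```python
def idw_surface(points, grid, power=2.0):
    """IDW interpolation of water-surface elevations from (x, y, wse) points
    onto (x, y) grid cells; a coincident point is used exactly."""
    surface = []
    for gx, gy in grid:
        num = den = 0.0
        exact = None
        for px, py, wse in points:
            d2 = (gx - px) ** 2 + (gy - py) ** 2
            if d2 == 0.0:
                exact = wse
                break
            w = d2 ** (-power / 2.0)
            num += w * wse
            den += w
        surface.append(exact if exact is not None else num / den)
    return surface

def depth_grid(points, grid, dem):
    """Flood depth = interpolated WSE minus ground elevation, clipped at zero."""
    wse = idw_surface(points, grid)
    return [max(w - z, 0.0) for w, z in zip(wse, dem)]

gauges = [(0.0, 0.0, 10.0), (4.0, 0.0, 12.0)]   # hypothetical stage-derived WSEs
cells = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
ground = [9.0, 11.5, 9.5]                        # hypothetical DEM elevations
depths = depth_grid(gauges, cells, ground)
```

The middle cell sits above the interpolated water surface and correctly comes out dry.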
NASA Astrophysics Data System (ADS)
Herrero, I.; Ezcurra, A.; Areitio, J.; Diaz-Argandoña, J.; Ibarra-Berastegi, G.; Saenz, J.
2013-11-01
Storms developed under local instability conditions are studied in the Spanish Basque region with the aim of establishing precipitation-lightning relationships. Such situations may, in some cases, produce flash floods. The data used correspond to daily rain depth (mm) and the number of CG flashes in the area. Rain and lightning are found to be weakly correlated on a daily basis, a fact that seems related to the existence of opposite gradients in their geographical distributions. Rain anomalies, defined as the difference between the observed rain depth and that estimated from CG flashes, are analysed by the PCA method. Results show a first EOF explaining 50% of the variability that linearly relates the rain anomalies observed each day and confirms their spatial structure. Based on these results, a multilinear expression has been developed to estimate the rain accumulated daily in the network from the CG flashes registered in the area. Moreover, accumulated and maximum rain values are found to be strongly correlated, making the multilinear expression a useful tool for estimating maximum precipitation during these kinds of storms.
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts 1, conditions where the negative effect of water depths were readily apparent and fitted better with the quantile count model.
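The building block behind quantile models like the one above is that the tau-th quantile minimizes the pinball (check) loss; for count data the minimizer can be searched over the observed values themselves. This sketch shows only that building block, not the paper's full quantile count regression with covariates; the counts are hypothetical.

```python
def pinball_loss(q, values, tau):
    """Pinball (check) loss of candidate quantile q at level tau."""
    return sum((tau * (v - q)) if v >= q else ((1 - tau) * (q - v)) for v in values)

def sample_quantile(values, tau):
    """The tau-th sample quantile minimizes the pinball loss; for counts the
    minimizer can be found among the observed values."""
    return min(sorted(values), key=lambda q: pinball_loss(q, values, tau))

counts = [0, 0, 0, 1, 1, 2, 3, 6]     # hypothetical sparrow counts at one water depth
q90 = sample_quantile(counts, 0.9)    # upper decile of counts
```

In the full model, q would be a (log-linear) function of water depth and previous-year abundance, fitted by minimizing the same loss.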
Quantification of effective plant rooting depth: advancing global hydrological modelling
NASA Astrophysics Data System (ADS)
Yang, Y.; Donohue, R. J.; McVicar, T.
2017-12-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
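The hydrological model named above is built around Choudhury's analytical form of the Budyko curve, which maps precipitation P and potential evapotranspiration E0 to actual evapotranspiration E. In the BCP chain the curve parameter n is tied to rooting depth Zr; here n is simply treated as a free parameter, with n = 1.8 and the climate values chosen for illustration.

```python
def choudhury_evaporation(p, e0, n=1.8):
    """Choudhury's form of the Budyko curve:
    E = P * E0 / (P^n + E0^n)^(1/n), bounded above by both P and E0."""
    return p * e0 / (p ** n + e0 ** n) ** (1.0 / n)

# Budyko limits: E approaches P in water-limited (dry) climates
# and approaches E0 in energy-limited (wet) climates.
e_dry = choudhury_evaporation(p=200.0, e0=2000.0)
e_wet = choudhury_evaporation(p=2000.0, e0=500.0)
```

Raising n (deeper effective roots) pushes E closer to its supply/demand limit, which is how a Zr map feeds through to the evapotranspiration estimates tested in the paper.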
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Cascone, L.
2012-01-01
We use a multiscale approach as a semi-automated tool for interpreting potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and secondly the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimates on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields and to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of upward continuation, and so it is a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate the depth map from two distanced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate disparity raw matching cost with low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, after which the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
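A generic trinary census transform, the family the TCC transform belongs to, codes each neighbour as darker, similar, or brighter than the centre pixel (with a tolerance t), and the raw matching cost is a Hamming-style distance between codes. This sketch shows the generic transform, not necessarily the paper's exact cross-shaped, per-color-channel layout; the patches and threshold are illustrative.

```python
def trinary_census(img, x, y, radius=1, t=2):
    """Trinary census descriptor for pixel (x, y): each neighbour in the
    window is coded 0 (darker by more than t), 1 (similar within t) or
    2 (brighter by more than t) relative to the centre."""
    c = img[y][x]
    code = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            v = img[y + dy][x + dx]
            code.append(0 if v < c - t else (2 if v > c + t else 1))
    return code

def census_cost(code_l, code_r):
    """Raw matching cost: count of positions where the trinary codes differ."""
    return sum(1 for a, b in zip(code_l, code_r) if a != b)

left = [[10, 12, 10], [30, 20, 11], [10, 40, 10]]
right = [[10, 12, 10], [30, 20, 11], [10, 40, 10]]   # identical patch -> zero cost
cost = census_cost(trinary_census(left, 1, 1), trinary_census(right, 1, 1))
```

In the full pipeline this per-pixel cost is aggregated (TPCA) before the winner-take-all disparity selection.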
NASA Astrophysics Data System (ADS)
Kotan, Muhammed; Öz, Cemil
2017-12-01
An inspection system is proposed that uses estimated three-dimensional (3-D) surface characteristics to detect and classify faults, increasing quality control on frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, based on the minimization of the selected basis function. First, a specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. Then, estimated gradients were used to obtain depth and surface profiles. The depth information was used to determine and classify surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicated that SFS approaches are useful and that various types of defects can easily be detected in a short period of time.
NASA Astrophysics Data System (ADS)
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Vionnet, Vincent; Guyomarc'h, Gilbert; Heiser, Micha; Nishimura, Kouichi
2015-04-01
Wind and the associated snow drift are dominating factors determining snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns. It does not, however, provide a quantitative estimate of changes in snow depths. The objective of our research was to introduce a new parameter to quantify changes in snow depths in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and is particularly suited for our study due to its consistently bi-modal wind directions. Our work focused on two pronounced, approximately 10 m high terrain breaks, and we worked with 1 m resolution digital snow surface models (DSM). The DSM and measured changes in snow depths were obtained with high-accuracy terrestrial laser scan (TLS) measurements. First, we calculated the terrain-based parameter Sx on a digital snow surface model and correlated Sx with measured changes in snow depths (ΔSH). Results showed that ΔSH can be approximated by ΔSH_estimated = α · Sx, where α is a newly introduced parameter. The parameter α has been shown to be linked to the amount of snow deposited, as influenced by blowing snow flux. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per unit time and area. Hence, the SPC provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter α. To simulate the development of the snow surface as a function of Sx, SPC flux and time, we apply a simple cellular automaton: raster cells that develop through discrete time steps according to a set of rules based on the states of neighbouring cells. Our model assumes snow transport as a function of Sx gradients between neighbouring cells, with cells evolving based on difference quotients between them. Our analyses and results are steps towards using the terrain-based parameter Sx, coupled with SPC data, to quantitatively estimate changes in snow depths at high raster resolutions of 1 m.
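The Sx parameter itself is straightforward to compute: for each cell it is the maximum slope angle to any cell within a search distance along the upwind direction, positive where terrain rises upwind (sheltered) and negative where it falls (exposed). A minimal sketch on a 1-D transect with a 10 m terrain break, like the ones studied above; the search distance, grid and α value are illustrative.

```python
import math

def sx(elev, cell_size, x0, y0, wind_dx, wind_dy, dmax):
    """Terrain shelter/exposure parameter Sx: maximum upward slope angle
    (degrees) from cell (x0, y0) to any cell within dmax along the integer
    upwind grid direction (wind_dx, wind_dy)."""
    z0 = elev[y0][x0]
    best = None
    steps = int(dmax / cell_size)
    for k in range(1, steps + 1):
        x, y = x0 + k * wind_dx, y0 + k * wind_dy
        if not (0 <= y < len(elev) and 0 <= x < len(elev[0])):
            break
        dist = k * cell_size
        angle = math.degrees(math.atan2(elev[y][x] - z0, dist))
        best = angle if best is None else max(best, angle)
    return best

# A 10 m high terrain break two cells upwind of the cell of interest
dem = [[0.0, 0.0, 10.0, 10.0, 10.0]]
s = sx(dem, cell_size=1.0, x0=0, y0=0, wind_dx=1, wind_dy=0, dmax=4.0)
delta_sh = 0.05 * s   # ΔSH_estimated = α · Sx, with an illustrative α
```

Linking α to SPC snow-flux totals, as the paper proposes, would make `delta_sh` event-dependent rather than a fixed scaling.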
microclim: Global estimates of hourly microclimate based on long-term monthly climate averages
Kearney, Michael R; Isaac, Andrew P; Porter, Warren P
2014-01-01
The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764
Joint optic disc and cup boundary extraction from monocular fundus images.
Chakravarty, Arunava; Sivaswamy, Jayanthi
2017-08-01
Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink-based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map pairs (the depth maps derived from Optical Coherence Tomography). The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
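The per-pixel core of multi-spectral photometric stereo is a small linear system: with one effective light direction per colour channel and a Lambertian surface, the channel intensity is I_c = albedo · (L_c · n), so three channels give a 3x3 system for the normal n. The sketch below assumes the albedo is known and the channels are already untangled, which is precisely the part the paper's CNN-predicted initial normal is meant to help with; light directions and the test normal are invented.

```python
def solve3(a, b):
    """Solve the 3x3 linear system a @ n = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(a)
    return [det([[a[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def normal_from_rgb(light_dirs, rgb, albedo=1.0):
    """Per-pixel Lambertian photometric stereo: recover the unit surface
    normal from one intensity per colour channel, one light per channel."""
    n = solve3(light_dirs, [v / albedo for v in rgb])
    norm = sum(v * v for v in n) ** 0.5
    return [v / norm for v in n]

lights = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]  # R, G, B lights
true_n = [0.0, 0.0, 1.0]
rgb = [sum(l[i] * true_n[i] for i in range(3)) for l in lights]  # rendered pixel
n_est = normal_from_rgb(lights, rgb)
```

Iterating this per-pixel solve against a smoothed depth estimate is the optimization loop the abstract's framework refines.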
NASA Astrophysics Data System (ADS)
Gorodesky, Niv; Ozana, Nisan; Berg, Yuval; Dolev, Omer; Danan, Yossef; Kotler, Zvi; Zalevsky, Zeev
2016-09-01
We present the first steps toward a device suitable for characterization of complex 3D micro-structures. The method is based on an optical approach allowing extraction and separation of high-frequency ultrasonic waves induced in the analyzed samples. Rapid, non-destructive characterization of 3D micro-structures is limited in terms of the geometrical features and optical properties of the sample. We suggest a method based on temporal tracking of secondary speckle patterns generated when illuminating a sample with a laser probe while applying a known periodic vibration using an ultrasound transmitter. In this paper we investigated laser-drilled through-glass vias. The large aspect ratios of the vias pose a challenge for traditional microscopy techniques in analyzing their depth and taper profiles. The correlation of the vibration amplitudes to the via depths is experimentally demonstrated.
Distribution and abundance of American eels in the White Oak River estuary, North Carolina
Hightower, J.E.; Nesnow, C.
2006-01-01
Apparent widespread declines in abundance of Anguilla rostrata (American eel) have reinforced the need for information regarding its life history and status. We used commercial eel pots and crab (peeler) pots to examine the distribution, condition, and abundance of American eels within the White Oak River estuary, NC, during summers of 2002-2003. Catch of American eels per overnight set was 0.35 (SE = 0.045) in 2002 and 0.49 (SE = 0.044) in 2003. There was not a significant linear relationship between catch per set and depth in 2002 (P = 0.31, depth range 0.9-3.4 m) or 2003 (P = 0.18, depth range 0.6-3.4 m). American eels from the White Oak River were in good condition, based on the slope of a length-weight relationship (3.41) compared to the median slope (3.15) from other systems. Estimates of population density from grid sampling in 2003 (300 mm and larger: 4.0-13.8 per ha) were similar to estimates for the Hudson River estuary, but substantially less than estimates from other (smaller) systems including tidal creeks within estuaries. Density estimates from coastal waters can be used with harvest records to examine whether overfishing has contributed to the recent apparent declines in American eel abundance.
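The condition assessment above rests on the allometric length-weight slope. As a minimal sketch (with hypothetical measurements, not the study's data), the exponent b in W = aL^b can be fit by least squares in log-log space; a slope above 3 indicates relatively heavier, better-conditioned fish:

```python
import math

def length_weight_slope(lengths_mm, weights_g):
    """Slope b of the allometric relation W = a * L^b, fit as a
    least-squares line in log-log space: log W = log a + b * log L."""
    x = [math.log(l) for l in lengths_mm]
    y = [math.log(w) for w in weights_g]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Hypothetical eel data following an exact power law W = 1e-6 * L^3.4;
# the fit recovers b = 3.4 (isometric growth would give b = 3).
lengths = [300, 350, 400, 450, 500]
weights = [1e-6 * l ** 3.4 for l in lengths]
slope = length_weight_slope(lengths, weights)
```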
Estimation of wave phase speed and nearshore bathymetry from video imagery
Stockdon, H.F.; Holman, R.A.
2000-01-01
A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from the dispersion relationship of linear wave theory. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≤ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
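The final inversion step, inferring depth from the measured cross-shore wavenumber via the linear dispersion relation, can be sketched as follows (the wave period and wavelength below are illustrative values, not SandyDuck data):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear-theory dispersion relation omega^2 = g*k*tanh(k*h)
    for water depth h, given angular frequency omega (rad/s) and the
    cross-shore wavenumber k (rad/m) measured from the video phase gradient."""
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        raise ValueError("deep-water limit: this wave does not feel the bottom")
    return math.atanh(ratio) / k

# Hypothetical incident-band wave: 10 s period, 60 m wavelength.
omega = 2 * math.pi / 10.0
k = 2 * math.pi / 60.0
h = depth_from_dispersion(omega, k)  # nearshore depth of a few metres
```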
Stereoscopic perception of real depths at large distances.
Palmisano, Stephen; Gillam, Barbara; Govan, Donovan G; Allison, Robert S; Harris, Julie M
2010-06-01
There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceived depth magnitude at distances where stereopsis is frequently reported, without supporting evidence, to be non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.
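For context on why disparity remains usable at these ranges, the relative disparity between two targets can be computed with the standard small-angle approximation; the 0.065 m interocular distance and the 40 m/248 m geometry below are illustrative assumptions, not the study's stimuli:

```python
import math

def binocular_disparity_arcsec(D, d, ipd=0.065):
    """Relative binocular disparity (arcsec) between targets at distances
    D and D + d metres, using the small-angle approximation
    delta ≈ ipd * d / (D * (D + d)), valid for D >> ipd."""
    delta_rad = ipd * d / (D * (D + d))
    return math.degrees(delta_rad) * 3600

# Farthest condition reported: 40 m observation distance, 248 m depth.
# The disparity is hundreds of arcsec, far above typical stereoacuity
# thresholds (~10 arcsec), so the limiting factor at such distances is
# disparity *scaling*, not disparity *detection*.
disparity = binocular_disparity_arcsec(40.0, 248.0)
```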
Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike
2016-03-01
Radium (²²⁶Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of ²²⁶Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe required by traditional labour-intensive sampling techniques such as soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as ²²⁶Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, Principal Component Analysis and Machine Learning based algorithms to derive depth and activity estimates for ²²⁶Ra contamination. The approach was applied to spectra taken using two gamma-ray detectors (lanthanum bromide and sodium iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of neural networks and lanthanum bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland. There the method identified significantly higher activity (>3 Bq g⁻¹) occurring at depth (>0.4 m), which conventional gross-counting algorithms failed to identify. It was concluded that the method could easily be employed to identify areas of high activity potentially occurring at depth, prior to intrusive investigation using conventional sampling techniques. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor, in order to improve its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. The experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion deployed in our previous framework.
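The weighted decision fusion can be sketched as a convex combination of the two classifiers' class probabilities. The weight and probability vectors below are hypothetical; in the paper the weight is driven by the skeleton-based view estimate:

```python
def weighted_fusion(p_depth, p_inertial, w):
    """Convex combination of per-class probabilities from the depth-based
    and inertial-based classifiers; w = 0.5 recovers equally weighted
    fusion, and w is raised when the depth modality is more trusted."""
    fused = [w * a + (1.0 - w) * b for a, b in zip(p_depth, p_inertial)]
    s = sum(fused)
    return [f / s for f in fused]  # renormalize to a probability vector

# Hypothetical 3-class example where the depth classifier is trusted more.
probs = weighted_fusion([0.6, 0.3, 0.1], [0.2, 0.5, 0.3], w=0.7)
```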
Spatial distribution of eclogite in the Slave cratonic mantle: The role of subduction
NASA Astrophysics Data System (ADS)
Kopylova, Maya G.; Beausoleil, Yvette; Goncharov, Alexey; Burgess, Jennifer; Strand, Pamela
2016-03-01
We reconstructed the spatial distribution of eclogites in the cratonic mantle based on thermobarometry for 240 xenoliths in 4 kimberlite pipes from different parts of the Slave craton (Canada). The accuracy of the depth estimates is ensured by the use of a recently calibrated thermometer, projection of temperatures onto well-constrained local peridotitic geotherms, petrological screening for unrealistic temperature estimates, and the internal consistency of all data. The depth estimates are based on new data on the mineral chemistry and petrography of 148 eclogite xenoliths from the Jericho and Muskox kimberlites of the northern Slave craton and previously reported analyses of 95 eclogites from the Diavik and Ekati kimberlites (Central Slave). The majority of Northern Slave eclogites of crustal, subduction origin occur at 110-170 km, shallower than the majority of the Central Slave crustal eclogites (120-210 km). The identical geochronological history of these eclogite populations and the absence of steep suture boundaries between the central and northern Slave craton suggest the lateral continuity of a mantle layer relatively rich in eclogites. We explain the distribution of eclogites by partial preservation of an imbricated and plastically dispersed oceanic slab formed by easterly dipping Proterozoic subduction. The depths of eclogite localization do not correlate with geophysically mapped discontinuities. The base of the depleted lithosphere of the Slave craton constrained by thermobarometry of peridotite xenoliths coincides with the base of the thickened lithospheric slab, which supports the contribution of recycled oceanic lithosphere to the formation of the cratonic root. Its architecture may have been protected by circum-cratonic subduction and shielding of the shallow Archean lithosphere from destructive asthenospheric metasomatism.
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models, such as cylinders and spheres, is proposed in this paper. The method is based on both the deconvolution technique and the simplex algorithm for linear optimization to estimate the model parameters effectively, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or to the top of a buried object (vertical cylinder), and the amplitude coefficient, from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different levels of white Gaussian random noise to demonstrate its capability and reliability. The results show that the parameter values estimated by the proposed method are close to the assumed true values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden, with comparable and acceptable agreement between the results derived by this method and those derived from the real field data.
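The forward model and a crude parameter recovery can be sketched as follows. The grid search below is a simplified stand-in for the paper's deconvolution-plus-simplex scheme, using the standard shape-factor form of the anomaly (q = 1.5 for a sphere, 1.0 for a horizontal cylinder); the amplitude is solved linearly at each trial depth:

```python
def gravity_anomaly(x, A, z, q):
    """Residual gravity anomaly of a simple buried body at horizontal
    offset x: amplitude coefficient A, depth z, shape factor q
    (1.5 for a sphere, 1.0 for a horizontal cylinder)."""
    return A / (x ** 2 + z ** 2) ** q

def estimate_depth(xs, gs, q, z_grid):
    """Grid search over trial depths; for each depth the amplitude has a
    closed-form least-squares solution, and the depth with the smallest
    misfit is returned. Returns (depth, amplitude)."""
    best = None
    for z in z_grid:
        basis = [1.0 / (x ** 2 + z ** 2) ** q for x in xs]
        A = sum(b * g for b, g in zip(basis, gs)) / sum(b * b for b in basis)
        misfit = sum((A * b - g) ** 2 for b, g in zip(basis, gs))
        if best is None or misfit < best[0]:
            best = (misfit, z, A)
    return best[1], best[2]

# Noise-free synthetic sphere: the true depth and amplitude are recovered.
xs = list(range(-10, 11))
gs = [gravity_anomaly(x, 100.0, 5.0, 1.5) for x in xs]
z_est, A_est = estimate_depth(xs, gs, 1.5, z_grid=[3.0, 4.0, 5.0, 6.0, 7.0])
```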
Wave-formed structures and paleoenvironmental reconstruction
Clifton, H.E.; Dingler, J.R.
1984-01-01
Wave-formed sedimentary structures can be powerful interpretive tools because they reflect not only the velocity and direction of the oscillatory currents, but also the length of the horizontal component of orbital motion and the presence of velocity asymmetry within the flow. Several of these aspects can be related through standard wave theories to combinations of wave dimensions and water depth that have definable natural limits. For a particular grain size, threshold of particle movement and that of conversion from a rippled to flat bed indicate flow-velocity limits. The ratio of ripple spacing to grain size provides an estimate of the length of the near-bottom orbital motion. The degree of velocity asymmetry is related to the asymmetry of the bedforms, though it presently cannot be estimated with confidence. A plot of water depth versus wave height (h-H diagram) provides a convenient approach for showing the combination of wave parameters and water depths capable of generating any particular structure in sand of a given grain size. Natural limits on wave height and inferences or assumptions regarding either water depth or wave period based on geologic evidence allow refinement of the paleoenvironmental reconstruction. The assumptions and the degree of approximation involved in the different techniques impose significant constraints. Inferences based on wave-formed structures are most reliable when they are drawn in the context of other evidence such as the association of sedimentary features or progradational sequences. © 1984.
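As an illustrative sketch of one such inference, the near-bottom orbital diameter under linear wave theory can be computed for assumed wave conditions, and the spacing of orbital ripples is then commonly approximated as about 0.65 times that diameter; the wave height, period, and depth below are made-up example values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def orbital_diameter(H, T, h, iters=200):
    """Near-bottom horizontal orbital diameter d0 = H / sinh(k*h) under
    linear wave theory; the wavenumber k is found by fixed-point
    iteration on the dispersion relation omega^2 = g*k*tanh(k*h)."""
    omega = 2.0 * math.pi / T
    k = omega ** 2 / G  # deep-water starting guess
    for _ in range(iters):
        k = omega ** 2 / (G * math.tanh(k * h))
    return H / math.sinh(k * h)

# Example: 1 m high, 8 s waves in 5 m of water.
d0 = orbital_diameter(1.0, 8.0, 5.0)
ripple_spacing = 0.65 * d0  # common orbital-ripple approximation
```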
Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi
2016-07-01
This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue system as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs, whose stiffness coefficient is found using a pattern search algorithm that only requires the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error of [Formula: see text]2 mm for images acquired at 35% or more of the maximum insertion depth, decreasing to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along in the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids the complications of tissue deformation caused by motion of the ultrasound probe.
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.
2018-05-01
Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.
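To make the contrast concrete, a classical age-depth model in its simplest form is plain interpolation between dated levels, with no chronological-ordering constraint and no realistic uncertainty propagation, which is the behaviour the study critiques; the dates below are made up, not from the paper's simulated cores:

```python
def classical_age_depth(dated_depths, dated_ages, query_depths):
    """Classical age-depth model: piecewise-linear interpolation between
    dated levels. Assumes dated_depths is sorted and every query depth
    lies within the dated range; no error model is attached."""
    ages = []
    for d in query_depths:
        for (d0, a0), (d1, a1) in zip(zip(dated_depths, dated_ages),
                                      zip(dated_depths[1:], dated_ages[1:])):
            if d0 <= d <= d1:
                ages.append(a0 + (a1 - a0) * (d - d0) / (d1 - d0))
                break
    return ages

# Hypothetical core: dates at 0, 100 and 200 cm; ages in cal yr BP.
ages = classical_age_depth([0, 100, 200], [0, 1000, 2500], [150, 50])
```

A Bayesian model (e.g. with monotonicity enforced and dates treated as distributions) would instead return an age ensemble per depth, whose spread keeps shrinking as dating density grows.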
The Use of an Intra-Articular Depth Guide in the Measurement of Partial Thickness Rotator Cuff Tears
Carroll, Michael J.; More, Kristie D.; Sohmer, Stephen; Nelson, Atiba A.; Sciore, Paul; Boorman, Richard; Hollinshead, Robert; Lo, Ian K. Y.
2013-01-01
Purpose. The purpose of this study was to compare the accuracy of the conventional method for determining the percentage of partial thickness rotator cuff tears to a method using an intra-articular depth guide. The clinical utility of the intra-articular depth guide was also examined. Methods. Partial rotator cuff tears were created in cadaveric shoulders. Exposed footprint, total tendon thickness, and percentage of tendon thickness torn were determined using both techniques. The results from the conventional and intra-articular depth guide methods were correlated with the true anatomic measurements. Thirty-two patients were evaluated in the clinical study. Results. Estimates of total tendon thickness (r = 0.41, P = 0.31) or percentage of thickness tears (r = 0.67, P = 0.07) using the conventional method did not correlate well with true tendon thickness. Using the intra-articular depth guide, estimates of exposed footprint (r = 0.92, P = 0.001), total tendon thickness (r = 0.96, P = 0.0001), and percentage of tendon thickness torn (r = 0.88, P = 0.004) correlated with true anatomic measurements. Seven of 32 patients had their treatment plan altered based on the measurements made by the intra-articular depth guide. Conclusions. The intra-articular depth guide appeared to better correlate with true anatomic measurements. It may be useful during the evaluation and development of treatment plans for partial thickness articular surface rotator cuff tears. PMID:23533789
Wave Period and Coastal Bathymetry Estimations from Satellite Images
NASA Astrophysics Data System (ADS)
Danilo, Celine; Melgani, Farid
2016-08-01
We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can in principle be applied to SAR or optical images. In order to demonstrate its feasibility we apply our method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys are available to compare our results with in-situ measurements. The results on Sentinel-1A images show that half of the images were unsuitable for applying the method (no swell, or a wavelength too small to be captured by the SAR). On the other half, 78% of the estimated wave periods are in accordance with the buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth on a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
Quantifying the accuracy of snow water equivalent estimates using broadband radar signal phase
NASA Astrophysics Data System (ADS)
Deeb, E. J.; Marshall, H. P.; Lamie, N. J.; Arcone, S. A.
2014-12-01
Radar wave velocity in dry snow depends solely on density. Consequently, ground-based pulsed systems can be used to accurately measure snow depth and snow water equivalent (SWE) using signal travel-time, along with manual depth-probing for signal velocity calibration. Travel-time measurements require a large bandwidth pulse not possible in airborne/space-borne platforms. In addition, radar backscatter from snow cover is sensitive to grain size and to a lesser extent roughness of layers at current/proposed satellite-based frequencies (~ 8 - 18 GHz), complicating inversion for SWE. Therefore, accurate retrievals of SWE still require local calibration due to this sensitivity to microstructure and layering. Conversely, satellite radar interferometry, which senses the difference in signal phase between acquisitions, has shown a potential relationship with SWE at lower frequencies (~ 1 - 5 GHz) because the phase of the snow-refracted signal is sensitive to depth and dielectric properties of the snowpack, as opposed to its microstructure and stratigraphy. We have constructed a lab-based, experimental test bed to quantify the change in radar phase over a wide range of frequencies for varying depths of dry quartz sand, a material dielectrically similar to dry snow. We use a laboratory grade Vector Network Analyzer (0.01 - 25.6 GHz) and a pair of antennae mounted on a trolley over the test bed to measure amplitude and phase repeatedly/accurately at many frequencies. Using ground-based LiDAR instrumentation, we collect a coordinated high-resolution digital surface model (DSM) of the test bed and subsequent depth surfaces with which to compare the radar record of changes in phase. 
Our plans to transition this methodology to a field deployment during winter 2014-2015 using precision pan/tilt instrumentation will also be presented, as well as applications to airborne and space-borne platforms toward the estimation of SWE at high spatial resolution (on the order of meters) over large regions (> 100 square kilometers).
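The sensitivity being measured can be approximated to first order: the two-way phase accrued in a dry, low-loss layer scales with frequency, added depth, and the square root of relative permittivity. The values below are illustrative (dry snow and dry quartz sand both have relative permittivity of roughly 1.5-2), and the sketch ignores refraction geometry and interface phase shifts that a real retrieval must handle:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def phase_change_deg(depth_change_m, freq_hz, eps_r):
    """Two-way propagation phase change (degrees) produced by adding
    depth_change_m of a dry, low-loss medium of relative permittivity
    eps_r, at frequency freq_hz. First-order, nadir-path approximation."""
    extra_path = 2.0 * depth_change_m * math.sqrt(eps_r)  # two-way path
    return 360.0 * freq_hz * extra_path / C

# Adding 10 cm of material with eps_r = 1.8 at 1 GHz shifts the phase
# by a few hundred degrees, i.e. well within interferometric sensitivity.
shift = phase_change_deg(0.1, 1e9, 1.8)
```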
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth from a monocular image, using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
The Value of the Output and Services Produced by Students While Enrolled in Job Corps.
ERIC Educational Resources Information Center
McConnell, Sheena
The value of the output and services produced by students while enrolled in the Job Corps was estimated by analyzing data from a sample of 2 projects from each of 23 Job Corps centers. The projects were subjected to in-depth analysis based on independent-estimate and relative-productivity approaches. The following were among the key findings: (1)…
We conducted a probability-based sampling of Lake Superior in 2006 and compared the zooplankton biomass estimate with laser optical plankton counter (LOPC) predictions. The net survey consisted of 52 sites stratified across three depth zones (0-30, 30-150, >150 m). The LOPC tow...
A Size-Distance Scaling Demonstration Based on the Holway-Boring Experiment
ERIC Educational Resources Information Center
Gallagher, Shawn P.; Hoefling, Crystal L.
2013-01-01
We explored size-distance scaling with a demonstration based on the classic Holway-Boring experiment. Undergraduate psychology majors estimated the sizes of two glowing paper circles under two conditions. In the first condition, the environment was dark and, with no depth cues available, participants ranked the circles according to their angular…
Mapping snow depth return levels: smooth spatial modeling versus station interpolation
NASA Astrophysics Data System (ADS)
Blanchet, J.; Lehning, M.
2010-12-01
For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
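Once a GEV is fitted, pointwise or smoothly across space, return levels follow in closed form from its parameters; below is a sketch with hypothetical parameter values, not values fitted to the Swiss snow data:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution of annual
    maxima: the level exceeded with probability 1/T in any given year,
    z_T = mu + (sigma/xi) * ((-log(1 - 1/T))^(-xi) - 1)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:  # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)

# Hypothetical station: location 150 cm, scale 40 cm, shape 0.1.
# The 50-year return level is roughly 3.4 m of snow.
z50 = gev_return_level(150.0, 40.0, 0.1, 50)
```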
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables direct, live structural analysis of the depicted scene. To this end, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization, parallelized for general purpose computation on a GPU (GPGPU) and reaching sufficient performance to keep up with the key-frames of the input sequences. One important aspect in reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas close to the camera and lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus lower runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
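A common way to obtain the same dense-near/sparse-far behaviour is to sample the sweep planes uniformly in inverse depth, which yields approximately constant disparity steps in the image; this sketch illustrates that general idea, not the paper's specific cross-ratio construction:

```python
def inverse_depth_planes(d_min, d_max, n):
    """Sample n sweep-plane depths uniformly in inverse depth (1/d), so
    consecutive planes produce roughly constant pixel-disparity steps:
    dense near the camera, sparse far away. Returns ascending depths."""
    step = (1.0 / d_min - 1.0 / d_max) / (n - 1)
    inv = [1.0 / d_max + i * step for i in range(n)]
    return [1.0 / v for v in reversed(inv)]

# Five planes between 2 m and 50 m: gaps grow with distance.
planes = inverse_depth_planes(2.0, 50.0, 5)
```

An equidistant sweep with the same near-field resolution would instead need on the order of (d_max - d_min)/Δd planes, which is where the computational saving comes from.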
Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José
2015-06-04
In energy crops for biomass production, a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor, and the best viewing angle for it, to estimate plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured for each individual plant. The depth image models agreed well with the 45°, 90° and -45° measurements in one-year-old poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between the different viewing angles revealed that top views gave poorer results because the top leaves occluded the rest of the tree; the other views led to good results. Conversely, small poplars showed better correlations with the actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters.
The results of this study indicate that the Kinect is a promising tool for rapid canopy characterization, i.e., for estimating crop biomass production, with several important advantages: low cost, low power needs and a high frame rate when dynamic measurements are required.
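The validation step above boils down to correlating the Kinect-derived canopy area with the destructively sampled dry biomass. A minimal sketch, with hypothetical numbers standing in for the measurements:

```python
import numpy as np

# Hypothetical paired measurements: Kinect-derived plant area (m^2)
# and destructively sampled dry biomass (g) for one-year-old poplars.
kinect_area = np.array([0.11, 0.18, 0.25, 0.31, 0.42, 0.55])
dry_biomass = np.array([14.0, 22.0, 30.0, 41.0, 55.0, 68.0])

# Pearson correlation coefficient, as used to validate the sensor
# readings against the destructive ground truth.
r = np.corrcoef(kinect_area, dry_biomass)[0, 1]
print(f"r = {r:.2f}")
```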
Olson, Scott A.; Song, Donald L.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.8 ft. Abutment scour ranged from 6.6 to 14.9 ft. with the worst-case scenario occurring at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, scour protection measures, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein, based on the consideration of additional contributing factors and experienced engineering judgement.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.
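The alternation between registration (which stands in for depth estimation) and super-resolution can be sketched in one dimension. This toy, assuming integer shifts, nearest-neighbor upsampling, and simple averaging as the fusion step, only illustrates the loop structure, not the authors' reconstruction:

```python
import numpy as np

def estimate_shift(ref, img):
    # Integer-pixel registration by maximizing cross-correlation.
    corr = np.correlate(img - img.mean(), ref - ref.mean(), mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def refine(lr_images, iters=3, factor=2):
    # Start from a nearest-neighbor upsampling of the first view.
    hr = np.repeat(lr_images[0], factor).astype(float)
    for _ in range(iters):
        acc = np.zeros_like(hr)
        for lr in lr_images:
            up = np.repeat(lr, factor).astype(float)
            # "Depth" step: align each view to the current HR estimate.
            s = estimate_shift(hr, up)
            acc += np.roll(up, -s)
        # Super-resolution step: fuse the aligned views.
        hr = acc / len(lr_images)
    return hr

# Two decimated, mutually offset views of the same signal.
base = np.sin(np.linspace(0, 4 * np.pi, 64))
lrs = [base[0::2], base[1::2]]
hr = refine(lrs, iters=2, factor=2)
print(hr.shape)
```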
Influence of aerosol estimation on coastal water products retrieved from HICO images
NASA Astrophysics Data System (ADS)
Patterson, Karen W.; Lamela, Gia
2011-06-01
The Hyperspectral Imager for the Coastal Ocean (HICO) is a hyperspectral sensor which was launched to the International Space Station in September 2009. The Naval Research Laboratory (NRL) has been developing the Coastal Water Signatures Toolkit (CWST) to estimate water depth, bottom type and water column constituents such as chlorophyll, suspended sediments and chromophoric dissolved organic matter from hyperspectral imagery. The CWST uses a look-up table approach, comparing remote sensing reflectance spectra observed in an image to a database of modeled spectra for pre-determined water column constituents, depth and bottom type. In order to successfully use this approach, the remote sensing reflectances must be accurate, which implies accurately correcting for the atmospheric contribution to the HICO top-of-the-atmosphere radiances. One tool the NRL is using to atmospherically correct HICO imagery is Correction of Coastal Ocean Atmospheres (COCOA), which is based on Tafkaa 6S. One of the user input parameters to COCOA is aerosol optical depth or aerosol visibility, which can vary rapidly over short distances in coastal waters. Changes to the aerosol thickness result in changes to the magnitude of the remote sensing reflectances. As such, the CWST retrievals for water constituents, depth and bottom type can be expected to vary in like fashion. This work is an illustration of the variability in CWST retrievals due to inaccurate aerosol thickness estimation during atmospheric correction of HICO images.
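The CWST look-up-table step described above amounts to a nearest-spectrum search. A minimal sketch with a hypothetical, tiny database of modeled reflectance spectra:

```python
import numpy as np

# Hypothetical database of modeled remote-sensing reflectance spectra,
# one row per (depth, bottom type, constituents) combination.
lut_spectra = np.array([
    [0.010, 0.020, 0.015, 0.008],   # entry 0: e.g. shallow, sand bottom
    [0.004, 0.012, 0.018, 0.011],   # entry 1: e.g. deeper, high chlorophyll
    [0.007, 0.016, 0.016, 0.009],   # entry 2
])

def match_spectrum(observed):
    # Look-up-table retrieval: pick the modeled spectrum closest to the
    # observed one in a least-squares sense.
    errors = np.sum((lut_spectra - observed) ** 2, axis=1)
    return int(np.argmin(errors))

obs = np.array([0.006, 0.013, 0.017, 0.010])
best = match_spectrum(obs)
print(best)
```

An inaccurate atmospheric correction shifts `obs` up or down and can flip which database entry wins, which is exactly the sensitivity the study illustrates.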
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitude or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. A set of calibration data consisting of machined pits of various rounded and elongated shapes and depths was acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions by relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically-influenced ID pits, it was necessary to modify the elongated-calibration-standard-based algorithms by relying on an algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
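The multivariate regression idea can be sketched as an ordinary least-squares fit over second-order terms of frequency, amplitude, and phase angle. The synthetic calibration data below are illustrative, not the EPRI data set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set standing in for the machined-pit data:
# columns are normalized frequency, amplitude, and phase angle.
X = rng.uniform(0.0, 1.0, size=(40, 3))
true_depth = 0.3 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2]

def design_matrix(X):
    # Second-order multivariate design: constant, linear, square,
    # and cross terms of the three eddy-current features.
    f, a, p = X.T
    return np.column_stack([np.ones(len(X)), f, a, p,
                            f * f, a * a, p * p, f * a, f * p, a * p])

coef, *_ = np.linalg.lstsq(design_matrix(X), true_depth, rcond=None)
pred = design_matrix(X) @ coef
print(f"max fit error: {np.abs(pred - true_depth).max():.2e}")
```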
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of the following steps. First, the strict projection relationship among 3D space, depth data and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters obtained from ICP, the 3D scene from the RGB images can be registered well to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
Outgassed water on Mars - Constraints from melt inclusions in SNC meteorites
NASA Technical Reports Server (NTRS)
Mcsween, Harry Y., Jr.; Harvey, Ralph P.
1993-01-01
The SNC (shergottite-nakhlite-chassignite) meteorites, thought to be igneous rocks from Mars, contain melt inclusions trapped at depth in early-formed crystals. Determination of the pre-eruptive water contents of SNC parental magmas from calculations of the solidification histories of these amphibole-bearing inclusions indicates that Martian magmas commonly contained 1.4 percent water by weight. When combined with an estimate of the volume of igneous materials on Mars, this information suggests that the total amount of water outgassed since 3.9 billion years ago corresponds to global depths on the order of 200 meters. This value is significantly higher than previous geochemical estimates but lower than estimates based on erosion by floods. These results imply a wetter Mars interior than has been previously thought and support suggestions of significant outgassing before formation of a stable crust or heterogeneous accretion of a veneer of cometary matter.
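The global-depth figure follows from a simple mass balance. A back-of-envelope sketch in which the cumulative igneous volume and densities are assumed illustrative values, not the paper's:

```python
# Back-of-envelope version of the outgassing estimate. The magma volume
# and densities below are illustrative assumptions, not the paper's values;
# only the 1.4 wt% water content comes from the melt-inclusion result.
MARS_SURFACE_AREA = 1.44e14      # m^2
magma_volume = 6.5e17            # m^3 (assumed cumulative igneous volume)
magma_density = 3000.0           # kg/m^3
water_fraction = 0.014           # 1.4 wt% water, from the melt inclusions
water_density = 1000.0           # kg/m^3

water_mass = magma_volume * magma_density * water_fraction
global_depth = water_mass / water_density / MARS_SURFACE_AREA
print(f"global equivalent depth: {global_depth:.0f} m")
```

With these assumed inputs the equivalent layer comes out near the paper's order-of-200-m figure; the result scales linearly with the assumed igneous volume.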
From Magma Fracture to a Seismic Magma Flow Meter
NASA Astrophysics Data System (ADS)
Neuberg, J. W.
2007-12-01
Seismic swarms of low-frequency events occur during periods of enhanced volcanic activity and have been related to the flow of magma at depth. Often they precede a dome collapse on volcanoes like Soufriere Hills, Montserrat, or Mt St Helens. This contribution is based on the conceptual model of magma rupture as a trigger mechanism. Several source mechanisms and radiation patterns at the focus of a single event are discussed. We investigate the accelerating event rate and seismic amplitudes during one swarm, as well as over a time period of several swarms. The seismic slip vector will be linked to magma flow parameters, resulting in estimates of magma flux for a variety of flow models such as plug flow, parabolic flow, or friction-controlled flow. In this way we try to relate conceptual models to quantitative estimates that could yield the magma flux at depth from seismic low-frequency signals.
Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow
NASA Astrophysics Data System (ADS)
Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min
2018-04-01
The end surface of the guide vane or head cover is one of the parts of high-head hydraulic turbines most seriously affected by sediment erosion. In order to investigate the relationship between the erosion depth of the wall surface and the characteristic parameter of erosion, an estimation method including a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward-facing step. The erosion characteristic parameter c_s·w_s³ is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under the actual working conditions, based on which the erosion depths of the guide vane and head cover end surfaces are estimated with a modification of the erosion coefficient K. The estimation results agree well with the actual situation. It is shown that the estimation method is reasonable for erosion prediction of the guide vane and can provide a significant reference for determining the optimal maintenance cycle for hydraulic turbines in the future.
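The erosion-depth estimate can be sketched under the common assumption that the erosion rate scales with the characteristic parameter, i.e., sediment concentration times the cube of velocity, scaled by an empirical coefficient K. All numbers below are illustrative, not from the paper:

```python
# Sketch of the erosion-depth estimate, assuming a rate law of the form
#   dE/dt = K * c_s * w**3
# where c_s is the sediment concentration, w the near-wall clearance-flow
# velocity, and K an empirical erosion coefficient calibrated against
# field observations. All values below are illustrative assumptions.
K = 1.0e-15       # empirical erosion coefficient (assumed units)
c_s = 5.0         # sediment concentration, kg/m^3
w = 30.0          # characteristic clearance-flow velocity, m/s
hours = 4000.0    # operating time between overhauls

erosion_depth_mm = K * c_s * w ** 3 * hours * 3600.0 * 1000.0
print(f"estimated erosion depth: {erosion_depth_mm:.1f} mm")
```

The cubic velocity dependence is why the small high-speed clearance flow at the guide vane ends dominates the wear.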
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui
2016-01-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
NASA Astrophysics Data System (ADS)
Oyaga Landa, Francisco Javier; Deán-Ben, Xosé Luís.; Montero de Espinosa, Francisco; Razansky, Daniel
2017-03-01
Lack of haptic feedback during laser surgery hampers control of the incision depth, leading to a high risk of undesired tissue damage. Here we present a new feedback sensing method that accomplishes non-contact real-time monitoring of laser ablation procedures by detecting shock waves emanating from the ablation spot with air-coupled transducers. Experiments in soft and hard tissue samples attained high reproducibility in real-time depth estimation of the laser-induced cuts. The advantages derived from the non-contact nature of the suggested monitoring approach are expected to greatly promote the general applicability of laser-based surgeries.
Updating default depths in the ISC bulletin
NASA Astrophysics Data System (ADS)
Bolton, Maiclaire K.; Storchak, Dmitry A.; Harris, James
2006-09-01
The International Seismological Centre (ISC) publishes the definitive global bulletin of earthquake locations. In the ISC bulletin, we aim to obtain a free depth, but often this is not possible. In that case, the first option is to obtain a depth derived from depth phases. If depth phases are not available, we then use the reported depth from a reputable local agency. Finally, as a last resort, we set a default depth. In the past, common depths of 10, 33, or multiples of 50 km have been assigned. Assigning a more meaningful default depth, specific to a seismic region, will increase the consistency of earthquake locations within the ISC bulletin and allow the ISC to publish better positions and magnitude estimates. It will also improve the association of reported secondary arrivals with corresponding seismic events. We aim to produce a global set of default depths, based on a typical depth for each area, from well-constrained events in the ISC bulletin or from events whose depth could be constrained using a consistent set of depth phase arrivals provided by a number of different reporters. In certain areas, we must resort to other assumptions. For these cases, we use a global crustal model (Crust2.0) to set default depths to half the thickness of the crust.
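The depth-selection procedure described above is essentially a fallback chain. A minimal sketch, where the function signature and the half-crustal-thickness table are illustrative assumptions:

```python
# Sketch of the ISC depth-selection cascade described above. The region
# table (half the Crust2.0 crustal thickness, km) is illustrative.
HALF_CRUST_BY_REGION = {"oceanic_ridge": 4.0, "continental": 17.5}

def choose_depth(free_depth=None, phase_depth=None, local_depth=None,
                 region="continental"):
    # Preference order: 1) free depth, 2) depth-phase depth,
    # 3) reputable local agency, 4) regional default.
    for depth in (free_depth, phase_depth, local_depth):
        if depth is not None:
            return depth
    return HALF_CRUST_BY_REGION[region]

print(choose_depth(phase_depth=12.0))         # depth phases available
print(choose_depth(region="oceanic_ridge"))   # fall back to regional default
```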
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
NASA Astrophysics Data System (ADS)
Mahindawansha, Amani; Kraft, Philipp; Orlowski, Natalie; Racela, Healthcliff S. U.; Breuer, Lutz
2017-04-01
Rice is one of the most water-consuming crops in the world. Understanding water source utilization of rice-based cropping systems will help to improve water use efficiency (WUE) in paddy management. The objectives of our study were to (1) determine the contributions of various water sources to plant growth in diversified rice-based production systems (wet rice, aerobic rice), (2) investigate water uptake depths at different maturity periods during wet and dry conditions, and (3) calculate the WUE of the cropping systems. Our field experiment is based on changes of stable water isotope concentrations in the soil-plant-atmosphere continuum due to transpiration and evaporation. Soil samples were collected together with root samples from nine different depths during the vegetative, reproductive, and mature periods of plant growth, together with stem samples. Soil and plant samples were extracted by cryogenic vacuum extraction. Groundwater, surface water, rain, and irrigation water were sampled weekly. All water samples were analyzed for hydrogen and oxygen isotope ratios (δ2H and δ18O) via a laser spectroscope (Los Gatos DLT100). The direct inference approach, which is based on comparing isotopic compositions between plant stem water and soil water, was used to determine the water sources taken up by the plant. Multiple-source mass balance assessment can provide the estimated range of potential contributions of water from each soil depth to the root water uptake of a crop. These estimates were used to determine the proportion of water from upper and deep soil horizons for rice in different maturity periods during wet and dry seasons. Shallow soil water shows stronger evaporation than deeper soil water, with the strongest evaporation effect at 5 cm depth (the drying front). Water uptake takes place mostly from surface water in the vegetative period and from 5-10 cm depth in the reproductive period, since roots have grown wider and deeper by the reproductive stage. 
These results will be helpful for understanding WUE and identifying the most efficient water management system, as well as the influence of groundwater and surface water during both seasons in rice-based cropping ecosystems, by means of stable water isotopes.
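The direct inference idea reduces, in the simplest two-end-member case, to a δ18O mass balance between shallow and deep soil water. A sketch with illustrative isotope values:

```python
# Two-end-member mixing sketch of the direct inference approach: the
# fraction of stem water drawn from the shallow source follows from a
# d18O mass balance. All values are illustrative, not measured data.
delta_shallow = -3.0   # d18O of shallow soil water (permil), evaporatively enriched
delta_deep = -8.0      # d18O of deep soil water (permil)
delta_stem = -5.5      # d18O of plant stem water (permil)

f_shallow = (delta_stem - delta_deep) / (delta_shallow - delta_deep)
print(f"shallow-source fraction: {f_shallow:.2f}")
```

The multiple-source mass balance mentioned in the abstract generalizes this to many depths, where the fractions are no longer unique and only a feasible range can be reported.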
Ultrasonography in Acupuncture-Uses in Education and Research.
Leow, Mabel Qi He; Cui, Shu Li; Mohamed Shah, Mohammad Taufik Bin; Cao, Taige; Tay, Shian Chao; Tay, Peter Kay Chai; Ooi, Chin Chin
2017-06-01
This study aims to explore the potential use of ultrasound in locating the second posterior sacral foramen acupuncture point, quantifying the depth of insertion and describing surrounding anatomical structures. We performed acupuncture needle insertion on a study team member. There were four steps in our experiment. First, the acupuncturist located the acupuncture point by palpation. Second, we used an ultrasound machine to visualize the structures surrounding the location of the acupuncture point and measure the depth required for needle insertion. Third, the acupuncturist inserted the acupuncture needle into the acupuncture point at an angle of 30°. Fourth, we performed another ultrasound scan to ensure that the needle was in the desired location. Results suggested that ultrasound could be used to locate the acupuncture point and estimate the depth of needle insertion. The needle was inserted to a depth of 4.0 cm to reach the surface of the sacral foramen. Based on the Pythagorean theorem, taking a needle insertion angle of 30° and a needle insertion depth of 4.0 cm, the estimated perpendicular depth is 1.8 cm. An ultrasound scan corroborated the depth of 1.85 cm. The use of an ultrasound-guided technique for needle insertion in acupuncture practice could help standardize the treatment. Clinicians and students would be able to visualize and measure the depth of the sacral foramen acupuncture point, to guide the depth of needle insertion. This methodological guide could also be used to create a standard treatment protocol for research. A similar mathematical guide could also be created for other acupuncture points in the future. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Akbar, Somaieh; Fathianpour, Nader
2016-12-01
The Curie point depth is of great importance in characterizing geothermal resources. In this study, the Curie iso-depth map was produced using the well-known method of dividing the aeromagnetic dataset into overlapping blocks and analyzing the power spectral density of each block separately. Determining the optimum block dimension is vital for improving the resolution and accuracy of Curie point depth estimation. To investigate the relation between the optimal block size and the power spectral density, forward magnetic modeling was implemented on an artificial prismatic body with specified characteristics. The top, centroid, and bottom depths of the body were estimated by the spectral analysis method for different block dimensions. The result showed that the optimal block size can be taken as the smallest block size whose corresponding power spectrum exhibits an absolute maximum at small wavenumbers. The Curie depth map of the Sabalan geothermal field and its surrounding areas in northwestern Iran was produced using a grid of 37 blocks with dimensions ranging from 10 × 10 to 50 × 50 km², with at least 50% overlap between adjacent blocks. The Curie point depth was estimated in the range of 5 to 21 km. The promising areas, with Curie point depths of less than 8.5 km, are located around Mount Sabalan, encompassing more than 90% of the known geothermal resources in the study area. Moreover, the Curie point depth estimated by the improved spectral analysis is in good agreement with the depth calculated from thermal gradient data measured in one of the exploratory wells in the region.
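The block-wise spectral analysis rests on the fact that, for an ensemble of sources at a common top depth, the radially averaged log power spectrum decays roughly linearly with wavenumber (Spector-Grant style), so the depth falls out of a straight-line fit at small wavenumbers. A noise-free sketch with a synthesized spectrum:

```python
import numpy as np

# Idealized depth-from-spectrum estimation: for sources at top depth Z_t,
# ln P(k) ~ C - 2*k*Z_t with k in rad/km, so Z_t = -slope/2 from a
# straight-line fit over the low-wavenumber band.
z_top = 8.5                            # km, depth used to synthesize the spectrum
k = np.linspace(0.05, 0.5, 40)         # rad/km
ln_power = 3.0 - 2.0 * k * z_top       # idealized, noise-free log spectrum

slope, _ = np.polyfit(k, ln_power, 1)
z_est = -slope / 2.0
print(f"estimated top depth: {z_est:.1f} km")
```

In practice the usable wavenumber band depends on the block size, which is why the choice of block dimension controls both the resolution and the deepest recoverable depth.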
Extreme precipitation depths for Texas, excluding the Trans-Pecos region
Lanning-Rush, Jennifer; Asquith, William H.; Slade, Raymond M.
1998-01-01
Storm durations of 1, 2, 3, 4, 5, and 6 days were investigated for this report. The extreme precipitation depth for a particular area is estimated from an “extreme precipitation curve” (an upper limit or envelope curve developed from graphs of extreme precipitation depths for each climatic region). The extreme precipitation curves were determined using precipitation depth-duration information from a subset (24 “extreme” storms) of 213 “notable” storms documented throughout Texas. The extreme precipitation depth represents a limiting depth, which can provide useful comparative information for more quantitative analyses.
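An envelope curve is, at its core, the upper limit over the documented storms at each duration. A sketch with illustrative storm depths, not the report's data:

```python
import numpy as np

# Sketch of an envelope (upper-limit) curve: for each storm duration,
# the extreme-precipitation curve passes through the maximum observed
# depth. The storm depths below are illustrative, not the report's data.
durations = np.array([1, 2, 3, 4, 5, 6])               # days
storm_depths = np.array([                              # inches, one row per storm
    [10.0, 14.0, 16.0, 17.0, 17.5, 18.0],
    [12.0, 18.0, 22.0, 24.0, 25.0, 25.5],
    [ 8.0, 11.0, 13.0, 14.0, 14.5, 15.0],
])

envelope = storm_depths.max(axis=0)    # upper limit over all storms
print(dict(zip(durations.tolist(), envelope.tolist())))
```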
NASA Astrophysics Data System (ADS)
Sanford, Ward E.
2017-03-01
The trend of decreasing permeability with depth was estimated in the fractured-rock terrain of the upper Potomac River basin in the eastern USA using model calibration on 200 water-level observations in wells and 12 base-flow observations in subwatersheds. Results indicate that permeability at the 1-10 km scale (for groundwater flowpaths) decreases by several orders of magnitude within the top 100 m of land surface. This depth range represents the transition from the weathered, fractured regolith into unweathered bedrock. This rate of decline is substantially greater than has been observed by previous investigators that have plotted in situ wellbore measurements versus depth. The difference is that regional water levels give information on kilometer-scale connectivity of the regolith and adjacent fracture networks, whereas in situ measurements give information on near-hole fractures and fracture networks. The approach taken was to calibrate model layer-to-layer ratios of hydraulic conductivity (LLKs) for each major rock type. Most rock types gave optimal LLK values of 40-60, where each layer was twice as thick as the one overlying it. Previous estimates of permeability with depth from deeper data showed less of a decline at <300 m than the regional modeling results. There was less certainty in the modeling results deeper than 200 m and for certain rock types where fewer water-level observations were available. The results have implications for improved understanding of watershed-scale groundwater flow and transport, such as for the timing of the migration of pollutants from the water table to streams.
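The layer-to-layer conductivity scheme can be sketched directly: each layer is twice as thick as the one above and LLK times less conductive, which reproduces a decline of several orders of magnitude within the top ~100 m. The surface conductivity and top-layer thickness below are assumed values:

```python
import numpy as np

# Sketch of the layer-to-layer hydraulic-conductivity (LLK) scheme.
# Surface K and top-layer thickness are illustrative assumptions; the
# LLK value is taken from the reported optimal range of 40-60.
k_surface = 1.0e-5      # m/s, conductivity of the weathered regolith
llk = 50.0              # layer-to-layer conductivity ratio
thickness0 = 10.0       # m, thickness of the top layer

# Layer tops: each layer is twice as thick as the one overlying it.
tops = np.concatenate([[0.0], np.cumsum(thickness0 * 2.0 ** np.arange(5))])
k_layers = k_surface / llk ** np.arange(6)
for top, k in zip(tops, k_layers):
    print(f"layer top {top:5.0f} m: K = {k:.1e} m/s")
```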
Engdahl, E. Robert; van der Hilst, R. D.; Buland, Raymond P.
1998-01-01
We relocate nearly 100,000 events that occurred during the period 1964 to 1995 and are well-constrained teleseismically by arrival-time data reported to the International Seismological Centre (ISC) and to the U.S. Geological Survey's National Earthquake Information Center (NEIC). Hypocenter determination is significantly improved by using, in addition to regional and teleseismic P and S phases, the arrival times of PKiKP, PKPdf, and the teleseismic depth phases pP, pwP, and sP in the relocation procedure. A global probability model developed for later-arriving phases is used to independently identify the depth phases. The relocations are compared to hypocenters reported in the ISC and NEIC catalogs and by other sources. Differences in our epicenters with respect to ISC and NEIC estimates are generally small and regionally systematic due to the combined effects of the observing station network and plate geometry regionally, differences in upper mantle travel times between the reference earth models used, and the use of later-arriving phases. Focal depths are improved substantially over most other independent estimates, demonstrating (for example) how regional structures such as downgoing slabs can severely bias depth estimation when only regional and teleseismic P arrivals are used to determine the hypocenter. The new database, which is complete to about Mw 5.2 and includes all events for which moment-tensor solutions are available, has immediate application to high-resolution definition of Wadati-Benioff Zones (WBZs) worldwide, regional and global tomographic imaging, and other studies of earth structure.
NASA Astrophysics Data System (ADS)
Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae
2017-04-01
In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched, carrying the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8, and 12.0 μm. The visible measurement can also be utilized for the retrieval of aerosol optical properties (AOPs). Since GEO satellite measurements have the advantage of continuous monitoring of AOPs, we can analyze the spatiotemporal variation of aerosol using MI observations over NEA. We therefore developed an algorithm to retrieve aerosol optical depth (AOD) from the MI visible observations, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of MI YAER AOD by comparing the values with the long-term products of AERONET sun-photometers. The results showed that the MI AODs were significantly overestimated relative to the AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively lower than the surface contribution. Therefore, the AOD error in low-AOD cases over bright surfaces is a fundamental limitation of the algorithm. Meanwhile, the assumption of a background aerosol optical depth (BAOD) could also introduce retrieval uncertainty. To estimate the surface reflectance while accounting for polluted air conditions over the NEA, we estimated the BAOD pixel by pixel from the MODIS dark target (DT) aerosol products. Satellite-based AOD retrieval, however, depends largely on the accuracy of the surface reflectance estimation, especially in low-AOD cases, and thus the BAOD could include the uncertainty in the surface reflectance estimation of the satellite-based retrieval. 
Therefore, we re-estimated the BAOD using ground-based sun-photometer measurements and investigated the effects of the BAOD assumption. The satellite-based BAOD was significantly higher than the ground-based value over urban areas, and thus resulted in an underestimation of surface reflectance and an overestimation of AOD. The error analysis of the MI AOD also clearly showed sensitivity to cloud contamination. Therefore, improvements to the cloud masking process in the developed single-channel MI algorithm, as well as modification of the surface reflectance estimation, will be required in future work.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed, using a pseudo-entropy code to obtain accurate total rate estimates. 
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
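The multiplication-free structure that such schemes exploit can be illustrated with a plain fast Walsh-Hadamard transform. This is a generic sketch of the butterfly idea using only additions and subtractions, not the paper's newly designed 4×4 and 8×8 matrices:

```python
def wht(block):
    """Fast Walsh-Hadamard transform of a power-of-two-length sequence.

    Uses only add/subtract butterflies (no multiplications), which is
    why WHT-based cost estimation is attractive for low-power hardware.
    """
    a = list(block)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly: sum and difference
        h *= 2
    return a
```

For a length-N block this needs N·log2(N) additions, versus the multiply-heavy DCT; the paper's contribution is mapping the integer DCT onto this structure so texture rate and distortion can be estimated from the WHT coefficients directly.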
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Gu, H.
2014-12-01
Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. Moreover, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with non-stationarity, on design rainfall depth in the Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments, and also estimated at the at-site scale. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth at both regional and at-site scales.
The non-stationary analysis also shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
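The percentile-bootstrap idea behind such uncertainty estimates can be sketched in a few lines. This toy version resamples a single at-site series and uses an empirical quantile; the study instead resamples spatially and fits GEV/PE3 distributions via L-moments, so treat this only as an illustration of where the confidence interval comes from:

```python
import random

def bootstrap_quantile_ci(sample, q=0.99, n_boot=2000, alpha=0.10, seed=1):
    """Percentile-bootstrap (1 - alpha) confidence interval for a high
    quantile, e.g. q = 0.99 for the 100-year annual-maximum event."""
    rng = random.Random(seed)
    n = len(sample)

    def quantile(xs, p):
        xs = sorted(xs)
        return xs[min(int(p * (len(xs) - 1)), len(xs) - 1)]

    stats = []
    for _ in range(n_boot):
        # resample with replacement and recompute the design quantile
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        stats.append(quantile(resample, q))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The half-width of the resulting interval relative to the point estimate is the kind of percentage uncertainty (e.g. 15.07%) reported in the abstract.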
Regional ground-water evapotranspiration and ground-water budgets, Great Basin, Nevada
Nichols, William D.
2000-01-01
PART A: Ground-water evapotranspiration data from five sites in Nevada and seven sites in Owens Valley, California, were used to develop equations for estimating ground-water evapotranspiration as a function of phreatophyte plant cover or as a function of the depth to ground water. Equations are given for estimating mean daily seasonal and annual ground-water evapotranspiration. The equations that estimate ground-water evapotranspiration as a function of plant cover can be used to estimate regional-scale ground-water evapotranspiration using vegetation indices derived from satellite data for areas where the depth to ground water is poorly known. Equations that estimate ground-water evapotranspiration as a function of the depth to ground water can be used where the depth to ground water is known, but for which information on plant cover is lacking. PART B: Previous ground-water studies estimated groundwater evapotranspiration by phreatophytes and bare soil in Nevada on the basis of results of field studies published in 1912 and 1932. More recent studies of evapotranspiration by rangeland phreatophytes, using micrometeorological methods as discussed in Chapter A of this report, provide new data on which to base estimates of ground-water evapotranspiration. An approach correlating ground-water evapotranspiration with plant cover is used in conjunction with a modified soil-adjusted vegetation index derived from Landsat data to develop a method for estimating the magnitude and distribution of ground-water evapotranspiration at a regional scale. Large areas of phreatophytes near Duckwater and Lockes in Railroad Valley are believed to subsist on ground water discharged from nearby regional springs. Ground-water evapotranspiration by the Duckwater phreatophytes of about 11,500 acre-feet estimated by the method described in this report compares well with measured discharge of about 13,500 acre-feet from the springs near Duckwater. 
Measured discharge from springs near Lockes was about 2,400 acre-feet; estimated ground-water evapotranspiration using the proposed method was about 2,450 acre-feet. PART C: Previous estimates of ground-water budgets in Nevada were based on methods and data that now are more than 60 years old. Newer methods, data, and technologies were used in the present study to estimate ground-water recharge from precipitation and ground-water discharge by evapotranspiration by phreatophytes for 16 contiguous valleys in eastern Nevada. Annual ground-water recharge to these valleys was estimated to be about 855,000 acre-feet and annual ground-water evapotranspiration was estimated to be about 790,000 acre-feet; both are a little more than two times greater than previous estimates. The imbalance of recharge over evapotranspiration represents recharge that either (1) leaves the area as interbasin flow or (2) is derived from precipitation that falls on terrain within the topographic boundary of the study area but contributes to discharge from hydrologic systems that lie outside these topographic limits. A vegetation index derived from Landsat-satellite data was used to estimate phreatophyte plant cover on the floors of the 16 valleys. The estimated phreatophyte plant cover then was used to estimate annual ground-water evapotranspiration. Detailed estimates of summer, winter, and annual ground-water evapotranspiration for areas with different ranges of phreatophyte plant cover were prepared for each valley. The estimated ground-water discharge from 15 valleys, combined with independent estimates of interbasin ground-water flow into or from a valley, was used to calculate the percentage of recharge derived from precipitation within the topographic boundary of each valley. These percentages then were used to estimate ground-water recharge from precipitation within each valley.
Ground-water budgets for all 16 valleys were based on the estimated recharge from precipitation and estimated evapotranspiration. Any imba
Pausch, Roman C.; Grote, Edmund E.; Dawson, Todd E.
2000-03-01
Accurate estimates of sapwood properties (including radial depth of functional xylem and wood water content) are critical when using the heat pulse velocity (HPV) technique to estimate tree water use. Errors in estimating the volumetric water content (V(h)) of the sapwood, especially in tree species with a large proportion of sapwood, can cause significant errors in the calculations of sap velocity and sap flow through tree boles. Scaling to the whole-stand level greatly inflates these errors. We determined the effects of season, tree size and radial wood depth on V(h) of wood cores removed from Acer saccharum Marsh. trees throughout 3 years in upstate New York. We also determined the effects of variation in V(h) on sap velocity and sap flow calculations based on HPV data collected from sap flow gauges inserted at four depths. In addition, we compared two modifications of Hatton's weighted average technique, the zero-step and zero-average methods, for determining sap velocity and sap flow at depths beyond those penetrated by the sap flow gauges. Parameter V(h) varied significantly with time of year (DOY), tree size (S), and radial wood depth (RD), and there were significant DOY x S and DOY x RD interactions. Use of a mean whole-tree V(h) value resulted in differences ranging from -6 to +47% for both sap velocity and sap flow for individual sapwood annuli compared with use of the V(h) value determined at the specific depth where a probe was placed. Whole-tree sap flow was 7% higher when calculated on the basis of the individual V(h) value compared with the mean whole-tree V(h) value. Calculated total sap flow for a tree with a DBH of 48.8 cm was 13 and 19% less using the zero-step and the zero-average velocity techniques, respectively, than the value obtained with Hatton's weighted average technique. Smaller differences among the three methods were observed for a tree with a DBH of 24.4 cm.
We conclude that, for Acer saccharum: (1) mean V(h) changes significantly during the year and can range from nearly 50% during winter and early spring to 20% during the growing season; (2) large trees have a significantly greater V(h) than small trees; (3) overall, V(h) decreases and then increases significantly with radial wood depth, suggesting that radial water movement and storage are highly dynamic; and (4) V(h) estimates can vary greatly and influence subsequent water use calculations depending on whether an average or an individual V(h) value for a wood core is used. For large diameter trees in which sapwood comprises a large fraction of total stem cross-sectional area (where sap flow gauges cannot be inserted across the entire cross-sectional area), the zero-average modification of Hatton's weighted average method reduces the potential for large errors in whole-tree and landscape water balance estimates based on the HPV method.
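Scaling point measurements of sap velocity to whole-tree sap flow amounts to weighting each probe's velocity by the cross-sectional area of the sapwood annulus it represents. The sketch below uses a simplifying assumption (annulus boundaries midway between adjacent probe depths), not Hatton's exact weighting or the zero-step/zero-average extrapolations:

```python
import math

def sap_flow(radius_cm, probe_depths_cm, velocities_cm_h):
    """Total sap flow (cm^3/h) as the sum of each probe's sap velocity
    times the area of the sapwood annulus it represents.

    Annulus boundaries are placed midway between adjacent probe depths
    (an illustrative assumption). Depths are measured inward from bark.
    """
    bounds = [0.0]
    for a, b in zip(probe_depths_cm, probe_depths_cm[1:]):
        bounds.append((a + b) / 2)
    # extend the last annulus symmetrically past the deepest probe
    bounds.append(probe_depths_cm[-1] + (probe_depths_cm[-1] - bounds[-1]))
    total = 0.0
    for v, (d0, d1) in zip(velocities_cm_h, zip(bounds, bounds[1:])):
        r_outer = radius_cm - d0
        r_inner = max(radius_cm - d1, 0.0)
        total += v * math.pi * (r_outer ** 2 - r_inner ** 2)
    return total
```

Because outer annuli have much larger areas, errors in V(h) (and hence velocity) near the cambium are amplified in the whole-tree total, which is the error-inflation effect the abstract describes.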
Restoration of distorted depth maps calculated from stereo sequences
NASA Technical Reports Server (NTRS)
Damour, Kevin; Kaufman, Howard
1991-01-01
A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
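At the core of any such Kalman estimator is the scalar measurement update. A minimal sketch for fusing two independent depth (or velocity) estimates with known variances, which is a drastic simplification of the full spatial-temporal filter described above:

```python
def fuse(depth_a, var_a, depth_b, var_b):
    """Kalman-like update: fuse two independent estimates by
    inverse-variance weighting.

    The fused variance is always smaller than either input variance,
    which is why iterating the update over observations refines a
    probabilistic depth map.
    """
    w = var_b / (var_a + var_b)          # weight on the more certain estimate
    depth = w * depth_a + (1 - w) * depth_b
    var = var_a * var_b / (var_a + var_b)
    return depth, var
```

With equal variances the result is the plain average; as one estimate becomes more certain (smaller variance), the fused value moves toward it.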
Estimation of optimal nasotracheal tube depth in adult patients.
Ji, Sung-Mi
2017-12-01
The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R² = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.
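The reported regression can be wrapped directly as a small helper; the coefficients are taken from the abstract and all distances are in centimeters:

```python
def nasotracheal_depth_cm(nares_tragus, tragus_mandible, mandible_sternal):
    """Estimated optimal nasotracheal tube depth (cm) from the study's
    regression: 19.856 + 0.267 x (sum of the three distances)."""
    return 19.856 + 0.267 * (nares_tragus + tragus_mandible + mandible_sternal)
```

Note that R² = 0.432, so the formula explains under half of the observed variance and is a guide rather than a substitute for clinical confirmation of tube position.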
Subsurface damage in some single crystalline optical materials.
Randi, Joseph A; Lambropoulos, John C; Jacobs, Stephen D
2005-04-20
We present a nondestructive method for estimating the depth of subsurface damage (SSD) in some single crystalline optical materials (silicon, lithium niobate, calcium fluoride, magnesium fluoride, and sapphire); the method is established by correlating surface microroughness measurements, specifically, the peak-to-valley (p-v) microroughness, to the depth of SSD found by a novel destructive method. Previous methods for directly determining the depth of SSD may be insufficient when applied to single crystals that are very soft or very hard. Our novel destructive technique uses magnetorheological finishing to polish spots onto a ground surface. We find that p-v surface microroughness, appropriately scaled, gives an upper bound to SSD. Our data suggest that SSD in the single crystalline optical materials included in our study (deterministically microground, lapped, and sawed) is always less than 1.4 times the p-v surface microroughness found by white-light interferometry. We also discuss another way of estimating SSD based on the abrasive size used.
Global distribution of plant-extractable water capacity of soil
Dunne, K.A.; Willmott, C.J.
1996-01-01
Plant-extractable water capacity of soil is the amount of water that can be extracted from the soil to fulfill evapotranspiration demands. It is often assumed to be spatially invariant in large-scale computations of the soil-water balance. Empirical evidence, however, suggests that this assumption is incorrect. In this paper, we estimate the global distribution of the plant-extractable water capacity of soil. A representative soil profile, characterized by horizon (layer) particle size data and thickness, was created for each soil unit mapped by FAO (Food and Agriculture Organization of the United Nations)/Unesco. Soil organic matter was estimated empirically from climate data. Plant rooting depths and ground coverages were obtained from a vegetation characteristic data set. At each 0.5° × 0.5° grid cell where vegetation is present, unit available water capacity (cm water per cm soil) was estimated from the sand, clay, and organic content of each profile horizon, and integrated over horizon thickness. Summation of the integrated values over the lesser of profile depth and root depth produced an estimate of the plant-extractable water capacity of soil. The global average of the estimated plant-extractable water capacities of soil is 8.6 cm (Greenland, Antarctica and bare soil areas excluded). Estimates are less than 5, 10, and 15 cm over approximately 30, 60, and 89 per cent of the area, respectively. Estimates reflect the combined effects of soil texture, soil organic content, and plant root depth or profile depth. The most influential and uncertain parameter is the depth over which the plant-extractable water capacity of soil is computed, which is usually limited by root depth. Soil texture exerts a lesser, but still substantial, influence. Organic content, except where concentrations are very high, has relatively little effect.
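The integration step described above (unit available water capacity summed over horizon thickness, truncated at the lesser of profile depth and root depth) can be sketched as follows; the data structure is an illustrative assumption:

```python
def extractable_water_cm(horizons, root_depth_cm):
    """Plant-extractable water capacity (cm of water).

    `horizons` is a list of (thickness_cm, unit_awc) tuples from the
    surface down, where unit_awc is cm water per cm soil. Integration
    stops at the lesser of total profile depth and root depth, as in
    the paper's procedure.
    """
    remaining = root_depth_cm
    total = 0.0
    for thickness, unit_awc in horizons:
        if remaining <= 0:
            break
        used = min(thickness, remaining)  # truncate the last horizon
        total += used * unit_awc
        remaining -= used
    return total
```

This makes the abstract's sensitivity claim concrete: the result scales directly with the integration depth (usually the root depth), while texture enters only through the per-horizon unit_awc values.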
Application of simple all-sky imagers for the estimation of aerosol optical depth
NASA Astrophysics Data System (ADS)
Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph
2017-06-01
Aerosol optical depth is a key atmospheric constituent for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for the nowcasting of direct normal irradiance at the plants, a new instrument is available which can be used for the determination of aerosol optical depth at different wavelengths. In this study, we use the Red, Green and Blue intensities/radiances and calculations of the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements; the mean/median differences and standard deviations are less than 0.01 and 0.03, respectively, at all wavelengths.
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated relationships among the source parameters of the underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the test locations are tiny on a regional scale; these tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects based on spectral ratios of the observed seismograms, and we estimated empirical relationships among depths of burial and yields based on theoretical source models.
Generalized scaling of seasonal thermal stratification in lakes
NASA Astrophysics Data System (ADS)
Shatwell, T.; Kirillin, G.
2016-12-01
The mixing regime is fundamental to the biogeochemistry and ecology of lakes because it determines the vertical transport of matter such as gases, nutrients, and organic material. Whereas shallow lakes are usually polymictic and regularly mix to the bottom, deep lakes tend to stratify seasonally, separating surface water from deep sediments and deep water from the atmosphere. Although empirical relationships exist to predict the mixing regime, a physically based, quantitative criterion is lacking. Here we review our recent research on thermal stratification in lakes at the transition between polymictic and stratified regimes. Using the mechanistic balance between potential and kinetic energy in terms of the Richardson number, we derive a generalized physical scaling for seasonal stratification in a closed lake basin. The scaling parameter is the critical mean basin depth that delineates polymictic and seasonally stratified lakes based on lake water transparency (Secchi depth), lake length, and an annual mean estimate for the Monin-Obukhov length. We validated the scaling on available data from 374 global lakes using logistic regression and found it to perform better than other criteria, including a conventional open basin scaling or a simple depth threshold. The scaling has potential applications in estimating large scale greenhouse gas fluxes from lakes because the required inputs, like water transparency and basin morphology, can be acquired using the latest remote sensing technologies. The generalized scaling is universal for freshwater lakes and allows the seasonal mixing regime to be estimated without numerically solving the heat transport equations.
NASA Astrophysics Data System (ADS)
Kourgialas, N. N.; Karatzas, G. P.
2014-03-01
A modeling system for the estimation of flash flood flow velocity and sediment transport is developed in this study. The system comprises three components: (a) a modeling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modeling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. Riparian vegetation's effect on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modeling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, a well-balanced selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES
Satellite data provide new opportunities to study the regional distribution of particulate matter. The aerosol optical depth (AOD), an estimate derived from the satellite-measured irradiance, can be compared against a model-derived estimate to provide an evaluation of the columnar ...
van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip
2015-04-01
In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are often not achieved. This study aims to investigate the effects of differing target depth instructions on the compression depth performance of professional and lay rescuers. 110 professional rescuers and 110 lay rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R depth on a paper sheet (given a horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional rescuers estimated the distance correctly according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay rescuers yielded correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional rescuers are able to meet AHA-R and ERC-R targets alike. In lay rescuers, AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to appropriately perform chest compressions. For teaching lay rescuers, the AHA-R with no upper limit of compression depth might be preferable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Initial Everglades Depth Estimation Network (EDEN) Digital Elevation Model Research and Development
Jones, John W.; Price, Susan D.
2007-01-01
Introduction The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). To produce historic and near-real time maps of water depths, the EDEN requires a system-wide digital elevation model (DEM) of the ground surface. Accurate Everglades wetland ground surface elevation data were non-existent before the U.S. Geological Survey (USGS) undertook the collection of highly accurate surface elevations at the regional scale. These form the foundation for EDEN DEM development. This development process is iterative as additional high accuracy elevation data (HAED) are collected, water surfacing algorithms improve, and additional ground-based ancillary data become available. Models are tested using withheld HAED and independently measured water depth data, and by using DEM data in EDEN adaptive management applications. Here the collection of HAED is briefly described before the approach to DEM development and the current EDEN DEM are detailed. Finally future research directions for continued model development, testing, and refinement are provided.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method: they need fewer reference points for calibration and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
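The curve-fitting baseline that the model-based methods are compared against is, in essence, an ordinary least-squares polynomial fit of metric object distance against virtual depth (a Taylor-series approximation). A dependency-free sketch of such a fit, with illustrative variable names, is:

```python
def polyfit(xs, ys, degree=2):
    """Least-squares polynomial fit y ~ c0 + c1*x + ... + c_d*x^d via the
    normal equations and Gaussian elimination with partial pivoting.

    Here xs would be estimated virtual depths and ys the measured metric
    distances of calibration targets (illustrative assumption).
    """
    m = degree + 1
    # Normal equations A c = b with A[i][j] = sum x^(i+j), b[i] = sum y*x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = s / A[r][r]
    return coeffs
```

A fitted polynomial is only trustworthy inside the range of the calibration points, which is exactly the limitation the abstract attributes to this approach relative to the model-based methods.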
Discontinuities in the shallow Martian crust at Lunae, Syria, and Sinai Plana
Davis, P.A.; Golombek, M.P.
1990-01-01
Detailed photoclinometric profiles across 125 erosional features and 141 grabens in the western equatorial region of Mars indicate the presence of three discontinuities within the shallow crust, at depths of 0.3-0.6 km, 1 km, and 2-3 km. The shallowest discontinuity corresponds to thickness estimates for the ridged plains unit in this region, and thus the discontinuity probably is the contact between a sequence of layered rock making up this unit and the underlying megaregolith. The 1-km discontinuity is reflected in the base levels of erosion of all the features studied, and it may correspond to the base of the proposed layer of ground ice. Model calculations show that graben-bounding faults consistently intersect at the mechanical discontinuity at about 1 km depth. This discontinuity may represent an interface between ice-laden and dry regolith, ice-laden and water-laden regolith, or pristine and cemented regolith. A correlation between valley head depth and local thickness of the faulted layer suggests that the 1-km discontinuity also controlled the depth of the heads of sapping canyons. The third discontinuity, at a depth of 2-3 km, corresponds to the proposed base of the Martian megaregolith and is probably the interface between overlying, ejected breccia and in situ, fractured basement rocks.
NASA Astrophysics Data System (ADS)
Sawazaki, K.; Saito, T.; Ueno, T.; Shiomi, K.
2015-12-01
In this study, utilizing the depth sensitivity of interferometric waveforms recorded by co-located Hi-net and KiK-net sensors, we separate the depth ranges responsible for the seismic velocity change associated with the M6.3 earthquake that occurred on November 22, 2014, in central Japan. The Hi-net station N.MKGH is located about 20 km northeast of the epicenter, where the seismometer is installed at 150 m depth. At the same site, KiK-net has two strong-motion seismometers installed at depths of 0 and 150 m. To estimate the average velocity change around the N.MKGH station, we apply the stretching technique to the auto-correlation function (ACF) of ambient noise recorded by the Hi-net sensor. To evaluate the sensitivity of the Hi-net ACF to velocity changes above and below the 150 m depth, we perform a numerical wave propagation simulation using a 2-D FDM. To obtain the velocity change above the 150 m depth, we measure the response waveform from the depth of 150 m to 0 m by computing the deconvolution function (DCF) of earthquake records obtained by the two KiK-net vertical-array sensors. The background annual velocity variation is subtracted from the detected velocity change. From the KiK-net DCF records, the velocity reduction ratio above the 150 m depth is estimated to be 4.2% and 3.1% in the periods of 1-7 days and 7 days to 4 months after the mainshock, respectively. From the Hi-net ACF records, the velocity reduction ratio is estimated to be 2.2% and 1.8% in the same time periods, respectively. This difference in the estimated velocity reduction ratios is attributed to the depth dependence of the velocity change. By using the depth sensitivity obtained from the numerical simulation, we estimate the velocity reduction ratio below the 150 m depth to be lower than 1.0% for both time periods. Thus the significant velocity reduction and recovery are observed above the 150 m depth only, which may be caused by strong ground motion of the mainshock and subsequent healing in the shallow ground.
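The stretching technique mentioned above searches for the uniform time-axis stretch of a current waveform that best matches a reference waveform; the stretch amount maps to the relative velocity change dv/v. A simplified grid-search sketch (linear interpolation, no lapse-time windowing, sign convention left to the user) is:

```python
def stretch_factor(reference, current, max_dv=0.05, steps=101):
    """Grid-search the stretch eps that best aligns `current` with
    `reference` (equally sampled waveforms of the same length).

    Returns the eps maximizing the correlation coefficient; mapping eps
    to the sign of dv/v depends on the adopted convention.
    """
    n = len(reference)

    def stretched(trace, eps):
        # sample trace at stretched times t = i * (1 + eps), linearly interpolated
        out = []
        for i in range(n):
            t = i * (1 + eps)
            k = int(t)
            if k + 1 >= n:
                out.append(trace[-1])
            else:
                frac = t - k
                out.append(trace[k] * (1 - frac) + trace[k + 1] * frac)
        return out

    def corr(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    grid = [-max_dv + 2 * max_dv * i / (steps - 1) for i in range(steps)]
    best = max((corr(reference, stretched(current, eps)), eps) for eps in grid)
    return best[1]
```

Real applications apply this within selected lapse-time windows of the ACF and track the correlation coefficient as a quality measure.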
Accuracy and robustness evaluation in stereo matching
NASA Astrophysics Data System (ADS)
Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian
2016-09-01
Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This problem is due to the fact that most of the methods proposed in the literature are usually tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation in terms of accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions with large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.
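The local matching family evaluated here builds on a simple core: for each pixel, pick the disparity minimizing a window-based cost. A minimal scanline matcher using the sum of absolute differences (SAD), with illustrative block size and disparity range, is:

```python
def disparity(left_row, right_row, block=3, max_disp=8):
    """Winner-takes-all local stereo matching on one scanline.

    For each pixel in the left row, tests candidate disparities d and
    keeps the one with the smallest sum of absolute differences (SAD)
    over a small matching window. Border pixels are left at 0.
    """
    half = block // 2
    n = len(left_row)
    disp = [0] * n
    for x in range(half, n - half):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x - half) + 1):
            cost = sum(
                abs(left_row[x + k] - right_row[x - d + k])
                for k in range(-half, half + 1)
            )
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

The cross-based and cost-volume-filtering methods in the paper improve on this by aggregating the raw cost over adaptive or edge-preserving neighborhoods before the winner-takes-all step.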
NASA Astrophysics Data System (ADS)
Nammi, Srinagalakshmi; Vasa, Nilesh J.; Gurusamy, Balaganesan; Mathur, Anil C.
2017-09-01
A plasma shielding phenomenon and its influence on micromachining are studied experimentally and theoretically for laser wavelengths of 355 nm, 532 nm and 1064 nm. A time-resolved pump-probe technique is proposed and demonstrated by splitting a single nanosecond Nd3+:YAG laser into an ablation laser (pump laser) and a probe laser to understand the influence of plasma shielding on laser ablation of copper (Cu) clad on polyimide thin films. The proposed nanosecond pump-probe technique allows simultaneous measurement of the absorption characteristics of the plasma produced during Cu film ablation by the pump laser. Experimental measurements of the probe intensity distinctly show that the absorption by the ablated plume increases with increasing pump intensity, as a result of plasma shielding. Theoretical estimation of the intensity of the transmitted pump beam based on thermo-temporal modeling is in qualitative agreement with the pump-probe based experimental measurements. The theoretical estimate of the depth attained for a single pulse with a high pump intensity on a Cu thin film is limited by the plasma shielding of the incident laser beam, similar to that observed experimentally. Further, the depth of the micro-channels produced shows a similar trend for all three wavelengths; however, the channel depth achieved is smaller at 1064 nm.
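The saturation of ablation depth with pump intensity can be illustrated with a toy Beer-Lambert model in which the plume's absorbance grows with pump intensity. This is an illustrative assumption (including the linear absorbance law and the k_shield constant), not the paper's thermo-temporal model:

```python
import math

def surface_intensity(pump_intensity, k_shield=0.2):
    """Toy plasma-shielding model: the intensity reaching the target is
    the pump intensity attenuated by Beer-Lambert absorption in the
    plume, with absorbance assumed proportional to pump intensity.

    k_shield lumps absorption coefficient and plume length into one
    illustrative constant.
    """
    absorbance = k_shield * pump_intensity
    return pump_intensity * math.exp(-absorbance)
```

Under this assumption the surface intensity peaks at a pump intensity of 1/k_shield and then declines, mirroring why the single-pulse channel depth stops growing at high pump intensities.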
Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils
NASA Technical Reports Server (NTRS)
Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David
2016-01-01
Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0.3-meters depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from less than 0.01 to 0.24 cubic meters per cubic meter. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p is less than 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the less than 0.01 cubic meters per cubic meter RF site (pit, 5.80 kilograms C per square meter; hammer, 4.74 kilograms C per square meter) and at the 0.14 cubic meters per cubic meter RF site (pit, 8.81 kilograms C per square meter; hammer, 6.71 kilograms C per square meter). The hammer corer also underestimated ρb at all sites as did the hydraulic push corer at the 0.21 cubic meters per cubic meter RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.
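The two accounting bases being compared can be sketched as follows; this is a minimal illustration of fixed-depth versus equivalent-soil-mass SOC stock calculation, with layer values chosen arbitrarily rather than taken from the study:

```python
def soc_fixed_depth(layers):
    """SOC stock (kg C m^-2) summed over sampled layers on a fixed-depth
    basis.  Each layer: (thickness m, bulk density kg m^-3, C mass
    fraction of fine earth, rock-fragment volume fraction)."""
    return sum(t * rho * c * (1.0 - rf) for t, rho, c, rf in layers)

def soc_equivalent_mass(layers, ref_mass):
    """SOC stock on an equivalent-soil-mass basis: accumulate carbon only
    until the cumulative fine-earth mass reaches ref_mass (kg m^-2),
    clipping the last layer."""
    stock, mass = 0.0, 0.0
    for t, rho, c, rf in layers:
        m = t * rho * (1.0 - rf)        # fine-earth mass of this layer
        take = min(m, ref_mass - mass)  # clip once the reference mass is reached
        stock += c * take
        mass += take
        if mass >= ref_mass:
            break
    return stock
```

Because the mass-based estimate fixes the amount of fine earth rather than the depth, a corer that compacts soil or misses rock fragments biases it far less, which is the paper's point (i).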
Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations
NASA Astrophysics Data System (ADS)
Merckelbach, Lucas
2016-12-01
Ocean gliders have become ubiquitous observation platforms in the ocean in recent years, and they are increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow-water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses when no other source of measured current information is available.
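The residual-current filtering step can be illustrated with a first-order Butterworth low-pass filter implemented directly (bilinear transform with frequency prewarping). The cutoff below is an arbitrary placeholder, not the value used in the paper:

```python
import math

def butter1_lowpass(x, fc, fs):
    """Causal first-order Butterworth low-pass filter, discretized with
    the bilinear transform and frequency prewarping.
    fc: cutoff frequency (Hz), fs: sampling rate (Hz)."""
    k = math.tan(math.pi * fc / fs)
    b0 = b1 = k / (k + 1.0)          # numerator coefficients
    a1 = (k - 1.0) / (k + 1.0)       # denominator coefficient
    y, xp, yp = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * xp - a1 * yp
        y.append(yn)
        xp, yp = xn, yn
    return y
```

In delayed mode the same filter can be run forward and then backward over the series to cancel the causal filter's phase lag (as scipy's `filtfilt` does), which is consistent with the improvement the paper reports for delayed-mode processing.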
NASA Astrophysics Data System (ADS)
Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.
2016-02-01
Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnostic technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method in the diagnosis and localization of brain tumors are investigated. In order to analyze the validity and reliability of the proposed method, two simulations were performed: (i) an in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom in order to simulate heat generation by a tumor in brain tissue; and (ii) a case report of a patient with parafalcine meningioma was presented to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data, and tumor temperature and depth of location were estimated. For the experimental tests, a novel assisted-surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor's power and depth. The results affirm that ITI-based ATS is a non-invasive method which can be useful to detect, localize and characterize brain tumors.
NASA Astrophysics Data System (ADS)
Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.
2018-07-01
The method we propose in this paper seeks to estimate interface displacements among strata related to seismic reflection events, in comparison with the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey and close to a well. However, the nature of the seismic data introduces uncertainty into the results. Therefore, we perform an uncertainty analysis using the standard deviations obtained from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace with an empirical wavelet and the sonic log of the well, close to the second seismic trace. Then, we relate the events of the seismic traces to the depth of the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of parallel cross-correlation, primarily those from the procedures used in the integration of the seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all seismic traces between two distant points of interest to correlate strata in the subsurface.
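The core alignment step, finding the lag at which two traces best match, can be sketched with a normalized cross-correlation; this is a generic illustration of the building block, not the paper's full workflow:

```python
import numpy as np

def best_lag(a, b):
    """Lag (in samples) maximizing the cross-correlation of two traces.
    A negative result means trace b is delayed relative to trace a."""
    a = (a - a.mean()) / a.std()     # standardize so amplitudes do not bias the peak
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full")
    return int(np.argmax(xc)) - (len(b) - 1)
```

An uncertainty analysis like the one described would repeat this over many window and parameter choices and take the standard deviation of the resulting lags.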
A MODIS-based vegetation index climatology
USDA-ARS?s Scientific Manuscript database
Passive microwave soil moisture algorithms must account for vegetation attenuation of the signal in the retrieval process. One approach to accounting for vegetation is to use vegetation indices such as the Normalized Difference Vegetation Index (NDVI) to estimate the vegetation optical depth. The pa...
Helioseismic Constraints on the Depth Dependence of Large-Scale Solar Convection
NASA Astrophysics Data System (ADS)
Woodard, Martin F.
2017-08-01
A recent helioseismic statistical waveform analysis of subsurface flow based on a 720-day time series of SOHO/MDI Medium-l spherical-harmonic coefficients has been extended to cover a greater range of subphotospheric depths. The latest analysis provides estimates of flow-dependent oscillation-mode coupling-strength coefficients b(s,t;n,l) over the range l = 30 to 150 of mode degree (angular wavenumber) for solar p-modes in the approximate frequency range 2 to 4 mHz. The range of penetration depths of this mode set covers most of the solar convection zone. The most recent analysis measures spherical harmonic (s,t) components of the flow velocity for odd s in the angular wavenumber range 1 to 19 for t not much smaller than s at a given s. The odd-s b(s,t;n,l) coefficients are interpreted as averages over depth of the depth-dependent amplitude of one spherical-harmonic (s,t) component of the toroidal part of the flow velocity field. The depth-dependent weighting function defining the average velocity is the fractional kinetic energy density in radius of modes of the (n,l) multiplet. The b coefficients have been converted to estimates of root velocity power as a function of l0 = nu0*l/nu(n,l), which is a measure of mode penetration depth. (nu(n,l) is mode frequency and nu0 is a reference frequency equal to 3 mHz.) A comparison of the observational results with simple convection models will be presented.
A COMPARISON OF AEROSOL OPTICAL DEPTH SIMULATED USING CMAQ WITH SATELLITE ESTIMATES
Satellite data provide new opportunities to study the regional distribution of particulate matter.
The aerosol optical depth (AOD) - a derived estimate from the satellite-measured radiance, can be compared against model estimates to provide an evaluation of the columnar ae...
USDA-ARS?s Scientific Manuscript database
The estimation of parameters of a flow-depth dependent furrow infiltration model and of hydraulic resistance, using irrigation evaluation data, was investigated. The estimated infiltration parameters are the saturated hydraulic conductivity and the macropore volume per unit area. Infiltration throu...
Flood-hazard mapping in Honduras in response to Hurricane Mitch
Mastin, M.C.
2002-01-01
The devastation in Honduras due to flooding from Hurricane Mitch in 1998 prompted the U.S. Agency for International Development, through the U.S. Geological Survey, to develop a country-wide systematic approach to flood-hazard mapping and a demonstration of the method at selected sites as part of a reconstruction effort. The design discharge chosen for flood-hazard mapping was the flood with an average return interval of 50 years; this selection was based on discussions with the U.S. Agency for International Development and the Honduran Public Works and Transportation Ministry. A regression equation for estimating the 50-year flood discharge, using drainage area and annual precipitation as the explanatory variables, was developed based on data from 34 long-term gaging sites. This equation, which has a standard error of prediction of 71.3 percent, was used in a geographic information system to estimate the 50-year flood discharge at any location on any river in the country. The flood-hazard mapping method was demonstrated at 15 selected municipalities. High-resolution digital elevation models of the floodplain were obtained using an airborne laser-terrain mapping system. Field verification showed that the digital elevation models had mean errors ranging from -0.57 to 0.14 meter in the vertical dimension. From these models, water-surface elevation cross sections were obtained and used in a numerical, one-dimensional, steady-flow step-backwater model to estimate water-surface profiles corresponding to the 50-year flood discharge. From these water-surface profiles, maps of area and depth of inundation were created at 13 of the 15 selected municipalities. At La Lima, only the area and depth of inundation at channel capacity in the city were mapped. At Santa Rosa de Aguan, no numerical model was created; the 50-year flood and the maps of area and depth of inundation are based on the estimated 50-year storm tide.
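Regional regressions of this kind are typically fit in log space. The sketch below illustrates the idea with synthetic data and made-up coefficients; the report's actual equation and coefficients are not reproduced here:

```python
import numpy as np

def fit_flood_regression(area, precip, q50):
    """Fit log10(Q50) = c0 + c1*log10(A) + c2*log10(P) by least squares,
    the usual form of a regional flood-frequency regression equation."""
    X = np.column_stack([np.ones(len(area)), np.log10(area), np.log10(precip)])
    coef, *_ = np.linalg.lstsq(X, np.log10(q50), rcond=None)
    return coef

def predict_q50(coef, area, precip):
    """Apply the fitted regression back in linear units."""
    c0, c1, c2 = coef
    return 10.0 ** (c0 + c1 * np.log10(area) + c2 * np.log10(precip))
```

Fitting in log space makes the multiplicative power-law model linear and keeps the relative (percent) prediction error roughly constant across basin sizes, which is why flood-frequency equations report a standard error of prediction in percent.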
NASA Technical Reports Server (NTRS)
Redemann, Jens; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; Burton, S.; Livingston, J.;
2014-01-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) measurements for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). We discuss some of the challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed. We also discuss a methodology for using the multi-sensor aerosol retrievals for aerosol type classification based on advanced clustering techniques. The combination of research results permits conclusions regarding the attribution of aerosol radiative forcing to aerosol type.
Sadek, H.S.; Rashad, S.M.; Blank, H.R.
1984-01-01
If proper account is taken of the constraints of the method, it is capable of providing depth estimates to within an accuracy of about 10 percent under suitable circumstances. The estimates are unaffected by source magnetization and are relatively insensitive to assumptions as to source shape or distribution. The validity of the method is demonstrated by analyses of synthetic profiles and profiles recorded over Harrat Rahat, Saudi Arabia, and Diyur, Egypt, where source depths have been proved by drilling.
Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists
NASA Astrophysics Data System (ADS)
Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.
2017-12-01
Modeling seasonal snow pack in alpine environments involves a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments with extreme gradients in topography, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalence (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application with which participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over the span of four months in the 2017 water year. This snow depth observation dataset is converted into an SWE dataset by employing an empirically based bulk-density and SWE estimation method. We then assimilate these data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output with the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal that pre-assimilation SWE estimates are consistently greater than post-assimilation estimates, and that the magnitude of the divergence increases throughout the snow pack evolution period. 
This research has implications beyond the Alaskan context because it increases our ability to constrain snow modeling outputs by making use of snow measurements collected by non-expert, citizen scientists.
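A depth-to-SWE conversion of the kind described can be sketched with a generic bulk-density model. The functional form and every coefficient below are illustrative assumptions, not the CSO project's actual method:

```python
import math

def swe_from_depth(depth_m, day_of_year,
                   rho_0=200.0, rho_max=450.0, k1=0.8, k2=0.02):
    """Snow water equivalent (m w.e.) from a depth measurement via a
    generic bulk-density model
        rho = rho_0 + (rho_max - rho_0) * (1 - exp(-k1*depth - k2*DOY)),
    with density in kg m^-3.  Every coefficient here is an illustrative
    placeholder, not a value from the CSO project."""
    rho = rho_0 + (rho_max - rho_0) * (1.0 - math.exp(-k1 * depth_m
                                                      - k2 * day_of_year))
    return depth_m * rho / 1000.0  # divide by the density of water
```

Models of this shape capture the two main densification drivers, overburden (depth) and age (day of year), which is why a single depth observation plus a date is enough to produce a usable SWE estimate for assimilation.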
NASA Astrophysics Data System (ADS)
Schwietzke, S.; Petron, G.; Conley, S. A.; Karion, A.; Tans, P. P.; Wolter, S.; King, C. W.; White, A. B.; Coleman, T.; Bianco, L.; Schnell, R. C.
2016-12-01
Confidence in basin scale oil and gas industry related methane (CH4) emission estimates hinges on an in-depth understanding, objective evaluation, and continued improvements of both top-down (e.g. aircraft measurement based) and bottom-up (e.g. emission inventories using facility- and/or component-level measurements) approaches. Systematic discrepancies of CH4 emission estimates between both approaches in the literature have highlighted research gaps. This paper is part of a more comprehensive study to expand and improve this reconciliation effort for a US dry shale gas play. This presentation will focus on refinements of the aircraft mass balance method to reduce the number of potential methodological biases (e.g. data and methodology). The refinements include (i) an in-depth exploration of the definition of upwind conditions and their impact on calculated downwind CH4 enhancements and total CH4 emissions, (ii) taking into account small but non-zero vertical and horizontal wind gradients in the boundary layer, and (iii) characterizing the spatial distribution of CH4 emissions in the study area using aircraft measurements. For the first time to our knowledge, we apply the aircraft mass balance method to calculate spatially resolved total CH4 emissions for 10 km x 60 km sub-regions within the study area. We identify higher-emitting sub-regions and localize repeating emission patterns as well as differences between days. The increased resolution of the top-down calculation will for the first time allow for an in-depth comparison with a spatially and temporally resolved bottom-up emission estimate based on measurements, concurrent activity data and other data sources.
NASA Astrophysics Data System (ADS)
Foster, K.; Dueker, K.; McClenahan, J.; Hansen, S. M.; Schmandt, B.
2012-12-01
The Transportable Array, with significant supplement from past PASSCAL experiments, provides an unprecedented opportunity for a holistic view of the geologically and tectonically diverse continent. New images from 34,000 Sp receiver functions reveal lithospheric and upper-mantle structure that has not previously been well constrained and that is significant to our understanding of upper-mantle processes and continental evolution. The negative velocity gradient (NVG) found beneath the Moho has been elusive and is often loosely termed the "Lithosphere-Asthenosphere Boundary" (LAB). This label is used by some researchers to indicate a rheological boundary, a thermal gradient, an anisotropic velocity contrast, or a compositional boundary, and much confusion has arisen around what the observed NVG arrivals manifest. Deconvolution across up to 400 stations simultaneously has enhanced the source wavelet estimation and allowed for more accurate receiver functions. In addition, Sdp converted phases are precursory to the direct S phase arrival, eliminating the issue of contamination from reverberated phases that add noise to Ps receiver functions in this lower-lithospheric and upper-mantle depth range. We present a taxonomy of the NVG arrivals beneath the Moho across the span of the Transportable Array (125° - 85° W). The NVG is classified into three categories, primarily distinguished by the estimated temperature at the depth of the arrival. The first species of Sp NVG arrivals is found in the region west of the Precambrian rift hinge line, at a depth range of 70 - 90 km, corresponding to a temperature of >1150° C. This temperature and depth is predicted to be supersolidus for a 0.02% weight H2O peridotite (Katz et al., 2004), supporting the theory that these arrivals are due to a melt-staging area (MSA), which could be correlated with the base of the thermal lithosphere. 
The current depth estimate of the cratonic US thermal LAB ranges from 150-220 km (Yuan and Romanowicz, 2010), and yet a pervasive arrival in our Sp and Ps images shows a NVG ranging from 80 - 110 km depth, with temperature estimates of ~800° C. Clearly internal to the lithosphere, this signal cannot be a LAB arrival. Hence, our second species of NVG is a Mid-Lithospheric Discontinuity (MLD) that we interpret as a layer of sub-solidus metasomatic minerals whose solidi lie in the 1000-1100° C range near 3 GPa. These low-solidus minerals are amphibole, phlogopite, and carbon-bearing phases. A freezing front (solidus) near 3 GPa would concentrate these low-velocity minerals into a metasomatic layer over Ga time scales, explaining our NVG MLD arrivals. A third species of NVG, in the "warm" category of 950-1150° C, exists beneath the intermountain west region of Laramide shortening that extends from Montana to New Mexico. This region has experienced abundant post-Eocene alkaline magmatism. Mantle xenoliths from this region provide temperature-at-depth measurements which agree with our surface-wave-velocity-based temperature estimates. Thus, this NVG arrival is interpreted as a near- to super-solidus metasomatic layer. Noteworthy is that a deeper arrival (150-190 km) is intermittently observed, which may be more closely related to the base of the thermal lithosphere.
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
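The joint-histogram step can be sketched as follows: for each short-integration depth bin, take the most frequent long-integration depth as the transfer-function value. This is a simplified reading of the calibration step, with the bin count chosen arbitrarily:

```python
import numpy as np

def depth_transfer(short_d, long_d, bins=64):
    """Estimate a depth transfer function from the joint histogram of two
    depth images: for each short-integration depth bin, take the most
    frequent long-integration depth.  Returns (bin centers, mapped depths)."""
    lo = min(short_d.min(), long_d.min())
    hi = max(short_d.max(), long_d.max())
    hist, xe, ye = np.histogram2d(short_d.ravel(), long_d.ravel(),
                                  bins=bins, range=[[lo, hi], [lo, hi]])
    short_centers = 0.5 * (xe[:-1] + xe[1:])
    long_centers = 0.5 * (ye[:-1] + ye[1:])
    # per short-depth bin, the mode of the corresponding long depths
    return short_centers, long_centers[np.argmax(hist, axis=1)]
```

Taking the per-bin mode rather than the mean makes the estimated mapping robust to the motion-artifact outliers that the long-integration image contains.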
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least-squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series when only a modest number of 14C dates are available. The method presented uses multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and confirming the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. 
This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.
Monocular Depth Perception and Robotic Grasping of Novel Objects
2009-06-01
resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show
Singh, Sukhdip; Wirth, Keith M.; Phelps, Amy L.; Badve, Manasi H.; Shah, Tanmay H.; Vallejo, Manuel C.
2013-01-01
Background. Previously, Balki determined that the Pearson correlation coefficient with the use of ultrasound (US) was 0.85 in morbidly obese parturients. We aimed to determine whether the use of the epidural depth equation (EDE) in conjunction with US can provide better clinical correlation in estimating the distance from the skin to the epidural space in morbidly obese parturients. Methods. One hundred sixty morbidly obese (≥40 kg/m2) parturients requesting labor epidural analgesia were enrolled. Before epidural catheter placement, the EDE was used to estimate the depth to the epidural space. This estimate was used to help visualize the epidural space in the transverse and midline longitudinal US views and to measure the depth to the epidural space. The measured epidural depth was made available to the resident trainee before needle insertion. Actual needle depth (ND) to the epidural space was recorded. Results. Pearson's correlation coefficients comparing actual (ND) versus US-estimated depth to the epidural space in the longitudinal median and transverse planes were 0.905 (95% CI: 0.873 to 0.929) and 0.899 (95% CI: 0.865 to 0.925), respectively. Conclusion. Use of the epidural depth equation (EDE) in conjunction with the longitudinal and transverse US views results in better clinical correlation than the use of US alone. PMID:23983645
NASA Astrophysics Data System (ADS)
Manessa, Masita Dwi Mandini; Kanno, Ariyo; Sagawa, Tatsuyuki; Sekine, Masahiko; Nurdin, Nurjannah
2018-01-01
Lyzenga's multispectral bathymetry formula has attracted considerable interest due to its simplicity. However, there has been little discussion of the effect that variation in optical conditions and bottom types, which commonly appears in coral reef environments, has on this formula's results. The present paper evaluates Lyzenga's multispectral bathymetry formula for a variety of optical conditions and bottom types. A noiseless dataset of above-water remote sensing reflectance from WorldView-2 images over Case-1 shallow coral reef water is simulated using a radiative transfer model. The simulation-based assessment shows that Lyzenga's formula performs robustly, with adequate generality and good accuracy, under a range of conditions. As expected, the influence of bottom type on depth estimation accuracy is far greater than the influence of other optical parameters, i.e., chlorophyll-a concentration and solar zenith angle. Further, based on the simulation dataset, Lyzenga's formula estimates depth when the bottom type is unknown almost as accurately as when the bottom type is known.
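Lyzenga's log-linear model expresses depth as h = a0 + sum_i a_i * X_i with X_i = ln(R_i - R_deep,i). A minimal calibration sketch follows; the band count, attenuation coefficients, and reflectances in the usage are synthetic assumptions:

```python
import numpy as np

def lyzenga_fit(refl, deep_refl, depth):
    """Calibrate Lyzenga's log-linear model h = a0 + sum_i a_i * X_i,
    X_i = ln(R_i - R_deep,i), by least squares against known depths.
    refl: (n_samples, n_bands) reflectances; deep_refl: (n_bands,)
    deep-water reflectances; depth: (n_samples,) known depths."""
    X = np.log(refl - deep_refl)
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
    return coef

def lyzenga_depth(coef, refl, deep_refl):
    """Apply a calibrated model to new reflectances."""
    X = np.log(refl - deep_refl)
    return coef[0] + X @ coef[1:]
```

Subtracting the deep-water reflectance and taking the logarithm linearizes the exponential attenuation of bottom-reflected light with depth, which is what makes the simple least-squares calibration possible.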
Stereo Correspondence Using Moment Invariants
NASA Astrophysics Data System (ADS)
Premaratne, Prashan; Safaei, Farzad
Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information from pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this is significant for use in UAVs and in small robotic vehicles.
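Moment invariants of the kind used for region matching can be computed directly; the sketch below derives the first two Hu invariants of an image patch. This is the generic formulation, not necessarily the exact metric used by the authors:

```python
import numpy as np

def hu_moments(patch):
    """First two Hu moment invariants of a (non-negative) image patch;
    invariant to translation, scale, and rotation of the patch content."""
    y, x = np.mgrid[:patch.shape[0], :patch.shape[1]]
    m00 = patch.sum()
    xb, yb = (x * patch).sum() / m00, (y * patch).sum() / m00  # centroid
    def eta(p, q):  # normalized central moment
        mu = (((x - xb) ** p) * ((y - yb) ** q) * patch).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return np.array([h1, h2])
```

Matching candidate regions by comparing such invariant vectors (e.g. by Euclidean distance) is far cheaper than dense window correlation, which is the computational advantage the abstract argues for.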
BERG2 Micro-computer Estimation of Freeze and Thaw Depths and Thaw Consolidation (PDF file)
DOT National Transportation Integrated Search
1989-06-01
The BERG2 microcomputer program uses a methodology similar to the Modified Berggren method (Aldrich and Paynter, 1953) to estimate the freeze and thaw depths in layered soil systems. The program also provides an estimate of the thaw consolidation in ic...
Kucuker, Mehmet Ali; Guney, Mert; Oral, H Volkan; Copty, Nadim K; Onay, Turgut T
2015-01-01
Land use management is one of the most critical factors influencing soil carbon storage and the global carbon cycle. This study evaluates the impact of land use change on the soil carbon stock in the Karasu region of Turkey, which in the last two decades has undergone substantial deforestation to expand hazelnut plantations. Analysis of seasonal soil data indicated that the carbon content decreased rapidly with depth for both land uses. Statistical analyses indicated that the difference between the surface carbon stocks (defined over 0-5 cm depth) in agricultural and forested areas is statistically significant (agricultural = 1.74 kg/m(2), forested = 2.09 kg/m(2), p = 0.014). On the other hand, the average carbon stocks estimated over the 0-1 m depth were 12.36 and 12.12 kg/m(2) in forested and agricultural soils, respectively. The carbon stocks (defined over 1 m depth) in the two land uses were not significantly different, which is attributed in part to the negative correlation between carbon stock and bulk density (-0.353, p < 0.01). The soil carbon stock over the entire study area was mapped using a conditional kriging approach which jointly uses the collected soil carbon data and satellite-based land use images. Based on the kriging map, the soil carbon stock (0-1 m depth) ranged from about 2 kg/m(2) in highly developed areas to more than 23 kg/m(2) in intensively cultivated areas, and the average soil carbon stock (0-1 m depth) was estimated as 10.4 kg/m(2). Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Wen; Li, Xiongyao; Wei, Guangfei; Wang, Shijie
2016-03-01
Indications of buried lunar bedrock may help us to understand the tectonic evolution of the Moon and provide clues to the formation of the lunar regolith. So far, information on the distribution and burial depth of lunar bedrock is far from sufficient. Owing to its good penetration ability, microwave radiation is a potential tool to ameliorate this problem. Here, a novel method to estimate the burial depth of lunar bedrock is presented using microwave data from the Chang'E-1 (CE-1) lunar satellite. The method is based on the spatial variation of the difference in brightness temperatures between 19.35 GHz and 37.0 GHz (ΔTB). Large differences are found in some regions, such as the southwest edge of Oceanus Procellarum, the area between Mare Tranquillitatis and Mare Nectaris, and the highland east of Mare Smythii. Interestingly, a large change of elevation is found in the corresponding regions, which might imply a shallow burial depth of lunar bedrock. To verify this deduction, a theoretical model is derived to calculate ΔTB. Results show that ΔTB varies from 12.7 K to 15 K when the burial depth of bedrock changes from 1 m to 0.5 m in the equatorial region. Based on the available data at low lunar latitudes (30°N-30°S), it is thus inferred that the southwest edge of Oceanus Procellarum, the area between Mare Tranquillitatis and Mare Nectaris, the highland east of Mare Smythii, and the edges of Pasteur and Chaplygin are areas with shallow bedrock, with burial depths estimated between 0.5 m and 1 m.
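The final mapping step amounts to thresholding the two-frequency brightness-temperature difference against the model-derived range. A minimal sketch, in which the sign convention of ΔTB and the use of its magnitude are assumptions:

```python
import numpy as np

def shallow_bedrock_mask(tb19, tb37, lo=12.7, hi=15.0):
    """Flag pixels whose 19.35/37.0 GHz brightness-temperature difference
    magnitude falls in the range the model associates with bedrock buried
    at roughly 0.5-1 m.  The sign convention of dTB is an assumption."""
    dtb = np.abs(tb19 - tb37)
    return (dtb >= lo) & (dtb <= hi)
```

Applied to gridded CE-1 brightness-temperature maps at low latitudes, a mask like this would delineate candidate shallow-bedrock regions such as those listed in the abstract.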
NASA Astrophysics Data System (ADS)
Omolaiye, Gabriel Efomeh; Ayolabi, Elijah A.
2010-09-01
A ground penetrating radar (GPR) survey was conducted on the Lekki Peninsula, Lagos State, Nigeria. The primary target of the survey was the delineation of underground septic tanks (ST). A total of four GPR profiles were acquired on the survey site using Ramac X3M GPR equipment with a 250 MHz antenna, chosen based on the depth of interest and resolution. An interpretable depth of penetration of 4.5 m below the surface was achieved after processing. The method accurately delineated five underground ST. The tops of the ST were easily identified on the radargram based on the strong-amplitude anomalies; the lengths and the depths to the base of the ST were estimated with 99 and 73 percent confidence, respectively. The continuous vertical profiles provide uninterrupted subsurface data along the lines of traverse, while the non-intrusive nature makes GPR an ideal tool for the accurate mapping and delineation of underground utilities.
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different accuracies are required depending on the application, so the most appropriate type of atmospheric correction for depth estimation must be determined. Accuracy in bathymetric information is highly dependent on the atmospheric correction applied to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5 % because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
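The Polcyn-Lyzenga family of algorithms mentioned above regresses depth against log-transformed, deep-water-corrected radiances. A minimal single-band sketch of that calibration step, on synthetic data rather than the paper's imagery:

```python
import numpy as np

def fit_lyzenga(radiance, deep_signal, known_depths):
    """Fit depth ~ a0 + a1 * ln(L - L_deep) by least squares.

    radiance: (n,) radiances at calibration points of known depth;
    deep_signal: optically deep water radiance for the same band.
    Returns (a0, a1)."""
    x = np.log(radiance - deep_signal)
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, known_depths, rcond=None)
    return coef

# Synthetic check: data generated with a0 = 2, a1 = 3 should be recovered
depths = np.array([1.0, 2.0, 5.0, 10.0])
L = 0.1 + np.exp((depths - 2.0) / 3.0)
a0, a1 = fit_lyzenga(L, 0.1, depths)
```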
Cornick, Leslie A.; Quakenbush, Lori T.; Norman, Stephanie A.; Pasi, Coral; Maslyk, Pamela; Burek, Kathy A.; Goertz, Caroline E. C.; Hobbs, Roderick C.
2016-01-01
Diving mammals use blubber for a variety of structural and physiological functions, including buoyancy, streamlining, thermoregulation, and energy storage. Estimating blubber stores provides proxies for body condition, nutritional status, and health. Blubber stores may vary topographically within individuals, across seasons, and with age, sex, and reproductive status; therefore, a single full-depth blubber biopsy does not provide an accurate measure of blubber depth, and additional biopsies are limited because they result in open wounds. We examined high-resolution ultrasound as a noninvasive method for assessing blubber stores by sampling blubber depth at 11 locations on beluga whales in Alaska. Blubber mass was estimated as a proportion of body mass (40% from the literature) and compared to a function of volume calculated using ultrasound blubber depth measurements in a truncated cone. Blubber volume was converted to total and mass-specific blubber mass estimates based on the density of beluga blubber. There was no significant difference in mean total blubber mass between the two estimates (R2 = 0.88); however, body mass alone predicted only 68% of the variation in mass-specific blubber stores in juveniles, 7% for adults in the fall, and 33% for adults in the spring. Mass-specific blubber stores calculated from ultrasound measurements were highly variable. Adults had significantly greater blubber stores in the fall (0.48 ± 0.02 kg/kg MB) than in the spring (0.33 ± 0.02 kg/kg MB). There was no seasonal effect in juveniles. High-resolution ultrasound is a more powerful, noninvasive method for assessing blubber stores in wild belugas, allowing for precise measurements at multiple locations. PMID:29899579
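The truncated-cone volume estimate works from girth-derived radii and ultrasound blubber depths at successive body stations; a minimal sketch with made-up measurements (the blubber density here is an assumed round number, not the study's value):

```python
import math

def frustum_volume(r1, r2, h):
    """Volume of a truncated cone with end radii r1, r2 and length h."""
    return math.pi * h * (r1 ** 2 + r1 * r2 + r2 ** 2) / 3.0

# One hypothetical body segment: outer radii from girth, inner radii
# obtained by subtracting the ultrasound blubber depth (~6 cm here).
outer = frustum_volume(0.50, 0.45, 1.0)
inner = frustum_volume(0.44, 0.39, 1.0)
blubber_mass_kg = (outer - inner) * 950.0  # assumed blubber density, kg/m^3
```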
Striker, Lora K.; Wild, Emily C.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 1.5 ft. Abutment scour ranged from 8.4 to 15.1 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Burns, Ronda L.
1997-01-01
northerly pier) and from 13.5 to 17.1 ft along Pier 2 (southerly pier). The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Burns, Ronda L.
1997-01-01
The computed contraction scour for all modelled flows was 0.0 feet. Abutment scour ranged from 5.3 to 8.2 ft. The worst-case abutment scour occurred at the right abutment for the incipient-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.
1997-01-01
Contraction scour for all modelled flows was zero. Abutment scour ranged from 7.8 to 10.1 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
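The Froehlich abutment-scour relation cited in these assessments has the HEC-18 form sketched below; the example inputs are hypothetical, and the K1/K2 defaults of 1.0 are placeholders for the abutment-shape and embankment-skew coefficients.

```python
def froehlich_abutment_scour(ya, L, Fr, K1=1.0, K2=1.0):
    """Froehlich abutment scour depth ys (same units as ya), HEC-18 form:
        ys / ya = 2.27 * K1 * K2 * (L / ya)**0.43 * Fr**0.61 + 1
    ya: average upstream flow depth; L: active abutment/embankment length;
    Fr: approach Froude number; K1, K2: shape and skew coefficients."""
    return ya * (2.27 * K1 * K2 * (L / ya) ** 0.43 * Fr ** 0.61 + 1.0)

# Hypothetical case: 5 ft flow depth, 50 ft active abutment, Fr = 0.3
ys = froehlich_abutment_scour(5.0, 50.0, 0.3)
```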
NASA Astrophysics Data System (ADS)
Goncharov, A. G.; Ionov, D. A.; Doucet, L. S.; Pokhilenko, L. N.
2012-12-01
Oxygen fugacity (fO2) and temperature variations in a complete lithospheric mantle section (70-220 km) of the central Siberian craton are estimated based on 42 peridotite xenoliths in the Udachnaya kimberlite. Pressure and temperature (P-T) estimates for the 70-140 km depth range closely follow the 40 mW/m2 model conductive geotherm but show a bimodal distribution at greater depths. A subset of coarse garnet peridotites at 145-180 km plots near the "cold" 35 mW/m2 geotherm whereas the majority of coarse and sheared rocks at ≥145 km scatter between the 40 and 45 mW/m2 geotherms. This P-T profile may reflect a perturbation of an initially "cold" lithospheric mantle through a combination of (1) magmatic under-plating close to the crust-mantle boundary and (2) intrusion of melts/fluids in the lower lithosphere accompanied by shearing. fO2 values estimated from Fe3+/∑Fe in spinel and/or garnet obtained by Mössbauer spectroscopy decrease from +1 to -4 Δlog fO2 (FMQ) from the top to the bottom of the lithospheric mantle (˜0.25 log units per 10 km) due to pressure effects on Fe2+-Fe3+ equilibria in garnet. Garnet peridotites from Udachnaya appear to be more oxidized than those from the Kaapvaal craton but show fO2 distribution with depth similar to those in the Slave craton. Published fO2 estimates for Udachnaya xenoliths based on C-O-H fluid speciation in inclusions in minerals from gas chromatography are similar to our results at ≤120 km, but are 1-2 orders of magnitude higher for the deeper mantle, possibly due to uncertainties of fO2 estimates based on experimental calibrations at ≤3.5 GPa. Sheared peridotites containing garnets with u-shaped, sinusoidal and humped REE patterns are usually more oxidized than Yb, Lu-rich, melt-equilibrated garnets, which show a continuous decrease from heavy to light REE. This further indicates that mantle redox state may be related to sources and modes of metasomatism.
Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m
NASA Astrophysics Data System (ADS)
Czarnota, Karol; Gorbatov, Alexei
2016-04-01
In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-to-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion.
Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
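The first-order physics linking an HVSR peak to interface depth is the quarter-wavelength resonance of the soft layer; the study performs a full nonlinear inversion, so the relation below is only the back-of-envelope version.

```python
def resonance_depth(f0_hz, vs_m_per_s):
    """Quarter-wavelength relation: the fundamental site resonance is
    f0 = Vs / (4 * h), so layer thickness h = Vs / (4 * f0)."""
    return vs_m_per_s / (4.0 * f0_hz)

print(resonance_depth(2.5, 400.0))  # 40.0 m for 400 m/s sediments
```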
Indenter flaw geometry and fracture toughness estimates for a glass-ceramic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shetty, D.K.; Duckworth, W.H.; Rosenfield, A.R.
1985-10-01
Shapes of cracks associated with Vickers indenter flaws in a glass-ceramic were assessed by stepwise polishing and measuring surface traces as a function of depth. The cracks were of the Palmqvist type even at 200-N indentation load. The load dependence of crack lengths and fracture toughness estimates were examined in terms of relations proposed for Palmqvist and half-penny cracks. Estimates based on the half-penny crack analogy were in closer agreement with bulk fracture toughness measurements despite the Palmqvist nature of the cracks.
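A common half-penny (median crack) toughness estimate of the kind compared in the paper is the Anstis-type relation sketched below; whether this exact calibration constant was used is an assumption, and the example inputs are invented.

```python
def half_penny_toughness(P, c, E, H, chi=0.016):
    """Anstis-type half-penny toughness estimate:
        Kc = chi * sqrt(E / H) * P / c**1.5
    P: indentation load (N); c: surface crack half-length (m);
    E: Young's modulus (Pa); H: hardness (Pa); chi: empirical constant."""
    return chi * (E / H) ** 0.5 * P / c ** 1.5

# Hypothetical glass-ceramic: 100 N load, 200 um half-length cracks
Kc = half_penny_toughness(100.0, 200e-6, 90e9, 6e9)  # Pa * m**0.5
```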
USDA-ARS?s Scientific Manuscript database
Many landscapes are comprised of a variety of vegetation types with different canopy structure, rooting depth, physiological characteristics, including response to environmental stressors, etc. Even in agricultural regions, different management practices, including crop rotations, irrigation schedu...
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r2 = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Anderson, Kyle; Segall, Paul
2013-01-01
Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
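The posterior sampling step can be illustrated with a minimal random-walk Metropolis sampler; a toy Gaussian target stands in for the physics-based likelihood, and this is not the authors' implementation.

```python
import math
import random

def metropolis(log_post, x0, step, n):
    """Minimal random-walk Metropolis sampler.

    log_post: unnormalized log-posterior of a scalar parameter;
    returns the chain of n samples."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(random.random()) < lpp - lp:  # accept/reject step
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: Gaussian "posterior" for a chamber-depth-like parameter
random.seed(0)
chain = metropolis(lambda d: -0.5 * ((d - 14.0) / 2.0) ** 2, 10.0, 1.0, 20000)
```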
NASA Astrophysics Data System (ADS)
Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.
2012-12-01
The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMSs can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g. modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on fractal distribution has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the southwest Deccan trap and probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS estimates indicate the complex nature of the Indian crust.
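The centroid method combines a source-top depth from the high-wavenumber spectral slope with a centroid depth from the wavenumber-scaled spectrum; the final step is the simple relation below (the fractal modification changes how Zt and Zo are estimated from the spectra, not this relation).

```python
def depth_to_bottom(z_top, z_centroid):
    """Centroid method: depth to the bottom of magnetic sources,
    Zb = 2 * Zo - Zt, where Zt is the depth to the top and Zo the
    centroid depth, both estimated from radially averaged power spectra."""
    return 2.0 * z_centroid - z_top

print(depth_to_bottom(2.0, 20.0))  # 38.0 km
```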
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, B.; Guinehut, S.; Verbrugge, N.; Cotroneo, Y.; Zambianchi, E.; Iudicone, D.
2017-12-01
The depth of the upper ocean mixed layer provides fundamental information on the amount of seawater that directly interacts with the atmosphere. Its space-time variability modulates water mass formation and carbon sequestration processes related to both the physical and biological pumps. These processes are particularly relevant in the Southern Ocean, where surface mixed-layer depth estimates are generally obtained either as climatological fields derived from in situ observations or through numerical simulations. Here we demonstrate that weekly observation-based reconstructions can be used to describe the variations of the mixed-layer depth in the upper ocean over a range of space and time scales. We compare and validate four different products obtained by combining satellite measurements of the sea surface temperature, salinity, and dynamic topography with in situ Argo profiles. We also compute an ensemble mean and use the corresponding spread to estimate mixed-layer depth uncertainties and to identify the more reliable products. The analysis points out the advantage of synergistic approaches that include as input the sea surface salinity observations obtained through a multivariate optimal interpolation. The corresponding data allow assessment of mixed-layer depth seasonal and interannual variability. Specifically, the maximum correlations between mixed-layer depth anomalies and the Southern Annular Mode are found at different time lags, related to distinct summer/winter responses in the main formation areas of Antarctic Intermediate Water and Sub-Antarctic Mode Waters.
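A common way such products define the mixed-layer depth is a density-threshold criterion; the sketch below uses the widely used 0.03 kg/m³ threshold relative to a 10 m reference level, which may differ from the criteria of the individual products compared in the study.

```python
def mixed_layer_depth(depths, sigma, ref_depth=10.0, threshold=0.03):
    """Threshold-criterion MLD: shallowest depth at which potential
    density exceeds its value at ref_depth by `threshold` (kg/m^3).
    depths: increasing depths (m); sigma: potential density anomalies."""
    ref = None
    for d, s in zip(depths, sigma):
        if ref is None and d >= ref_depth:
            ref = s  # density at the reference level
        if ref is not None and s - ref > threshold:
            return d
    return depths[-1]  # mixed down to the deepest observed level

print(mixed_layer_depth([0, 10, 20, 30, 40],
                        [25.00, 25.00, 25.01, 25.05, 25.20]))  # 30
```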
Vapor saturation and accumulation in magmas of the 1989-1990 eruption of Redoubt Volcano, Alaska
Gerlach, Terrance M.; Westrich, Henry R.; Casadevall, Thomas J.; Finnegan, David L.
1994-01-01
The 1989–1990 eruption of Redoubt Volcano, Alaska, provided an opportunity to compare petrologic estimates of SO2 and Cl emissions with estimates of SO2 emissions based on remote sensing data and estimates of Cl emissions based on plume sampling. In this study, we measure the sulfur and chlorine contents of melt inclusions and matrix glasses in the eruption products to determine petrologic estimates of SO2 and Cl emissions. We compare the results with emission estimates based on COSPEC and TOMS data for SO2 and data for Cl/SO2 in plume samples. For the explosive vent clearing period (December 14–22, 1989), the petrologic estimate for SO2 emission is 21,000 tons, or ~12% of a TOMS estimate of 175,000 tons. For the dome growth period (December 22, 1989 to mid-June 1990), the petrologic estimate for SO2 emission is 18,000 tons, or ~3% of COSPEC-based estimates of 572,000–680,000 tons. The petrologic estimates give a total SO2 emission of only 39,000 tons compared to an integrated TOMS/COSPEC emission estimate of ~1,000,000 tons for the whole eruption, including quiescent degassing after mid-June 1990. Petrologic estimates also appear to underestimate Cl emissions, but apparent HCl scavenging in the plume complicates Cl emission comparisons. Several potential sources of ‘excess sulfur’ often invoked to explain petrologic SO2 deficits are concluded to be unlikely for the 1989–1990 Redoubt eruption — e.g., breakdown of sulfides, breakdown of anhydrite, release of SO2 from a hydrothermal system, degassing of commingled infusions of basalt in the magma chamber, and syn-eruptive degassing of sulfur from melt present in non-erupted magma. Leakage and/or diffusion of sulfur from melt inclusions do not provide convincing explanations for the petrologic SO2 deficits either. 
The main cause of low petrologic estimates for SO2 is that melt inclusions do not represent the total sulfur content of the Redoubt magmas, which were vapor-saturated magmas carrying most of their sulfur in an accumulated vapor phase. Almost all the sulfur of the SO2 emissions was present prior to emission as accumulated magmatic vapor at 6–10 km depth in the magma that supplied the eruption; whole-rock normalized concentrations of gaseous excess S in these magmas remained at ~0.2 wt.% throughout the eruption, equivalent to ~0.7 vol.% at depth. Data for CO2 emissions during the eruption indicate that CO2 at whole-rock concentrations of ~0.6 wt.% in the erupted magma was a key factor in creating the vapor saturation and accumulation condition making a vapor phase source of excess sulfur possible at depth. When explosive volcanism involves magma with accumulated vapor, melt inclusions do not provide a sufficient basis for predicting SO2 emissions. Thus, petrologic estimates made for SO2 emissions during explosive eruptions of the past may be too low and may significantly underestimate impacts on climate and the chemistry of the atmosphere.
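The petrologic method itself is simple mass balance: degassed sulfur is the melt-inclusion minus matrix-glass sulfur content, scaled by the erupted melt mass and converted from S to SO2 by molar mass. The numbers below are invented for illustration, not the Redoubt inputs.

```python
def petrologic_so2_tons(magma_mass_t, melt_frac, s_incl_wt_pct, s_matrix_wt_pct):
    """Petrologic SO2 estimate (tons): erupted melt mass times the drop
    in dissolved sulfur (wt%), converted S -> SO2 (64/32 molar ratio)."""
    delta_s = (s_incl_wt_pct - s_matrix_wt_pct) / 100.0
    return magma_mass_t * melt_frac * delta_s * (64.0 / 32.0)

print(round(petrologic_so2_tons(1e8, 0.5, 0.05, 0.01), 2))  # 40000.0 tons
```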
Performance of velocity vector estimation using an improved dynamic beamforming setup
NASA Astrophysics Data System (ADS)
Munk, Peter; Jensen, Joergen A.
2001-05-01
Estimation of velocity vectors using transverse spatial modulation has previously been presented. Initially, the velocity estimation was improved using an approximated dynamic beamformer setup instead of a static one, combined with a new velocity estimation scheme. A new beamformer setup for dynamic control of the acoustic field, based on the Pulsed Plane Wave Decomposition (PPWD), is presented. The PPWD gives an unambiguous relation between a given acoustic field and the time functions needed on an array transducer for transmission. Applying this method to the receive beamformation results in a beamformer setup with a different filter for each channel at each estimation depth. The PPWD method is illustrated by analytical expressions of the decomposed acoustic field, and these results are used for simulation. Results of velocity estimates using the new setup are given on the basis of simulated and experimental data. The simulation setup approximates a scan of the carotid artery with a linear array. Measurement of flow perpendicular to the emission direction is possible using the approach of transverse spatial modulation. This is most often the case in a scan of the carotid artery, where the situation is handled by an angled Doppler setup in present ultrasound scanners. The modulation period of 2 mm is controlled over a range of 20-40 mm, which covers the typical range of the carotid artery. A 6 MHz array on a 128-channel system is simulated. The flow setup in the simulation is based on a vessel with a parabolic flow profile at 60- and 90-degree flow angles. The experimental results are based on the backscattered signal from a sponge mounted in a stepping device. The bias and standard deviation of the velocity estimate are calculated for four different flow angles (50, 60, 75 and 90 degrees). The velocity vector is calculated using the improved 2D estimation approach at a range of depths.
Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel
2015-01-01
Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback when combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
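The depth-indexed homography idea can be sketched as a lookup keyed by working distance; the toy table and the nearest-distance selection rule below are stand-ins for the calibration actually performed with the ground control points.

```python
import numpy as np

def apply_homography(H, x, y):
    """Map a pixel (x, y) through a 3x3 homography in homogeneous coords."""
    u, v, w = H @ np.array([x, y, 1.0])
    return float(u / w), float(v / w)

def select_homography(hlut, depth):
    """Pick the calibrated homography whose working distance is closest
    to the ToF depth measurement (nearest-neighbour lookup)."""
    nearest = min(hlut, key=lambda d: abs(d - depth))
    return hlut[nearest]

# Two-entry toy table; real tables come from per-distance calibration.
hlut = {1.0: np.eye(3), 2.0: np.diag([1.1, 1.1, 1.0])}
H = select_homography(hlut, 1.2)
print(apply_homography(H, 3.0, 4.0))  # (3.0, 4.0) with the identity entry
```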
Tracking of Pacific walruses in the Chukchi Sea using a single hydrophone.
Mouy, Xavier; Hannay, David; Zykov, Mikhail; Martin, Bruce
2012-02-01
The vocal repertoire of Pacific walruses includes underwater sound pulses referred to as knocks and bell-like calls. An extended acoustic monitoring program was performed in summer 2007 over a large region of the eastern Chukchi Sea using autonomous seabed-mounted acoustic recorders. Walrus knocks were identified in many of the recordings and most of these sounds included multiple bottom- and surface-reflected signals. This paper investigates the use of a localization technique based on relative multipath arrival times (RMATs) for potential behavior studies. First, knocks are detected using a semi-automated kurtosis-based algorithm. Then RMATs are matched to values predicted by a ray-tracing model. Walrus tracks with vertical and horizontal movements were obtained. The tracks included repeated dives between 4.0 m and 15.5 m depth and a deep dive to the sea bottom (53 m). The depths at which bell-like sounds are produced, the average knock production rate, and source level estimates of the knocks were determined. Bell sounds were produced at all depths throughout the dives. Average knock production rates varied from 59 to 75 knocks/min. The average source level of the knocks was estimated at 177.6 ± 7.5 dB re 1 μPa peak @ 1 m. © 2012 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Kwon, Seong Kyung; Hyun, Eugin; Lee, Jin-Hee; Lee, Jonghun; Son, Sang Hyuk
2017-11-01
Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection is still a challenging topic. We propose a new detection scheme for occluded pedestrian detection by means of lidar-radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. Occluded depth is a new means to determine whether an occluded target exists or not. The occluded depth is a region projected out by extending the longitudinal distance while maintaining the angle formed by the outermost two end points of the lidar RoI. The occlusion RoI is the overlapped region made by superimposing the radar RoI and the occluded depth. An object within the occlusion RoI is detected from the radar measurement information, and the occluded object is classified as a pedestrian based on the human Doppler distribution. Additionally, various experiments were performed on detecting a partially occluded pedestrian in outdoor as well as indoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance than the case without the proposed method.
Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,
1997-01-01
We analyze a simple, physically based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
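The controlling parameter is the product of a dimensionless lake volume and a dimensionless erosion rate, (V/D³)·(k/√(gD)). A one-line sketch (written here as `breach_parameter`; units must be SI-consistent, and the sample values are illustrative):

```python
import math

def breach_parameter(V, D, k, g=9.81):
    """Dimensionless dam-breach parameter from the abstract:
    (V / D**3), a dimensionless lake volume, times (k / sqrt(g*D)),
    a dimensionless erosion rate. V in m^3, D in m, k in m/s."""
    return (V / D**3) * (k / math.sqrt(g * D))
```

For a 10^6 m³ lake, 10 m deep, eroding at 1 m/h, the parameter is of order 0.03, i.e. well into the small-η asymptotic regime.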
Murray, Jessica R.; Minson, Sarah E.; Svarc, Jerry L.
2014-01-01
Fault creep, depending on its rate and spatial extent, is thought to reduce earthquake hazard by releasing tectonic strain aseismically. We use Bayesian inversion and a newly expanded GPS data set to infer the deep slip rates below assigned locking depths on the San Andreas, Maacama, and Bartlett Springs Faults of Northern California and, for the latter two, the spatially variable interseismic creep rate above the locking depth. We estimate deep slip rates of 21.5 ± 0.5, 13.1 ± 0.8, and 7.5 ± 0.7 mm/yr below 16 km, 9 km, and 13 km on the San Andreas, Maacama, and Bartlett Springs Faults, respectively. We infer that on average the Bartlett Springs fault creeps from the Earth's surface to 13 km depth, and below 5 km the creep rate approaches the deep slip rate. This implies that microseismicity may extend below the locking depth; however, we cannot rule out the presence of locked patches in the seismogenic zone that could generate moderate earthquakes. Our estimated Maacama creep rate, while comparable to the inferred deep slip rate at the Earth's surface, decreases with depth, implying a slip deficit exists. The Maacama deep slip rate estimate, 13.1 mm/yr, exceeds long-term geologic slip rate estimates, perhaps due to distributed off-fault strain or the presence of multiple active fault strands. While our creep rate estimates are relatively insensitive to choice of model locking depth, insufficient independent information regarding locking depths is a source of epistemic uncertainty that impacts deep slip rate estimates.
Reevaluation of mid-Pliocene North Atlantic sea surface temperatures
Robinson, Marci M.; Dowsett, Harry J.; Dwyer, Gary S.; Lawrence, Kira T.
2008-01-01
Multiproxy temperature estimation requires careful attention to biological, chemical, physical, temporal, and calibration differences of each proxy and paleothermometry method. We evaluated mid-Pliocene sea surface temperature (SST) estimates from multiple proxies at Deep Sea Drilling Project Holes 552A, 609B, 607, and 606, transecting the North Atlantic Drift. SST estimates derived from faunal assemblages, foraminifer Mg/Ca, and alkenone unsaturation indices showed strong agreement at Holes 552A, 607, and 606 once differences in calibration, depth, and seasonality were addressed. Abundant extinct species and/or an unrecognized productivity signal in the faunal assemblage at Hole 609B resulted in exaggerated faunal-based SST estimates but did not affect alkenone-derived or Mg/Ca–derived estimates. Multiproxy mid-Pliocene North Atlantic SST estimates corroborate previous studies documenting high-latitude mid-Pliocene warmth and refine previous faunal-based estimates affected by environmental factors other than temperature. Multiproxy investigations will aid SST estimation in high-latitude areas sensitive to climate change and currently underrepresented in SST reconstructions.
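Mg/Ca paleothermometry, one of the proxies compared here, typically rests on an exponential calibration of the form Mg/Ca = B·exp(A·T), inverted for temperature. A sketch with one published-style coefficient pair (A = 0.09, B = 0.38, assumed here for illustration; the specific calibration chosen is exactly the kind of inter-proxy difference the abstract says must be reconciled):

```python
import math

def mgca_to_sst(mgca, A=0.09, B=0.38):
    """Invert an exponential foraminiferal Mg/Ca calibration
    Mg/Ca = B * exp(A * T) for temperature T (deg C). The coefficients
    are one commonly cited choice, assumed here for illustration."""
    return math.log(mgca / B) / A
```

The round trip B·exp(A·T) → T recovers the input temperature exactly, and warmer water maps to higher Mg/Ca.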
Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.
2017-01-01
Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
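Once PIV yields surface velocities, discharge follows from converting them to depth-averaged velocities with a calibrated velocity index and integrating across the section. A minimal mid-section-style sketch (the 0.85 default index is a common rule of thumb, assumed here; the study calibrated site-specific indices):

```python
def discharge(widths, depths, surface_velocities, velocity_index=0.85):
    """Cross-section discharge from PIV surface velocities: convert each
    surface velocity to a depth-averaged velocity with a velocity index
    (0.85 is a common default, assumed here), then sum
    v_bar * depth * width over the verticals of the section."""
    return sum(velocity_index * v * d * w
               for v, d, w in zip(surface_velocities, depths, widths))
```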
Structural and Network-based Methods for Knowledge-Based Systems
2011-12-01
depth) provide important information about knowledge gaps in the KB. For example, if SuccessEstimate(causes-EventEvent, Typhoid-Fever, 1, 3) is equal to 0, it points toward a lack of biological knowledge about Typhoid-Fever in our KB. Similar information can also be obtained from the position of the consequent. Therefore, if Q does not contain Typhoid-Fever, then obtaining
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
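The Markov-chain consistency step can be illustrated as a shortest-path problem: one curb candidate per image row, scored by its feature cost plus a penalty on column jumps between rows, solved by dynamic programming. A sketch (data layout and cost weights are illustrative assumptions, not the authors' code):

```python
def best_curb_path(rows, smooth=1.0):
    """rows: one list of (column, cost) curb-point candidates per image row.
    Chooses one candidate per row minimizing the sum of candidate costs plus
    smooth * |column jump| between consecutive rows, by dynamic programming.
    Sketch of the Markov-chain consistency idea in the abstract."""
    prev = [cost for _, cost in rows[0]]      # best cost ending at row-0 candidates
    back = []                                 # backpointers, one list per later row
    for r in range(1, len(rows)):
        cur, ptr = [], []
        for col, cost in rows[r]:
            j = min(range(len(rows[r - 1])),
                    key=lambda k: prev[k] + smooth * abs(col - rows[r - 1][k][0]))
            cur.append(prev[j] + smooth * abs(col - rows[r - 1][j][0]) + cost)
            ptr.append(j)
        back.append(ptr)
        prev = cur
    j = min(range(len(prev)), key=lambda k: prev[k])
    path = [j]
    for ptr in reversed(back):                # backtrack to row 0
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [rows[i][j][0] for i, j in enumerate(path)]
```

The smoothness term encodes the curb's continuity, so an isolated low-cost outlier far from the chain is rejected in favor of a spatially consistent path.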
Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha
2013-09-01
Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
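The snapshot g(0) reduces to the fraction of time an animal produces regular echolocation clicks. A simplified sketch of pooling that fraction over tag records (the paper additionally models vocal behavior as a function of dive state; the record values below are illustrative):

```python
def g0_estimate(records):
    """Snapshot g(0): fraction of time an animal produces regular
    echolocation clicks, pooled over tag records given as
    (vocal_minutes, total_minutes) pairs. Returns the mean fraction and
    the between-record standard deviation. Simplified sketch of the
    tag-based estimate described in the abstract."""
    fracs = [v / t for v, t in records]
    mean = sum(fracs) / len(fracs)
    sd = (sum((f - mean) ** 2 for f in fracs) / (len(fracs) - 1)) ** 0.5
    return mean, sd
```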
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and surface irradiance have been established to describe the aerosol direct radiative effect (ADRE). However, considerable uncertainties remain associated with ADRE due to incorrect estimation of the clear-sky surface irradiance in the absence of aerosols. Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w), and the cosine of the solar zenith angle (μ) on surface irradiance are thoroughly considered, leading to an effective parameterization of clear-sky surface irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the aerosol-free irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from surface irradiance, or vice versa, show root-mean-square errors of 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive surface irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
Stratospheric aerosol optical depths, 1850-1990
NASA Technical Reports Server (NTRS)
Sato, Makiko; Hansen, James E.; Mccormick, M. Patrick; Pollack, James B.
1993-01-01
A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over Internet.
NASA Astrophysics Data System (ADS)
Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien
2015-06-01
The depth of an earthquake is difficult to estimate because of the trade-off between depth and origin-time estimation, and because the estimate can be biased by lateral Earth heterogeneities. To address this challenge, we have developed a new, blind and fully automatic teleseismic depth analysis. The results of this new method do not depend on epistemic uncertainties due to depth-phase picking and identification. The method is a modification of the cepstral analysis of Letort et al. and Bonner et al., which aims to detect surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the study of spectral holes in the shape of the signal spectrum. The ability of our automatic method to improve depth estimates is shown by relocating the recent moderate seismicity of the Guerrero subduction area (Mexico). We have estimated the depths of 152 events using teleseismic data from IRIS stations and arrays. One advantage of this method is that it can be applied to single stations (from IRIS) as well as to classical arrays. In the Guerrero area, our new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we have also validated our method by relocating the same events using the new International Seismological Centre (ISC)-locator algorithm, as well as by comparing our cepstral depths with the available Harvard Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (whose lateral locations are assumed to be well constrained, with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths.
Using information from the CMT catalogue on the predominant focal mechanism for this area, this bias can be explained as misidentification of sP phases as pP phases, which further demonstrates the value of this new automatic cepstral analysis, as it is less sensitive to phase identification.
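The cepstral idea can be demonstrated on an idealized record: a surface reflection such as pP trails the direct P by a fixed delay, which multiplies the amplitude spectrum by a ripple; the inverse FFT of the log spectrum then peaks at that delay. A sketch under a strongly simplified impulse model (not the paper's full algorithm):

```python
import numpy as np

def cepstral_echo_delay(x, min_quefrency=5):
    """Locate the dominant echo delay (in samples) as the peak of the real
    cepstrum: irfft of the log amplitude spectrum. The echo's spectral
    ripple becomes an additive periodic term whose cepstral peak sits at
    the echo lag. Idealized sketch of the cepstral depth-phase principle."""
    log_spec = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
    ceps = np.fft.irfft(log_spec, n=len(x))
    half = len(x) // 2
    # skip the low-quefrency region dominated by the source wavelet
    return min_quefrency + int(np.argmax(ceps[min_quefrency:half]))

# Idealized teleseismic record: direct P as a unit impulse plus a pP-like
# surface reflection 40 samples later with reflection coefficient 0.6
x = np.zeros(512)
x[0] = 1.0
x[40] = 0.6
delay = cepstral_echo_delay(x)
```

For this synthetic record, `delay` recovers the 40-sample echo lag; with a known velocity model, such a pP-P lag converts directly to focal depth.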
Estimated and measured bridge scour at selected sites in North Dakota, 1990-97
Williams-Sether, Tara
1999-01-01
A Level 2 bridge scour method was used to estimate scour depths at 36 selected bridge sites located on the primary road system throughout North Dakota. Of the 36 bridge sites analyzed, the North Dakota Department of Transportation rated 15 as scour critical. Flood and scour data were collected at 19 of the 36 selected bridge sites during 1990-97. Data collected were sufficient to estimate pier scour but not contraction or abutment scour. Estimated pier scour depths ranged from -10.6 to -1.2 feet, and measured bed-elevation changes at piers ranged from -2.31 to +2.37 feet. Comparisons between the estimated pier scour depths and the measured bed-elevation changes indicate that the pier scour equations overestimate scour at bridges in North Dakota. A Level 1.5 bridge scour method also was used to estimate scour depths at 495 bridge sites located on the secondary road system throughout North Dakota. The North Dakota Department of Transportation determined that 26 of the 495 bridge sites analyzed were potentially scour critical.
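Level 2 scour analyses in the U.S. typically evaluate pier scour with a CSU/HEC-18-style equation. A hedged sketch of that form (the correction factors K1-K3 and the default K3 = 1.1 are illustrative; the study's actual coefficients are not given in the abstract):

```python
def pier_scour_hec18(a, y1, Fr, K1=1.0, K2=1.0, K3=1.1):
    """CSU/HEC-18 style pier-scour depth:
    ys = 2.0 * y1 * K1*K2*K3 * (a/y1)**0.65 * Fr**0.43,
    with pier width a, approach flow depth y1, and approach Froude number
    Fr (consistent length units). Correction-factor defaults are
    illustrative placeholders."""
    return 2.0 * y1 * K1 * K2 * K3 * (a / y1) ** 0.65 * Fr ** 0.43
```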
Estimate of Cosmic Muon Background for Shallow Underground Neutrino Detectors
NASA Astrophysics Data System (ADS)
Casimiro, E.; Simão, F. R. A.; Anjos, J. C.
One of the severe limitations in detecting neutrino signals from nuclear reactors is that the copious cosmic-ray background imposes a time veto upon the passage of muons to reduce the number of fake signals due to muon-induced spallation neutrons. For this reason neutrino detectors are usually located underground, with a large overburden. However, practical limitations prevent locating the detectors at great depths. In order to decide the depth at which the Neutrino Angra Detector (currently in preparation) should be installed, an estimate of the cosmogenic background in the detector as a function of depth is required. We report here a simple analytical estimate of the muon rates in the detector volume for different plausible depths, assuming a simple flat-overburden geometry. We extend the calculation to the San Onofre neutrino detector and to the Double Chooz neutrino detector, where other estimates or measurements have been performed. Our estimated rates are consistent with those.
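A crude flat-overburden estimate can be sketched as an exponential attenuation of the surface muon flux with depth; the surface intensity and attenuation length below are hypothetical round numbers, far simpler than the paper's analytical treatment:

```python
import math

def muon_rate(depth_mwe, area_m2, I0=70.0, lam=15.0):
    """Very crude flat-overburden muon rate (muons/s) through a detector of
    horizontal area area_m2 at shallow depth depth_mwe (meters of water
    equivalent). I0 (surface flux, muons m^-2 s^-1) and the attenuation
    length lam (m.w.e.) are hypothetical round numbers for illustration;
    they are NOT the paper's values."""
    return area_m2 * I0 * math.exp(-depth_mwe / lam)
```

The point of such a sketch is only the trade-off it exposes: each additional attenuation length of overburden buys a factor e in veto dead-time reduction.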
Center for Seismic Studies Final Technical Report, October 1992 through October 1993
1994-02-07
Figure 42: Upper limit of depth error as a function of mb for estimates based on P and S waves for three networks: GSETT-2, ALPHA, and ALPHA + a 50-station network.
Sedimentation in Lake Onalaska, Navigation Pool 7, upper Mississippi River, since impoundment
Korschgen, C.E.; Jackson, G.A.; Muessig, L.F.; Southworth, D.C.
1987-01-01
Sediment accumulation was evaluated in Lake Onalaska, a 2800-ha backwater impoundment on the Upper Mississippi River. Computer programs were used to process fathometric charts and generate an extensive data set on water depth for the lake. Comparison of 1983 survey data with pre-impoundment (before 1937) data showed that Lake Onalaska had lost less than 10 percent of its original mean depth in the 46 years since impoundment. Previous estimates of sedimentation rates based on Cesium-137 sediment core analysis appear to have been too high. (DBO)
An improvement approach to the interpretation of magnetic data
NASA Astrophysics Data System (ADS)
Zhang, H. L.; Hu, X. Y.; Liu, T. Y.
2012-04-01
There are numerous existing semi-automated data-processing approaches that specialize in locating the edges and depths of potential-field sources. The mathematical expression of the tilt-angle has recently been developed into a depth-estimation routine known as "tilt-depth". Tilt-depth was first introduced by Salem et al. (2007) based on the tilt-angle, which uses first-order derivatives to detect edges. In this paper, we propose an improvement on the tilt-depth method based on the second-order derivatives of the reduced-to-pole (RTP) magnetic field, called edge detection and depth estimation based on vertical second-order derivatives (V2D-depth). Under certain assumptions, namely that the contacts are nearly vertical with infinite depth extent and the magnetic field is vertical or RTP, the general expression published by Nabighian (1972) for the magnetic field over a contact located at horizontal location x = 0 and depth z0 is

ΔT(x, z) = 2kFc · arctan[x / (z0 − z)], (1)

where k is the susceptibility contrast at the contact, F the magnitude of the magnetic field, c = 1 − cos²i · sin²A, A the angle between the positive h-axis and magnetic north, and i the inclination of the Earth's field. The horizontal and vertical derivatives of the magnetic field can be written as

dΔT/dh = 2kFc · (z0 − z) / [x² + (z0 − z)²], (2)
dΔT/dz = 2kFc · x / [x² + (z0 − z)²]. (3)

Based on Equations 2 and 3, we have

Tzz = d²ΔT/dz² = 2kFc · 2x(z0 − z) / [x² + (z0 − z)²]², (4)
Tzh = d²ΔT/dzdh = 2kFc · [(z0 − z)² − x²] / [x² + (z0 − z)²]², (5)
TzG = √(Tzh² + Tzz²) = 2kFc · [x² + (z0 − z)²] / [x² + (z0 − z)²]². (6)

Using Equations 4, 5 and 6, when z = 0, we get

Tzz / (TzG + Tzh) = x / z0. (7)

The V2D-depth is defined as

θ = tan⁻¹[Tzz / (TzG + Tzh)] = tan⁻¹(x / z0). (8)

The V2D-depth amplitudes are restricted to values between −45° and +45°. It has the same interesting properties as the tilt-depth: its response varies from negative to positive.
Its value is negative outside the source region, passes through zero over, or near, the edge, and is positive over the source. Thus it not only outlines the edge but also indicates the relative magnetization contrast. The zero amplitude of the first-order vertical derivative used by tilt-depth is not the best edge detector. The tilt-depth method calculates the depth to the top by measuring the physical distance between tilt-angle contour pairs, with particular emphasis on the locus of the complementary 0° and ±45° pairs. As Salem et al. pointed out in 2007, because of anomaly interference and the breakdown of the two-dimensionality assumption, the distance between the ±45° contours and the 0° contour is not everywhere identical around the perimeter of each body. Compared with the tilt-depth approach, the V2D-depth method obtains a clearer field-source edge and inverts a more realistic depth, while also overcoming the interference from superimposed anomalies that affects the tilt-depth approach. Numerical experiments show the method is effective.
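The key identity behind V2D-depth can be checked numerically: at observation level z = 0, the ratio Tzz/(TzG + Tzh) for the contact model reduces to x/z0, so θ crosses 0° over the edge and reaches ±45° exactly one depth z0 to either side. A sketch (with the common factor 2kFc cancelled):

```python
import numpy as np

# Second derivatives of the vertical-contact anomaly at z = 0,
# with the common factor 2kFc divided out.
z0 = 2.0                                    # depth to top of the contact
x = np.linspace(-10.0, 10.0, 4001)
den = (x**2 + z0**2) ** 2
Tzz = 2.0 * x * z0 / den                    # Eq. (4)
Tzh = (z0**2 - x**2) / den                  # Eq. (5)
TzG = np.hypot(Tzz, Tzh)                    # Eq. (6): reduces to 1/(x^2 + z0^2)
theta = np.degrees(np.arctan2(Tzz, TzG + Tzh))   # Eq. (8), in degrees
```

The ±45° contours land at x = ±z0, so the horizontal distance between the 0° and 45° contours reads off the source depth directly.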
Fabrizio, Mary C.; Adams, Jean V.; Curtis, Gary L.
1997-01-01
The Lake Michigan fish community has been monitored since the 1960s with bottom trawls, and since the late 1980s with acoustics and midwater trawls. These sampling tools are limited to different habitats: bottom trawls sample fish near bottom in areas with smooth substrates, and acoustic methods sample fish throughout the water column above all substrate types. We compared estimates of fish densities and species richness from daytime bottom trawling with those estimated from night-time acoustic and midwater trawling at a range of depths in northeastern Lake Michigan in summer 1995. We examined estimates of total fish density as well as densities of alewife Alosa pseudoharengus (Wilson), bloater Coregonus hoyi (Gill), and rainbow smelt Osmerus mordax (Mitchill) because these three species are the dominant forage of large piscivores in Lake Michigan. In shallow water (18 m), we detected more species but fewer fish (in fish/ha and kg/ha) with bottom trawls than with acoustic-midwater trawling. Large aggregations of rainbow smelt were detected by acoustic-midwater trawling at 18 m and contributed to the differences in total fish density estimates between gears at this depth. Numerical and biomass densities of bloaters from all depths were significantly higher when based on bottom trawl samples than on acoustic-midwater trawling, and this probably contributed to the observed significant difference between methods for total fish densities (kg/ha) at 55 m. Significantly fewer alewives per ha were estimated from bottom trawling than from acoustic-midwater trawling at 55 m, and in deeper waters, no alewives were taken by bottom trawling. The differences detected between gears resulted from alewife, bloater, and rainbow smelt vertical distributions, which varied with lake depth and time of day. Because Lake Michigan fishes are both demersal and pelagic, a single sampling method cannot be used to completely describe characteristics of the fish community.
Estimation of River Bathymetry from ATI-SAR Data
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2013-12-01
A framework for estimation of river bathymetry from surface velocity observations is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that best fits the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data were collected using a dual-beam squinted along-track-interferometric synthetic-aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom-friction coefficient.
The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.
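The descent idea can be miniaturized: replace the adjoint-constrained shallow-water model with per-station mass conservation u = q/d for an assumed known unit-width discharge q, and update depth down the misfit gradient with a curvature-scaled step. A toy sketch (all numbers illustrative; the real method is the full 2D adjoint formulation):

```python
# Toy depth-from-surface-velocity fit. The paper minimizes a velocity-misfit
# cost constrained by the 2D shallow-water equations via an adjoint model;
# here the flow model is shrunk to per-station mass conservation u = q / d,
# and J(d) = 0.5 * sum((q/d - u_obs)**2) is minimized by a descent step
# scaled by the local curvature (Gauss-Newton-like, with 0.5 damping).
q = 2.0                                   # unit-width discharge, m^2/s (assumed)
u_obs = [1.0, 2.0, 4.0]                   # "observed" surface velocities, m/s
depth = [1.0, 1.0, 1.0]                   # initial bathymetry guess, m

for _ in range(100):
    for i, u in enumerate(u_obs):
        r = q / depth[i] - u                       # velocity residual
        depth[i] += 0.5 * r * depth[i] ** 2 / q    # scaled descent step
```

With these numbers the recovered depths converge to q/u = [2.0, 1.0, 0.5] m, echoing the abstract's observation that faster (shallower) stations pin down depth more sharply than slow, deep ones.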
Noble, J.E.; Bush, P.W.; Kasmarek, M.C.; Barbie, D.L.
1996-01-01
In 1989, the U.S. Geological Survey, in cooperation with the Harris-Galveston Coastal Subsidence District, began a field study to determine the depth to the water table and to estimate the rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas. The study area comprises about 2,000 square miles of outcrops of the Chicot and Evangeline aquifers in northwest Harris County, Montgomery County, and southern Walker County. Because of the scarcity of measurable water-table wells, depth to the water table below land surface was estimated using a surface geophysical technique, seismic refraction. The water table in the study area generally ranges from about 10 to 30 feet below land surface and typically is deeper in areas of relatively high land-surface altitude than in areas of relatively low land-surface altitude. The water table has shown no long-term trends since ground-water development began, with the probable exception of the water table in the Katy area, where the water table is more than 75 feet deep, probably due to ground-water pumpage from deeper zones. An estimated rate of recharge in the aquifer outcrops was computed using the interface method, in which environmental tritium is used as a ground-water tracer. The estimated average total recharge rate in the study area is 6 inches per year. This rate is an upper bound on the average recharge rate during the 37 years 1953-90 because it is based on the deepest penetration (about 80 feet) of postnuclear-testing tritium concentrations. The rate, which represents one of several components of a complex regional hydrologic budget, is considered reasonable but is not definitive because of uncertainty regarding the assumptions and parameters used in its computation.
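The seismic-refraction estimate of depth to the water table follows the standard two-layer crossover-distance formula. A sketch (the velocities and geometry below are illustrative, not the study's values):

```python
import math

def water_table_depth(crossover_m, v1, v2):
    """Depth to a refractor (here, the water table) from a two-layer
    seismic-refraction survey: z = (x_c / 2) * sqrt((v2 - v1) / (v2 + v1)),
    where x_c is the crossover distance at which the refracted arrival
    overtakes the direct wave, and v1 < v2 are the upper- and lower-layer
    velocities. Illustrates the surface geophysical technique named above."""
    return (crossover_m / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))
```

Inverting the forward relation x_c = 2z·√((v2 + v1)/(v2 − v1)) recovers the assumed depth exactly.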
Completing the Feedback Loop: The Impact of Chlorophyll Data Assimilation on the Ocean State
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Keppenne, Christian; Kovach, Robin
2015-01-01
In anticipation of the integration of a full biochemical model into the next-generation GMAO coupled system, an intermediate solution has been implemented to estimate the penetration depth (Kd_PAR) of ocean radiation based on the chlorophyll concentration. The chlorophyll is modeled as a tracer with sources and sinks coming from the assimilation of MODIS chlorophyll data. Two experiments were conducted with the coupled ocean-atmosphere model. In the first, climatological values of Kd_PAR were used. In the second, retrieved daily chlorophyll concentrations were assimilated and Kd_PAR was derived according to Morel et al. (2007). No other data were assimilated, in order to isolate the effects of the time-evolving chlorophyll field. The daily MODIS Kd_PAR product was used to validate the skill of the penetration-depth estimation, and the MERRA-OCEAN reanalysis was used as a benchmark to study the sensitivity of the upper-ocean heat content and vertical temperature distribution to the chlorophyll input. In the experiment with daily chlorophyll data assimilation, the penetration depth was estimated more accurately, especially in the tropics. As a result, the temperature bias of the model was reduced. A notably robust albeit small (2-5 percent) improvement was found across the equatorial Pacific Ocean, which is a critical region for seasonal to inter-annual prediction.
NASA Astrophysics Data System (ADS)
Qu, W.; Bogena, H. R.; Huisman, J. A.; Martinez, G.; Pachepsky, Y. A.; Vereecken, H.
2013-12-01
Soil water content (SWC) is a key variable in the soil-vegetation-atmosphere continuum, with high spatial and temporal variability. Temporal stability of SWC has been observed in multiple monitoring studies, and quantifying the controls on soil moisture variability and temporal stability is of substantial interest. The objective of this work was to assess the effect of soil hydraulic parameters on temporal stability. Inverse modeling based on long time series of SWC observed with an in-situ sensor network was used to estimate the van Genuchten-Mualem (VGM) soil hydraulic parameters in a small grassland catchment located in western Germany. For the inverse modeling, the shuffled complex evolution (SCE) optimization algorithm was coupled with the HYDRUS-1D code. We considered two cases: without and with prior information about the correlation between VGM parameters. The temporal stability of observed SWC was well pronounced at all observation depths. Both the spatial variability of SWC and the robustness of temporal stability increased with depth. Calibrated models both with and without prior information provided reasonable correspondence between simulated and measured time series of SWC. Furthermore, we found a linear relationship between the mean relative difference (MRD) of SWC and the saturated SWC (θs). Also, the logarithm of saturated hydraulic conductivity (Ks), the VGM parameter n, and the logarithm of α were strongly correlated with the MRD of saturation degree for the prior-information case, but no correlation was found for the non-prior-information case except at the 50 cm depth. Based on these results, we propose that establishing relationships between temporal stability and spatial variability of soil properties is a promising research avenue for better understanding the controls on soil moisture variability.
Figure: Correlation between mean relative difference of soil water content (or saturation degree) and inversely estimated soil hydraulic parameters (log10(Ks), log10(α), n, and θs) at 5-cm, 20-cm, and 50-cm depths. Solid circles represent parameters estimated using prior information; open circles represent parameters estimated without prior information.
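The VGM parameters being estimated (θr, θs, α, n, with m = 1 − 1/n) enter through the van Genuchten retention curve. A sketch of that curve (the parameter values used below are illustrative, not the catchment's calibrated ones):

```python
def vg_water_content(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve used by the VGM model:
    theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)**n)**m,
    with the Mualem constraint m = 1 - 1/n. h is pressure head (negative
    in unsaturated soil; same length units as 1/alpha)."""
    if h >= 0:
        return theta_s           # saturated
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m
```

Water content runs from θs at saturation down toward θr as the soil dries, which is what ties the fitted θs directly to the MRD relationship reported above.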
Temperature regime and water/hydroxyl behavior in the crater Boguslawsky on the Moon
NASA Astrophysics Data System (ADS)
Wöhler, Christian; Grumpe, Arne; Berezhnoy, Alexey A.; Feoktistova, Ekaterina A.; Evdokimova, Nadezhda A.; Kapoor, Karan; Shevchenko, Vladislav V.
2017-03-01
In this work we examine the lunar crater Boguslawsky as a typical region of the illuminated southern lunar highlands with regard to its temperature regime and the behavior of the depth of the water/hydroxyl-related spectral absorption band near 3 μm wavelength. For estimating the surface temperature, we compare two different methods, the first of which is based on raytracing and the simulation of heat diffusion in the upper regolith layer, while the second relies on the thermal equilibrium assumption and uses Moon Mineralogy Mapper (M³) spectral reflectance data to estimate the wavelength-dependent thermal emissivity. A method for taking into account the surface roughness in the estimation of the surface temperature is proposed. Both methods yield consistent results that coincide within a few K. By constructing a map of the maximum surface temperatures and comparing with the volatility temperatures of Hg, S, Na, Mg, and Ca, we determine regions in which these volatile species might form stable deposits. Based on M³ data of the crater Boguslawsky acquired at different times of the lunar day, it is found that the average OH absorption depth is higher in the morning than at midday. In the morning, a dependence of the OH absorption depth on the local surface temperature is observed, which is no longer apparent at midday. This suggests that water/OH accumulates on the surface during the lunar night and largely disappears during the first half of the lunar day. We furthermore model the time dependence of the OH fraction remaining on the surface after exposure to the temporally integrated solar flux. In the morning, the OH absorption depth is not correlated with the remaining fraction of OH-containing species, indicating that the removal of water and/or OH-bearing species is mainly due to thermal evaporation after sunrise.
In contrast, at midday the OH absorption depth increases with increasing remaining fraction of OH-containing species, suggesting photolysis by solar photons as the main mechanism for removal of the remaining OH-containing species later in the lunar day.
Ugalde, A.; Pujades, L.G.; Canas, J.A.; Villasenor, A.
1998-01-01
Northeastern Venezuela has been studied in terms of coda wave attenuation using seismograms from local earthquakes recorded by a temporary short-period seismic network. The study area was divided into two subregions in order to investigate lateral variations in the attenuation parameters. Coda attenuation (Qc^-1) has been obtained using single-scattering theory. The contributions of intrinsic absorption (Qi^-1) and scattering (Qs^-1) to total attenuation (Qt^-1) have been estimated by means of a multiple lapse time window method, based on the hypothesis of multiple isotropic scattering with a uniform distribution of scatterers. Results show significant spatial variations in attenuation: the estimates for intermediate-depth events and for shallow events differ substantially. This may be related to different tectonic characteristics associated with the Lesser Antilles subduction zone, because the intermediate-depth seismic zone may coincide with the southern continuation of the subducting slab under the arc.
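The single-scattering coda model referred to above is commonly linearized and fitted in log space. A minimal sketch, assuming the standard Aki-Chouet decay law A(t) = S·t⁻¹·exp(−πft/Qc) with invented window and Qc values (not the paper's data):

```python
import numpy as np

def estimate_coda_q(t, amplitude, freq):
    """Estimate coda Q from the single-scattering decay model
    A(t) = S * t**-1 * exp(-pi*f*t/Qc).
    Linearizing, ln(A*t) = ln(S) - (pi*f/Qc)*t, so a straight-line
    fit of ln(A*t) against lapse time t gives Qc from the slope."""
    y = np.log(amplitude * t)
    slope, _ = np.polyfit(t, y, 1)
    return -np.pi * freq / slope

# Synthetic coda envelope with a known Qc, as a sanity check
f, qc_true = 6.0, 250.0
t = np.linspace(20.0, 60.0, 200)                 # lapse-time window (s)
a = 1e3 * t**-1 * np.exp(-np.pi * f * t / qc_true)
qc_est = estimate_coda_q(t, a, f)
```

In practice the envelope is smoothed and the fit restricted to lapse times beyond roughly twice the S-wave travel time; the sketch omits those steps.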
A direct-measurement technique for estimating discharge-chamber lifetime [for ion thrusters]
NASA Technical Reports Server (NTRS)
Beattie, J. R.; Garvin, H. L.
1982-01-01
The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich-texture regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in terms of precision. PMID:26308003
Sparse estimation of model-based diffuse thermal dust emission
NASA Astrophysics Data System (ADS)
Irfan, Melis O.; Bobin, Jérôme
2018-03-01
Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
NASA Astrophysics Data System (ADS)
Letort, Jean; Retailleau, Lise; Boué, Pierre; Radiguet, Mathilde; Gardonio, Blandine; Cotton, Fabrice; Campillo, Michel
2018-05-01
Detections of pP and sP phase arrivals (the so-called depth phases) at teleseismic distance provide one of the best ways to estimate earthquake focal depth, as the P-pP and P-sP delays are strongly dependent on the depth. Based on a new processing workflow and using a single seismic array at teleseismic distance, we can estimate the depth of clusters of small events down to magnitude Mb 3.5. Our method provides a direct view of the relative variations of the seismicity depth in an active area. This study focuses on the application of this new methodology to study the lateral variations of the Guerrero subduction zone (Mexico) using the Eielson seismic array in Alaska (USA). After denoising the signals, 1232 Mb 3.5+ events were detected, with clear P, pP, sP and PcP arrivals. A high-resolution view of the lateral variations of the depth of the seismicity of the Guerrero-Oaxaca area is thus obtained. The seismicity is shown to be mainly clustered along the interface, coherently following the geometry of the plate as constrained by the receiver-function analysis along the Meso America Subduction Experiment profile. From this study, the hypothesis of tears in the western part of Guerrero and the eastern part of Oaxaca is strongly confirmed by dramatic lateral changes in the depth of the earthquake clusters. The presence of these two tears might explain the observed lateral variations in seismicity, which is correlated with the boundaries of the slow slip events.
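The depth sensitivity of the P-pP delay can be illustrated with a crude half-space approximation (real studies use travel-time tables and ray tracing instead). The velocity and takeoff angle below are assumed values for illustration only:

```python
import math

def focal_depth_from_pP(delay_s, v_p=6.5, takeoff_deg=20.0):
    """Approximate focal depth (km) from the P-pP delay, assuming a
    homogeneous near-source P velocity v_p (km/s) and an upgoing pP leg
    with takeoff angle i measured from vertical:
        t_pP - t_P ~= 2*h*cos(i)/v_p  ->  h = delay*v_p / (2*cos(i))
    This is a textbook approximation, not the paper's procedure."""
    i = math.radians(takeoff_deg)
    return delay_s * v_p / (2.0 * math.cos(i))

depth_km = focal_depth_from_pP(12.0)    # a hypothetical 12 s P-pP delay
```

The formula makes the abstract's point concrete: a 1 s change in the P-pP delay maps to roughly 3 km of depth, which is why depth phases resolve focal depth so well.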
NASA Astrophysics Data System (ADS)
Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian
2017-04-01
The application of heat as a hydrological tracer has become a standard method for quantifying water fluxes between groundwater and surface water. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated by a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist for estimating water fluxes from the observed temperatures. The underlying assumption of a stationary, one-dimensional vertical flow field is frequently violated in natural systems, where subsurface water flow often has a significant horizontal component. We developed a methodology for identifying the geometry of the subsurface flow field based on the variation of diurnal temperature amplitudes with depth. For instance, purely vertical heat transport is characterized by an exponential decline of temperature amplitudes with increasing depth, whereas purely horizontal flow would be indicated by a constant, depth-independent vertical amplitude profile. The decline of temperature amplitudes with depth was fitted by polynomials of different order, with the best fit selected using the Akaike Information Criterion. This stepwise model optimization and selection, evaluating the shape of the vertical amplitude ratio profiles, was used to determine the predominant subsurface flow field, which could be systematically categorized into purely vertical and horizontal (hyporheic, parafluvial) components. Analytical solutions for estimating water fluxes from the observed temperatures are restricted to specific boundary conditions such as a sinusoidal upper temperature boundary. In contrast, numerical solutions offer higher flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated in analytical solutions.
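The exponential-damping diagnostic for purely vertical transport can be sketched with a log-linear fit; sensor depths, amplitudes, and the damping depth below are invented for illustration:

```python
import numpy as np

# Purely vertical (conduction-dominated) transport damps the diurnal
# amplitude roughly exponentially with depth, A(z) = A0 * exp(-z/zd),
# so a straight-line fit of ln A against z recovers the damping depth zd.
z = np.array([0.05, 0.10, 0.20, 0.30, 0.50])     # sensor depths (m)
amp = 4.0 * np.exp(-z / 0.15)                    # observed amplitudes (K)
slope, intercept = np.polyfit(z, np.log(amp), 1)
zd = -1.0 / slope                                # damping depth (m)
a0 = np.exp(intercept)                           # extrapolated surface amplitude (K)
```

A profile that deviates strongly from this exponential shape (e.g. near-constant amplitudes) would point to a horizontal flow component, which is the basis of the classification described above.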
There are several numerical models that simulate heat transport in porous media (e.g. VS2DH, HydroGeoSphere, FEFLOW), but their modelling frameworks can have a steep learning curve and may therefore not be readily accessible for routinely inferring water fluxes between groundwater and surface water. We developed user-friendly, straightforward software to estimate water FLUXes Based On Temperatures (FLUX-BOT). FLUX-BOT is a numerical code written in MATLAB that calculates time-variable vertical water fluxes in saturated sediments based on the inversion of temperature time series measured at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation (FLUX-BOT can be downloaded from the following web site: https://bitbucket.org/flux-bot/flux-bot). We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance. Both the empirical analysis of temperature amplitudes and the numerical inversion of measured temperature time series to estimate the magnitude of vertical water fluxes extend the suite of current heat-tracing methods and may provide insight into temperature data from an additional perspective.
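A Crank-Nicolson scheme of the kind FLUX-BOT is described as using can be sketched in a few lines. This is a generic forward model only (the inversion step is omitted), and the thermal diffusivity, thermal front velocity, grid, and boundary temperatures are illustrative assumptions, not values from the software:

```python
import numpy as np

def crank_nicolson_step(T, dz, dt, kappa, v):
    """One Crank-Nicolson step of the 1-D heat advection-conduction
    equation  dT/dt = kappa * d2T/dz2 - v * dT/dz  with the first and
    last nodes held fixed (Dirichlet boundaries)."""
    n = T.size
    r = kappa * dt / dz**2
    c = v * dt / (2.0 * dz)                  # centred advection term
    A = np.eye(n)
    B = np.eye(n)
    for i in range(1, n - 1):
        A[i, i-1] = -(r + c) / 2.0; A[i, i] = 1.0 + r; A[i, i+1] = -(r - c) / 2.0
        B[i, i-1] =  (r + c) / 2.0; B[i, i] = 1.0 - r; B[i, i+1] =  (r - c) / 2.0
    return np.linalg.solve(A, B @ T)

# Diurnal sine at the sediment surface, constant temperature at depth
z = np.linspace(0.0, 0.5, 51)                # 0-50 cm profile, dz = 1 cm
T = np.full(z.size, 10.0)                    # initial temperature (deg C)
dt, kappa, v = 600.0, 1e-6, 1e-6             # s, m2/s, m/s (downwelling)
t = 0.0
for _ in range(288):                         # two days of 10-minute steps
    T = crank_nicolson_step(T, z[1] - z[0], dt, kappa, v)
    t += dt
    T[0] = 10.0 + 5.0 * np.sin(2.0 * np.pi * t / 86400.0)   # upper BC
    T[-1] = 10.0                                            # lower BC
```

Running this produces the expected behaviour: the diurnal signal is damped and delayed with depth, and an inversion such as FLUX-BOT's amounts to adjusting v until modelled temperatures match the ones observed at the sensor depths.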
Stages as models of scene geometry.
Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark
2010-09-01
Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.
G-CNV: A GPU-Based Tool for Preparing Data to Detect CNVs with Read-Depth Methods.
Manconi, Andrea; Manca, Emanuele; Moscatelli, Marco; Gnocchi, Matteo; Orro, Alessandro; Armano, Giuliano; Milanesi, Luciano
2015-01-01
Copy number variations (CNVs) are the most prevalent types of structural variations (SVs) in the human genome and are involved in a wide range of common human diseases. Different computational methods have been devised to detect this type of SV and to study how CNVs are implicated in human diseases. Recently, computational methods based on high-throughput sequencing (HTS) are increasingly used. The majority of these methods focus on mapping short-read sequences generated from a donor against a reference genome to detect signatures distinctive of CNVs. In particular, read-depth based methods detect CNVs by analyzing genomic regions whose read-depth differs significantly from the others. The analysis pipeline of these methods consists of four main stages: (i) data preparation, (ii) data normalization, (iii) CNV region identification, and (iv) copy number estimation. However, available tools do not support most of the operations required at the first two stages of this pipeline. Typically, they start the analysis by building the read-depth signal from pre-processed alignments. Therefore, third-party tools must be used to perform most of the preliminary operations required to build the read-depth signal. These data-intensive operations can be efficiently parallelized on graphics processing units (GPUs). In this article, we present G-CNV, a GPU-based tool devised to perform the common operations required at the first two stages of the analysis pipeline. G-CNV is able to filter low-quality read sequences, to mask low-quality nucleotides, to remove adapter sequences, to remove duplicated read sequences, to map the short-reads, to resolve multiple mapping ambiguities, to build the read-depth signal, and to normalize it. G-CNV can be efficiently used as a third-party tool able to prepare data for the subsequent read-depth signal generation and analysis. Moreover, it can also be integrated in CNV detection tools to generate read-depth signals.
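The read-depth signal at the heart of this pipeline is just a per-window count of mapped reads, normalized to a baseline. A minimal CPU sketch (not G-CNV's GPU implementation, and with invented toy data):

```python
import numpy as np

def read_depth_signal(read_starts, chrom_len, window=1000):
    """Build a per-window read-depth (RD) signal from mapped read start
    positions and normalize by the median window count, a common
    preprocessing step before RD-based CNV calling. A value near 1.0 is
    the baseline; sustained values near 1.5-2.0 suggest a copy gain."""
    n_win = chrom_len // window
    counts = np.bincount(np.asarray(read_starts) // window,
                         minlength=n_win)[:n_win]
    med = np.median(counts[counts > 0])
    return counts / med

# Toy chromosome: uniform coverage plus a duplicated 10 kb segment
rng = np.random.default_rng(0)
starts = rng.integers(0, 100_000, 5_000).tolist()
starts += rng.integers(40_000, 50_000, 500).tolist()  # extra reads = gain
rd = read_depth_signal(starts, 100_000)
```

Real preparation also involves the quality filtering, adapter and duplicate removal, and GC-bias normalization listed in the abstract; those steps are omitted here.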
NASA Technical Reports Server (NTRS)
Yeh, Pat J.-F.; Swenson, S. C.; Famiglietti, J. S.; Rodell, M.
2007-01-01
Regional groundwater storage changes in Illinois are estimated from monthly GRACE total water storage change (TWSC) data and in situ measurements of soil moisture for the period 2002-2005. Groundwater storage change estimates are compared to those derived from the soil moisture and available well level data. The seasonal pattern and amplitude of GRACE-estimated groundwater storage changes track those of the in situ measurements reasonably well, although substantial differences exist in month-to-month variations. The seasonal cycle of GRACE TWSC agrees well with observations (correlation coefficient = 0.83), while the seasonal cycle of GRACE-based estimates of groundwater storage changes beneath 2 m depth agrees with observations with a correlation coefficient of 0.63. We conclude that the GRACE-based method of estimating monthly to seasonal groundwater storage changes performs reasonably well at the 200,000 sq km scale of Illinois.
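The storage decomposition used in this kind of study amounts to subtracting the soil-moisture anomaly from the GRACE total water storage anomaly. A minimal numeric sketch with invented anomaly values (not the Illinois series):

```python
import numpy as np

# Monthly anomalies in mm of equivalent water height (illustrative only).
# Groundwater storage change is estimated as the GRACE total water
# storage anomaly minus the in situ soil-moisture anomaly.
tws_anom = np.array([35.0, 60.0, 40.0,  5.0, -30.0, -55.0])   # GRACE TWSC
sm_anom  = np.array([20.0, 30.0, 15.0, -5.0, -20.0, -30.0])   # soil moisture
gw_anom = tws_anom - sm_anom          # groundwater (plus residual) anomaly

# Agreement with a well-based series is typically summarized by the
# correlation coefficient, as in the abstract.
well_anom = np.array([12.0, 28.0, 22.0, 12.0, -8.0, -28.0])
r = np.corrcoef(gw_anom, well_anom)[0, 1]
```

The residual term absorbs any unmodelled stores (snow, surface water), which is one reason month-to-month differences against well data can be substantial even when the seasonal cycles agree.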
NASA Astrophysics Data System (ADS)
Jin, Honglin; Kato, Teruyuki; Hori, Muneo
2007-07-01
An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters, namely the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data covering the three years 1997-1999 were used to estimate the interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km, and three such areas were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
NASA Astrophysics Data System (ADS)
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operational safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variations in different scenarios.
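As a minimal illustration of the model structure (not of the cubature Kalman filter itself), the first-order RC equivalent circuit can be simulated in discrete time. The OCV-SOC curve and all parameter values below are invented placeholders, not identified values from the paper:

```python
import math

def simulate_1rc(current, dt, r0, r1, c1, q_ah, soc0=1.0):
    """Discrete-time simulation of the first-order RC equivalent-circuit
    battery model: terminal voltage V = OCV(SOC) - i*R0 - V1, where V1 is
    the RC-branch voltage with time constant tau = R1*C1. Discharge
    current is positive; SOC is tracked by coulomb counting."""
    tau = r1 * c1
    a = math.exp(-dt / tau)                       # exact ZOH discretization
    soc, v1, out = soc0, 0.0, []
    for i in current:
        ocv = 3.2 + 0.9 * soc                     # toy linear OCV-SOC curve
        out.append(ocv - i * r0 - v1)             # terminal voltage
        v1 = a * v1 + r1 * (1.0 - a) * i          # RC branch update
        soc -= i * dt / (q_ah * 3600.0)           # coulomb counting
    return out

# 1 C constant-current discharge of a 2 Ah cell for 10 minutes
v = simulate_1rc([2.0] * 600, dt=1.0, r0=0.05, r1=0.02, c1=2000.0, q_ah=2.0)
```

In the online setting described by the abstract, R0, R1 and C1 become states of the filter and are re-estimated at each step as temperature, rate and depth of discharge change.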
Blakely, Richard J.
1981-01-01
Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
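The underlying spectral relation can be sketched with a conventional FFT periodogram rather than Blakely's maximum-entropy estimator: for sources at depth d the anomaly power spectrum falls off as exp(−2kd), so the slope of ln P versus wavenumber gives the depth. The synthetic profile below is generated under that same assumption purely as a self-consistency check:

```python
import numpy as np

def depth_from_spectrum(x, anomaly):
    """Estimate depth to magnetic source from the log power spectrum:
    for sources at depth d, P(k) ~ exp(-2*k*d) with k in rad/km, so a
    straight-line fit of ln P against k has slope -2d. This is the
    conventional FFT-based estimate, not the MEPS variant of the paper."""
    n = x.size
    dx = x[1] - x[0]
    spec = np.abs(np.fft.rfft(anomaly - anomaly.mean()))**2
    k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)
    keep = slice(1, n // 4)           # low-wavenumber, signal-dominated band
    slope, _ = np.polyfit(k[keep], np.log(spec[keep]), 1)
    return -slope / 2.0

# Synthetic profile: random magnetization observed at 2 km above source
rng = np.random.default_rng(1)
n, dx, d_true = 1024, 0.1, 2.0        # samples, km spacing, km depth
k = 2.0 * np.pi * np.fft.rfftfreq(n, dx)
coeffs = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.exp(-k * d_true)
profile = np.fft.irfft(coeffs, n)
d_est = depth_from_spectrum(np.arange(n) * dx, profile)
```

The FFT estimate needs a long profile to resolve the slope, which is exactly the limitation that motivates the maximum-entropy spectrum on short windows in Blakely's method.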
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data-fitting approach to address the problem. Earthquakes and active-source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
Hagihara, Rie; Jones, Rhondda E; Sobtzick, Susan; Cleguer, Christophe; Garrigue, Claire; Marsh, Helene
2018-01-01
The probability of an aquatic animal being available for detection is typically <1. Accounting for covariates that reduce the probability of detection is important for obtaining robust estimates of the population abundance and determining its status and trends. The dugong (Dugong dugon) is a bottom-feeding marine mammal and a seagrass community specialist. We hypothesized that the probability of a dugong being available for detection is dependent on water depth and that dugongs spend more time underwater in deep-water seagrass habitats than in shallow-water seagrass habitats. We tested this hypothesis by quantifying the depth use of 28 wild dugongs fitted with GPS satellite transmitters and time-depth recorders (TDRs) at three sites with distinct seagrass depth distributions: 1) open waters supporting extensive seagrass meadows to 40 m deep (Torres Strait, 6 dugongs, 2015); 2) a protected bay (average water depth 6.8 m) with extensive shallow seagrass beds (Moreton Bay, 13 dugongs, 2011 and 2012); and 3) a mixture of lagoon, coral and seagrass habitats to 60 m deep (New Caledonia, 9 dugongs, 2013). The fitted instruments were used to measure the times the dugongs spent in the experimentally determined detection zones under various environmental conditions. The estimated probability of detection was applied to aerial survey data previously collected at each location. In general, dugongs were least available for detection in Torres Strait, and the population estimates increased 6-7 fold using depth-specific availability correction factors compared with earlier estimates that assumed homogeneous detection probability across water depth and location. Detection probabilities were higher in Moreton Bay and New Caledonia than Torres Strait because the water transparency in these two locations was much greater than in Torres Strait and the effect of correcting for depth-specific detection probability much less. 
The methodology has application to visual surveys of coastal megafauna, including surveys using Unmanned Aerial Vehicles.
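The availability correction described above amounts to dividing sightings by a depth-specific probability of being available for detection. A minimal sketch with invented counts and availability values (not the study's estimates):

```python
# Horvitz-Thompson-style correction: each sighting in a depth band is
# weighted by the inverse of the probability that an animal there was
# available for detection. All numbers below are illustrative.
counts = {"0-5 m": 120, "5-15 m": 60, "15-40 m": 15}        # raw sightings
availability = {"0-5 m": 0.60, "5-15 m": 0.30, "15-40 m": 0.10}
corrected = {band: counts[band] / availability[band] for band in counts}
total = sum(corrected.values())   # abundance index over all depth bands
```

The sketch shows why the Torres Strait estimates rose 6-7 fold: deep seagrass habitat concentrates animals in bands where availability is low, so each sighting there represents many unseen animals.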
Rapid depth estimation for compact magnetic sources using a semi-automated spectrum-based method
NASA Astrophysics Data System (ADS)
Clifton, Roger
2017-04-01
This paper describes a spectrum-based algorithmic procedure for rapid reconnaissance for compact bodies at depths of interest using magnetic line data. The established method of obtaining depth to source from power spectra requires an interpreter to subjectively select a single slope along the power spectrum. However, many slopes along the spectrum are, at least partially, indicative of the depth if the shape of the source is known. In particular, if the target is assumed to be a point dipole, all spectral slopes are determined by the depth, noise permitting. The concept of a 'depth spectrum' is introduced, where the power spectrum in a travelling window or gate of data is remapped so that a single dipole in the gate would be represented as a straight line at its depth on the y-axis of the spectrum. As a demonstration, the depths of two known ironstones are correctly displayed. When a second body is in the gate, the two anomalies interfere, leaving interference patterns on the depth spectra that are themselves diagnostic; a formula has been derived for this purpose. Because there is no need for manual selection of slopes along the spectrum, the process runs rapidly along flight lines with a continuously varying display, in which the interpreter can pick out a persistent depth signal among the more rapidly varying noise. Interaction is nevertheless necessary, because the interpreter often needs to pass across an anomaly of interest several times, separating out interfering bodies and resolving the slant range to the body from adjacent flight lines. Because a look-up table is used rather than a formula, the elementary structure used for the mapping can be adapted by including an extra dipole, possibly with a different inclination.
The Generation of a Stochastic Flood Event Catalogue for Continental USA
NASA Astrophysics Data System (ADS)
Quinn, N.; Wing, O.; Smith, A.; Sampson, C. C.; Neal, J. C.; Bates, P. D.
2017-12-01
Recent advances in the acquisition of spatiotemporal environmental data and improvements in computational capabilities have enabled the generation of large-scale, even global, flood hazard layers which serve as a critical decision-making tool for a range of end users. However, these datasets are designed to indicate only the probability and depth of inundation at a given location and are unable to describe the likelihood of concurrent flooding across multiple sites. Recent research has highlighted that although the estimation of large, widespread flood events is of great value to the flood mitigation and insurance industries, to date it has been difficult to deal with this spatial dependence structure in flood risk over relatively large scales. Many existing approaches have been restricted to empirical estimates of risk based on historic events, limiting their capability of assessing risk over the full range of plausible scenarios. Therefore, this research utilises a recently developed model-based approach to describe the multisite joint distribution of extreme river flows across continental USA river gauges. Given an extreme event at a site, the model characterises the likelihood that neighbouring sites are also impacted. This information is used to simulate an ensemble of plausible synthetic extreme event footprints from which flood depths are extracted from an existing global flood hazard catalogue. Expected economic losses are then estimated by overlaying flood depths with national datasets defining asset locations, characteristics and depth-damage functions.
The ability of this approach to quantify probabilistic economic risk and rare threshold-exceeding events is expected to be of value to those interested in the flood mitigation and insurance sectors. This work describes the methodological steps taken to create the flood loss catalogue at a national scale; highlights the uncertainty in the expected annual economic vulnerability within the USA from extreme river flows; and presents future developments to the modelling approach.
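The loss-estimation step (overlaying event flood depths with asset data and depth-damage functions) can be sketched for a single synthetic footprint. The curve shape, asset values and depths below are invented for illustration:

```python
import numpy as np

# A depth-damage curve maps inundation depth to the fraction of an
# asset's value that is lost; losses for one event footprint are the
# per-asset damaged fractions times asset values. Numbers are invented.
damage_depths = np.array([0.0, 0.5, 1.0, 2.0, 4.0])      # depth (m)
damage_frac   = np.array([0.0, 0.15, 0.30, 0.55, 0.80])  # fraction lost

asset_value = np.array([200_000.0, 350_000.0, 150_000.0])  # asset values
flood_depth = np.array([0.3, 1.5, 0.0])   # event depth at each asset (m)

loss = asset_value * np.interp(flood_depth, damage_depths, damage_frac)
event_loss = loss.sum()                   # total loss for this footprint
```

Repeating this over the full ensemble of synthetic footprints yields the loss distribution from which annual expected losses and rare exceedance events are read off.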
NASA Astrophysics Data System (ADS)
Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.
2018-03-01
The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated for applying the SKM, namely the eddy viscosity coefficient (λ), the friction factor (f) and the secondary flow coefficient (k). There are several tested methods which can satisfactorily be used to estimate λ and f; however, the calibration of the secondary flow coefficient k to account correctly for secondary flow effects is still problematic. In this paper, the calibration of secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017) that relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α) are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that the boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is the accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance for predicting the boundary shear stress over the shear layer region may not be improved by only imposing suitable calibrated values of the secondary flow coefficients.
This is because of the difficulty of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found through maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
NASA Astrophysics Data System (ADS)
Netburn, Amanda N.; Anthony Koslow, J.
2015-10-01
Climate change-induced ocean deoxygenation is expected to exacerbate hypoxic conditions in mesopelagic waters off the coast of southern California, with potentially deleterious effects for the resident fauna. In order to understand the possible impacts that the oxygen minimum zone expansion will have on these animals, we investigated the response of the depth of the deep scattering layer (i.e., upper and lower boundaries) to natural variations in midwater oxygen concentrations, light levels, and temperature over time and space in the southern California Current Ecosystem. We found that the depth of the lower boundary of the deep scattering layer (DSL) is most strongly correlated with dissolved oxygen concentration, and irradiance and oxygen concentration are the key variables determining the upper boundary. Based on our correlations and published estimates of annual rates of change to irradiance level and hypoxic boundary, we estimated the corresponding annual rate of change of DSL depths. If past trends continue, the upper boundary is expected to shoal at a faster rate than the lower boundary, effectively widening the DSL under climate change scenarios. These results have important implications for the future of pelagic ecosystems, as a change to the distribution of mesopelagic animals could affect pelagic food webs as well as biogeochemical cycles.
A Water Temperature Simulation Model for Rice Paddies With Variable Water Depths
NASA Astrophysics Data System (ADS)
Maruyama, Atsushi; Nemoto, Manabu; Hamasaki, Takahiro; Ishida, Sachinobu; Kuwagata, Tsuneo
2017-12-01
A water temperature simulation model was developed to estimate the effects of water management on the thermal environment in rice paddies. The model was based on two energy balance equations, one for the ground and one for the vegetation, and it considered the water layer and the changes in the aerodynamic properties of its surface with water depth. The model was examined with field experiments for water depths of 0 mm (drained condition) and 100 mm (flooded condition) at two locations. Daily mean water temperatures in the flooded condition were mostly higher than in the drained condition at both locations, and the maximum difference reached 2.6°C. This difference was mainly caused by the difference in surface roughness of the ground. Heat exchange by free convection played an important role in determining water temperature. From the model simulation, the temperature difference between drained and flooded conditions was more apparent under low air temperature and small leaf area index conditions; the maximum difference reached 3°C. Most of this difference occurred when the water depth was lower than 50 mm. The season-long variation in modeled water temperature showed good agreement with an observation data set from rice paddies with various rice-growing seasons, for a diverse range of water depths (root mean square error of 0.8-1.0°C). The proposed model can estimate water temperature for given water depth, irrigation, and drainage conditions, which will improve our understanding of the effect of water management on plant growth and greenhouse gas emissions through the thermal environment of rice paddies.
Baum, Rex L.
2017-01-01
Thickness of colluvium or regolith overlying bedrock or other consolidated materials is a major factor in determining stability of unconsolidated earth materials on steep slopes. Many efforts to model spatially distributed slope stability, for example to assess susceptibility to shallow landslides, have relied on estimates of constant thickness, constant depth, or simple models of thickness (or depth) based on slope and other topographic variables. Assumptions of constant depth or thickness rarely give satisfactory results. Geomorphologists have devised a number of different models to represent the spatial variability of regolith depth and applied them to various settings. I have applied some of these models that can be implemented numerically to different study areas with different types of terrain and tested the results against available depth measurements and landslide inventories. The areas include crystalline rocks of the Colorado Front Range, and gently dipping sedimentary rocks of the Oregon Coast Range. Model performance varies with model, terrain type, and with quality of the input topographic data. Steps in contour-derived 10-m digital elevation models (DEMs) introduce significant errors into the predicted distribution of regolith and landslides. Scan lines, facets, and other artifacts further degrade DEMs and model predictions. Resampling to a lower grid-cell resolution can mitigate effects of facets in lidar DEMs of areas where dense forest severely limits ground returns. Due to its higher accuracy and ability to penetrate vegetation, lidar-derived topography produces more realistic distributions of cover and potential landslides than conventional photogrammetrically derived topographic data.
Towards a novel look on low-frequency climate reconstructions
NASA Astrophysics Data System (ADS)
Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti
2010-05-01
Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
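The permutation idea described above can be sketched in a few lines. The dated horizons, ages, and error values below are hypothetical, and a simple reject-and-resample step stands in for a full radiocarbon calibration while still enforcing stratigraphic order:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dated horizons: depth (cm), calibrated age (yr BP), 1-sigma age error.
depths = np.array([0.0, 50.0, 100.0, 150.0])
ages = np.array([0.0, 900.0, 2100.0, 3400.0])
age_sd = np.array([1.0, 40.0, 60.0, 80.0])

def sample_age_models(n_sim=1000):
    """Draw an ensemble of age-depth relationships, rejecting any draw
    that violates stratigraphic order (age must increase with depth)."""
    sims = []
    while len(sims) < n_sim:
        a = rng.normal(ages, age_sd)
        if np.all(np.diff(a) > 0):
            sims.append(a)
    return np.array(sims)

ensemble = sample_age_models()
# Accumulation rate (cm/yr) per dated interval, for every realization ...
rates = np.diff(depths) / np.diff(ensemble, axis=1)
# ... and its 95% envelope, the kind of spread a single best-guess hides.
lo, mid, hi = np.percentile(rates, [2.5, 50.0, 97.5], axis=0)
```

Each realization can then be pushed through a proxy calibration, and the swarm of outcomes summarized as a two-dimensional likelihood, as the abstract describes.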
NASA Technical Reports Server (NTRS)
Laymon, Charles A.; Crosson, William L.; Jackson, Thomas J.; Manu, Andrew; Tsegaye, Teferi D.; Soman, V.; Arnold, James E. (Technical Monitor)
2001-01-01
Accurate estimates of spatially heterogeneous algorithm variables and parameters are required when determining the spatial distribution of soil moisture from aircraft and satellite radiometer data. A ground-based experiment in passive microwave remote sensing of soil moisture was conducted in Huntsville, Alabama from July 1-14, 1996 to study retrieval algorithms and their sensitivity to variable and parameter specification. With high temporal frequency observations at S and L band, we were able to observe large-scale moisture changes following irrigation and rainfall events, as well as diurnal behavior of surface moisture among three plots: one bare, one covered with short grass, and another covered with alfalfa. The L band emitting depth was determined to be on the order of 0-3 or 0-5 cm at moisture contents below 0.30 cm³/cm³, with an indication of a shallower emitting depth at higher moisture values. Surface moisture behavior was less apparent on the vegetated plots than on the bare plot because the moisture gradient was smaller and because of the difficulty in determining vegetation water content and estimating the vegetation b parameter. Discrepancies between remotely sensed and gravimetric soil moisture estimates on the vegetated plots point to an incomplete understanding of the corrections needed for vegetation attenuation. Quantifying the uncertainty in moisture estimates is vital if applications are to utilize remotely sensed soil moisture data. Computations based only on the real part of the complex dielectric constant and/or an alternative dielectric mixing model contribute a relatively insignificant amount of uncertainty to estimates of soil moisture. Rather, the retrieval algorithm is much more sensitive to soil properties, surface roughness, and biomass.
Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.
Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas
2017-03-01
We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of the commonly used piezoelectric transducers, fiber-optic ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase image quality. Imaging artifacts originating from the off-axis sensitivity of the rings are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue-mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic-resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.
DOT National Transportation Integrated Search
2012-12-01
CAPWAP analyses of open-ended steel pipe piles at 32 bridge sites in Alaska have been compiled with geotechnical and construction : information for 12- to 48-inch diameter piles embedded in predominantly cohesionless soils to maximum depths of 161-fe...
NASA Astrophysics Data System (ADS)
Sneddon, R. V.
1982-07-01
The VESYS-3A mechanistic design system for asphalt pavements was field-verified for three pavement sections at two test sites in Nebraska. PSI predictions from VESYS were in good agreement with field measurements for a 20-year-old three-layer pavement located near Elmwood, Nebraska. Field-measured PSI values for an 8-in. full-depth pavement also agreed with VESYS predictions over the study period. Rut depth estimates from the model were small and in general agreement with field measurements. Cracking estimates were poor and tended to underestimate the time required to develop observable fatigue cracking in the field. Asphalt, base course, and subgrade materials were tested in a 4.0-in.-diameter modified triaxial cell. Test procedures used dynamic conditioning and rest periods to simulate service conditions.
What We Do Not Yet Know About Global Ocean Depths, and How Satellite Altimetry Can Help
NASA Astrophysics Data System (ADS)
Smith, W. H. F.; Sandwell, D. T.; Marks, K. M.
2017-12-01
Half Earth's ocean floor area lies several km or more away from the nearest depth measurement. Areas more than 50 km from any sounding sum to a total area larger than the entire United States land area; areas more than 100 km from any sounding comprise a total area larger than Alaska. In remote basins the majority of available data were collected before the mid-1960s, and so are often mis-located by many km, as well as mis-digitized. Satellite altimetry has mapped the marine gravity field with better than 10 km horizontal resolution, revealing nearly all seamounts taller than 2 km; new data can detect some seamounts less than 1 km tall. Seafloor topography can be estimated from satellite altimetry if sediment is thin and relief is due to seafloor spreading and mid-plate volcanism. The accuracy of the estimate depends on the geological nature of the relief and on the accuracy of the soundings available to calibrate the estimation. At best, the estimate is a band-pass-filtered version of the true depth variations, and does not resolve the small-scale seafloor roughness needed to model mixing and dissipation in the ocean. In areas of thick or variable sediment cover there can be little correlation between depth and altimetry. Yet altimeter-estimated depth is the best guess available in most of the ocean. The MH370 search area provides an illustration. Prior to the search it was very sparsely (1% to 5%) covered by soundings, many of them old, low-tech data, and plateaus with thick sediments complicate the estimation of depth from altimetry. Even so, the estimate was generally correct about the tectonic nature of the terrain and the extent of depth variations to be expected. If ships fill gaps strategically, visiting areas where altimetry shows that interesting features will be found and passing near the centroids of the larger gaps, the data will be exciting in their own right and will also improve future altimetry estimates.
Global root zone storage capacity from satellite-based evaporation data
NASA Astrophysics Data System (ADS)
Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert
2016-04-01
We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
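The bridging assumption above can be illustrated with a minimal running-deficit (mass-curve) sketch: the root zone must store enough water to cover the largest cumulative excess of evaporation over precipitation. The series in the example are invented; the real method uses multi-year satellite evaporation and precipitation records:

```python
def root_zone_storage_capacity(precip, evap):
    """Largest cumulative moisture deficit (evaporation exceeding rainfall)
    that vegetation must bridge from storage; units follow the inputs (mm)."""
    deficit = 0.0
    max_deficit = 0.0
    for p, e in zip(precip, evap):
        deficit = max(0.0, deficit + e - p)   # deficit cannot go negative
        max_deficit = max(max_deficit, deficit)
    return max_deficit

# Illustrative daily series (mm): two dry spells separated by one rain event.
sr = root_zone_storage_capacity([0, 0, 10, 0, 0], [3, 3, 3, 3, 3])  # -> 6.0
```

Extreme-value analysis over many such annual maxima is what yields the drought return periods (2 to 60 years) quoted in the abstract.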
NASA Astrophysics Data System (ADS)
Sanchez, A. R.; Laguna, A.; Reimann, T.; Giráldez, J. V.; Peña, A.; Wallinga, J.; Vanwalleghem, T.
2017-12-01
Different geomorphological processes, such as bioturbation and erosion-deposition, intervene in soil formation and landscape evolution. These processes alter and degrade the materials that compose the rocks. The degree to which the bedrock is weathered is estimated through the fraction of bedrock that is mixed into the soil, either vertically or laterally. This study presents an analytical solution of the diffusion-advection equation to quantify bioturbation and erosion-deposition rates in profiles along a catena. The model is calibrated with age-depth data obtained from the profiles using luminescence dating based on single-grain infrared stimulated luminescence (IRSL). Luminescence techniques provide a direct measurement of bioturbation and erosion-deposition processes. Single-grain IRSL was applied to feldspar grains from fifteen samples collected at different depths from four soil profiles along a catena in the Santa Clotilde Critical Zone Observatory, Cordoba province, SE Spain. A sensitivity analysis was performed to assess the importance of the parameters in the analytical model, and an uncertainty analysis was carried out to establish the best fit of the parameters to the measured age-depth data. The results indicate a diffusion constant at 20 cm depth of 47 mm²/year in the hill-base profile and 4.8 mm²/year in the hilltop profile. The model has high uncertainty in the estimation of erosion and deposition rates. This study reveals the potential of single-grain luminescence techniques to quantify pedoturbation processes.
Methods of Estimating Initial Crater Depths on Icy Satellites using Stereo Topography
NASA Astrophysics Data System (ADS)
Persaud, D. M.; Phillips, C. B.
2014-12-01
Stereo topography, combined with models of viscous relaxation of impact craters, allows for the study of the rheology and thermal history of icy satellites. An important step in calculating relaxation of craters is determining the initial depths of craters before viscous relaxation. Two methods for estimating initial crater depths on the icy satellites of Saturn have been previously discussed. White and Schenk (2013) present the craters of Iapetus as relatively unrelaxed in modeling the relaxation of craters of Rhea. Phillips et al. (2013) assume that Herschel crater on Saturn's satellite Mimas is unrelaxed in relaxation calculations and models of Rhea and Dione. In the second method, the depth of Herschel crater is scaled based on the different crater diameters and the difference in surface gravity on the large moons to predict the initial crater depths for Rhea and Dione. In the first method, since Iapetus is of similar size to Dione and Rhea, no gravity scaling is necessary; craters of similar size on Iapetus were chosen and their depths measured to determine the appropriate initial crater depths for Rhea. We test these methods by first extracting topographic profiles of impact craters on Iapetus from digital elevation models (DEMs) constructed from stereo images from the Cassini ISS instrument. We determined depths from these profiles and used them to calculate initial crater depths and relaxation percentages for Rhea and Dione craters using the methods described above. We first assumed that craters on Iapetus were relaxed, and compared the results to previously calculated relaxation percentages for Rhea and Dione relative to Herschel crater (with appropriate scaling for gravity and crater diameter). We then tested the assumption that craters on Iapetus were unrelaxed and used our new measurements of crater depth to determine relaxation percentages for Dione and Rhea. 
We will present results and conclusions from both methods and discuss their efficacy for determining initial crater depth. References: Phillips, C.B., et al. (2013). Lunar Planet Sci. XLIV, abstract 2766. White, O.L., and P.L. Schenk. Icarus 23, 699-709, 2013. This work was supported by the NASA Outer Planets Research Program grant NNX10AQ09G and by the NSF REU Program.
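The gravity scaling in the second method can be sketched in two small functions. An inverse dependence of initial depth on surface gravity at fixed diameter is assumed here for illustration, and the gravity values and reference depth below are hypothetical, not measurements from the paper:

```python
def scaled_initial_depth(d_ref, g_ref, g_target):
    """First-order gravity scaling: at a fixed diameter, the initial crater
    depth is assumed to vary inversely with surface gravity."""
    return d_ref * (g_ref / g_target)

def relaxation_percent(d_initial, d_observed):
    """Fractional loss of relief relative to the inferred initial depth."""
    return 100.0 * (1.0 - d_observed / d_initial)

# Hypothetical numbers: a 10 km deep reference crater on a small moon
# (g ~ 0.064 m/s^2) scaled to a larger moon (g ~ 0.26 m/s^2).
d0_large_moon = scaled_initial_depth(10.0, 0.064, 0.26)   # ~2.46 km
```

The first method (same-size craters on a similar-size body such as Iapetus) simply skips the `scaled_initial_depth` step and uses the measured depths directly as `d_initial`.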
NASA Astrophysics Data System (ADS)
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for downward continuation of potential field data. The method eliminates the side oscillations that appear on continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with infinite depth extent. Theoretical Slingram responses were obtained for different depths, dip angles, and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth of the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on either side of the central negative Slingram anomaly. These two maxima also locate the points of maximum anomaly gradient, which indicate the depth of the target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for a horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
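A minimal 1-D version of the NFG computation might look like the following: the profile is expanded in a truncated Fourier sine series, downward-continued with the factor exp(kz), and the full gradient at each depth is normalized by its along-profile mean. The sampling, profile, and harmonic count here are illustrative, not the paper's parameters:

```python
import numpy as np

def nfg_section(f, L, depths, n_harm=20):
    """Normalized full gradient of a 1-D anomaly profile f sampled on [0, L],
    evaluated at the given continuation depths (one row per depth)."""
    M = len(f)
    x = np.linspace(0.0, L, M)
    k = np.arange(1, n_harm + 1) * np.pi / L          # truncated wavenumbers
    b = (2.0 / M) * (np.sin(np.outer(k, x)) @ f)      # sine-series coefficients
    rows = []
    for z in depths:
        c = b * k * np.exp(k * z)                     # continued to depth z
        vx = c @ np.cos(np.outer(k, x))               # horizontal derivative
        vz = c @ np.sin(np.outer(k, x))               # vertical derivative
        g = np.hypot(vx, vz)                          # full gradient
        rows.append(g / g.mean())                     # normalization step
    return np.asarray(rows)
```

The harmonic count `n_harm` plays the role of the "low harmonic numbers" the abstract refers to: depth information is read off where the normalized gradient peaks as z increases.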
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
Defining the ecologically relevant mixed-layer depth for Antarctica's coastal seas
NASA Astrophysics Data System (ADS)
Carvalho, Filipa; Kohut, Josh; Oliver, Matthew J.; Schofield, Oscar
2017-01-01
Mixed-layer depth (MLD) has been widely linked to phytoplankton dynamics in Antarctica's coastal regions; however, inconsistent definitions have made intercomparisons among region-specific studies difficult. Using a data set with over 20,000 water column profiles corresponding to 32 Slocum glider deployments in three coastal Antarctic regions (Ross Sea, Amundsen Sea, and West Antarctic Peninsula), we evaluated the relationship between MLD and phytoplankton vertical distribution. Comparisons of these MLD estimates to an applied definition of phytoplankton bloom depth, as defined by the deepest inflection point in the chlorophyll profile, show that the maximum of buoyancy frequency is a good proxy for an ecologically relevant MLD. A quality index is used to filter profiles where MLD is not determined. Despite the different regional physical settings, we found that the MLD definition based on the maximum of buoyancy frequency best describes the depth to which phytoplankton can be mixed in Antarctica's coastal seas.
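The buoyancy-frequency definition the authors settle on can be sketched directly. The constant reference density, gravitational acceleration, and the synthetic pycnocline in the test are illustrative assumptions, not values from the glider data set:

```python
import numpy as np

def mld_max_n2(depth, density, rho0=1027.0, g=9.81):
    """Mixed-layer depth as the depth of maximum buoyancy frequency
    N^2 = (g/rho0) * d(rho)/dz, with depth positive downward so that
    density normally increases along the profile."""
    n2 = (g / rho0) * np.gradient(density, depth)
    i = int(np.argmax(n2))
    return depth[i], n2[i]
```

A quality index like the one mentioned in the abstract would discard profiles where the maximum of N² is weak or ambiguous before trusting the returned depth.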
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elrick, M.; Read, J.F.
1990-05-01
Three types of 1-10-m upward-shallowing cycles are observed in the Lower Mississippian Lodgepole and lower Madison formations of Wyoming and Montana. Typical peritidal cycles have pellet grainstone bases overlain by algal laminites, which are rarely capped by paleosol/regolith horizons. Shallow ramp cycles have burrowed pellet-skeletal wackestone bases overlain by cross-bedded ooid/crinoid grainstone caps. Deep ramp cycles are characterized by sub-wave-base limestone/argillite and storm-deposited limestone, overlain by hummocky stratified grainstone caps. Average cycle periods range from 17-155 k.y. Thick, rhythmically bedded limestone/argillite deposits of basinal facies do not contain shallowing-upward cycles, but do contain 2-4 k.y. limestone/argillite rhythms. These sub-wave-base deposits are associated with Waulsortian-type mud mounds which have >50 m synoptic relief. This relief provides minimum water depth estimates for the deposits, and implies storm-wave base was less than 50 m. Two-dimensional computer modeling of cyclic platform through noncyclic basinal deposits allows bracketing of the fifth-order sea level fluctuation amplitudes thought responsible for cycle formation. Computer models using fifth-order amplitudes less than 20 m do not produce cycles on the deep ramp (assuming a 25-30 m storm-wave base). Amplitudes >30 m produce water depths on the inner ramp that are too deep, and disconformities extend too far into the basin. The absence of meter-scale cycles in the basin suggests either that water depths were too great to record the effects of sea level oscillations occurring on the platform, or that climatic fluctuations associated with glacio-eustatic sea level oscillations were not sufficient to affect hemipelagic depositional patterns in the tropical basin environment.
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the position, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path-following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system, and experimental results are presented demonstrating that the combined localization and control system performs with high accuracy.
NASA Astrophysics Data System (ADS)
Collins, M. S.; Hertzberg, J. E.; Mekik, F.; Schmidt, M. W.
2017-12-01
Based on the thermodynamics of solid-solution substitution of Mg for Ca in biogenic calcite, magnesium to calcium ratios in planktonic foraminifera have been proposed as a means by which variations in habitat water temperatures can be reconstructed. Doing this accurately has been a problem, however, as we demonstrate that various calibration equations provide disparate temperature estimates from the same Mg/Ca dataset. We examined both new and published data to derive a globally applicable temperature-Mg/Ca relationship and from this relationship to accurately predict habitat depth for Neogloboquadrina dutertrei - a deep chlorophyll maximum dweller. N. dutertrei samples collected from Atlantic core tops were analyzed for trace element compositions at Texas A&M University, and the measured Mg/Ca ratios were used to predict habitat temperatures using multiple pre-existing calibration equations. When combining Atlantic and previously published Pacific Mg/Ca datasets for N. dutertrei, a notable dissolution effect was evident. To overcome this issue, we used the G. menardii Fragmentation Index (MFI) to account for dissolution and generated a multi-basin temperature equation using multiple linear regression to predict habitat temperature. However, the correlations between Mg/Ca and temperature, as well as the calculated MFI percent dissolved, suggest that N. dutertrei Mg/Ca ratios are affected equally by both variables. While correcting for dissolution makes habitat depth estimation more accurate, the lack of a definitively strong correlation between Mg/Ca and temperature is likely an effect of variable habitat depth for this species because most calibration equations have assumed a uniform habitat depth for this taxon.
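The abstract's point that different calibrations yield disparate temperatures can be illustrated by inverting the usual exponential form Mg/Ca = B·exp(A·T). The (A, B) pairs below are placeholders for illustration, not the paper's fitted or dissolution-corrected values:

```python
import math

def mgca_temperature(mgca, A, B):
    """Invert the exponential calibration Mg/Ca = B * exp(A * T) for
    habitat temperature T (degrees C); mgca in mmol/mol."""
    return math.log(mgca / B) / A

# The same measured ratio run through two hypothetical calibrations:
t1 = mgca_temperature(2.0, A=0.09, B=0.38)   # ~18.5 C
t2 = mgca_temperature(2.0, A=0.10, B=0.30)   # ~19.0 C
```

Even this half-degree spread matters when the inferred temperature is then mapped onto a water-column profile to assign a habitat depth, which is why the choice of calibration (and the dissolution correction) dominates the error budget.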
NASA Astrophysics Data System (ADS)
Poudjom Djomani, Y. H.; Diament, M.; Albouy, Y.
1992-07-01
The Adamawa massif in Central Cameroon is one of the African domal uplifts of volcanic origin. It is an elongated feature, 200 km wide. The gravity anomalies over the Adamawa uplift were studied to determine the mechanical behaviour of the lithosphere. Two approaches were used to analyse six gravity profiles, each 600 km long and running perpendicular to the Adamawa trend. First, the coherence function between topography and gravity was interpreted; second, source depth estimations by spectral analysis of the gravity data were performed. To obtain significant information for the interpretation of the experimental coherence function, the length of the profiles was varied from 320 km to 600 km. This treatment yields numerical estimates of the coherence function. The coherence function analysis shows that the lithosphere is deflected and thin beneath the Adamawa uplift, with an effective elastic thickness of about 20 km. To fit the coherence, a load from below needs to be taken into account. This result for the Adamawa massif is of the same order of magnitude as those obtained for other African uplifts such as the Hoggar, Darfur and Kenya domes. For the depth estimation, three major density contrasts were found: the shallowest (4-15 km) can be correlated with shear zone structures and the associated sedimentary basins beneath the uplift; the second (18-38 km) corresponds to the Moho; and the last (70-90 km) would be the top of the upper mantle and denotes the low density zone beneath the Adamawa uplift.
Estimation of skin concentrations of topically applied lidocaine at each depth profile.
Oshizaka, Takeshi; Kikuchi, Keisuke; Kadhum, Wesam R; Todo, Hiroaki; Hatanaka, Tomomi; Wierzba, Konstanty; Sugibayashi, Kenji
2014-11-20
Skin concentrations of topically administered compounds need to be considered in order to evaluate their efficacy and toxicity. This study investigated the relationship between the skin permeation and concentrations of compounds, and predicted the skin concentrations of these compounds from their permeation parameters. Full-thickness skin or stripped skin from pig ears was set on a vertical-type diffusion cell, and lidocaine (LID) solution was applied to the stratum corneum (SC) in order to determine in vitro skin permeability. Permeation parameters were obtained based on Fick's second law of diffusion. LID concentrations at each depth in the SC were measured by tape-stripping, and concentration-depth profiles were obtained for the viable epidermis and dermis (VED) by analyzing horizontal sections. The corresponding skin concentration at each depth was calculated from Fick's law using the permeation parameters and compared with the observed value. The steady-state LID concentration decreased linearly with depth in both the SC and the VED. The calculated concentration-depth profiles of the SC and VED were almost identical to the observed profiles. The compound concentration at each depth in the skin could thus be easily predicted using diffusion equations and skin permeation data, a method that should be useful for the efficient development of topically applied drugs and cosmetics.
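The linear steady-state profile reported above follows from Fick's first law across a homogeneous layer with a perfect-sink inner boundary: the concentration falls linearly from the partition-scaled surface value to zero at the far side. The partition coefficient and numbers below are illustrative, not the paper's fitted LID parameters:

```python
def steady_state_profile(K, c_donor, L, x):
    """Steady-state concentration at depth x (0 <= x <= L) in a membrane of
    thickness L: linear decline from K * c_donor at the surface to zero at
    x = L, assuming a perfect sink at the inner boundary."""
    return K * c_donor * (1.0 - x / L)
```

With measured K, D, and L for each layer (SC and VED), stacking two such profiles with a matched flux at the interface reproduces the kind of calculated concentration-depth curve the study compares against tape-stripping data.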
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; here we assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera from the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences instead, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, through the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in reasonable time on a graphics card due to its highly parallelizable nature.
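The relation the authors exploit, disparity = f·B/Z for focal length f, camera spacing B, and depth Z, can be inverted per pixel to recover an unknown spacing. This median-over-pixels sketch is only an illustration of that inversion, not the paper's dense-correspondence framework:

```python
import numpy as np

def estimate_baseline(disparities, depths, focal_px):
    """Recover the spacing between two camera positions along the line:
    each dense match gives disparity = f * B / Z, hence B = disparity * Z / f.
    The median over many pixels suppresses outlier matches."""
    d = np.asarray(disparities, float)
    z = np.asarray(depths, float)
    return float(np.median(d * z / focal_px))
```

In the actual method the depths Z are themselves refined jointly with the camera locations, so a step like this would sit inside an alternating optimization rather than run once.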
NASA Astrophysics Data System (ADS)
Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara
2018-03-01
The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
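The RMS-minimization step can be sketched as a grid search over the layer depth Δz for a single-bucket water balance. The loss function and series in the test are synthetic; the study estimates L(θ) from observed dry-down increments rather than prescribing it:

```python
import numpy as np

def fit_effective_depth(theta, precip, loss_fn, dz_grid, dt=1.0):
    """Grid search for the layer depth dz (mm) minimizing the RMS misfit
    between observed surface soil moisture and a single-bucket balance
    dz * dtheta/dt = P - L(theta)."""
    theta = np.asarray(theta, float)
    precip = np.asarray(precip, float)
    best_dz, best_rms = None, np.inf
    for dz in dz_grid:
        # One-step-ahead prediction of the next observed soil moisture value.
        pred = theta[:-1] + (precip[:-1] - loss_fn(theta[:-1])) * dt / dz
        rms = float(np.sqrt(np.mean((pred - theta[1:]) ** 2)))
        if rms < best_rms:
            best_dz, best_rms = dz, rms
    return best_dz, best_rms
```

A continuous optimizer would replace the grid in practice; the grid version makes the structure of the misfit surface easy to inspect.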
Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J
2001-11-01
To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. CT scans were used to study the location of the SCV and IFV nodal regions by using outlining of readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined by using similar methods. Radiation therapy doses to the SCV and IFV were then estimated by using traditional dose calculations and optimized planning. A repeated measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by using the 90% isodose surface was demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases are realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
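The depth cue used above is the reactive (imaginary) part of the vertical complex acoustic intensity, i.e. the cross-spectrum of pressure and vertical particle velocity, whose sign distribution the method classifies on. A minimal synthetic sketch, with an invented 50 Hz line spectrum and an invented 45-degree phase offset:

```python
import numpy as np

def vertical_complex_intensity(p, vz):
    # Complex cross-spectrum of pressure and vertical particle velocity.
    # Its real part is the active intensity; its imaginary part is the
    # reactive component whose sign carries the depth information.
    return np.fft.rfft(p) * np.conj(np.fft.rfft(vz))

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
p = np.sin(2 * np.pi * 50 * t)               # pressure tone (synthetic)
vz = np.sin(2 * np.pi * 50 * t + np.pi / 4)  # velocity leading by 45 deg (synthetic)

Iz = vertical_complex_intensity(p, vz)
k = int(round(50 * t.size / fs))             # FFT bin of the 50 Hz tone
reactive_sign = np.sign(Iz[k].imag)          # the classification feature
```

With this phase offset the reactive component at the tone is negative; in the method, the pattern of such signs across the recorded line spectra, combined with bearing-time information, drives the category and depth decisions.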
NASA Astrophysics Data System (ADS)
Snowball, Ian; Mellström, Anette; Ahlstrand, Emelie; Haltia, Eeva; Nilsson, Andreas; Ning, Wenxin; Muscheler, Raimund; Brauer, Achim
2013-11-01
We studied the paleomagnetic properties of relatively organic rich, annually laminated (varved) sediments of Holocene age in Gyltigesjön, which is a lake in southern Sweden. An age-depth model was based on a regional lead pollution isochron and Bayesian modelling of radiocarbon ages of bulk sediments and terrestrial macrofossils, which included a radiocarbon wiggle-matched series of 873 varves that accumulated between 3000 and 2000 Cal a BP (Mellström et al., 2013). Mineral magnetic data and first order reversal curves suggest that the natural remanent magnetization is carried by stable single-domain grains of magnetite, probably of magnetosomal origin. Discrete samples taken from overlapping piston cores were used to produce smoothed paleomagnetic secular variation (inclination and declination) and relative paleointensity data sets. Alternative temporal trends in the paleomagnetic data were obtained by correcting for paleomagnetic lock-in depths between 0 and 70 cm and taking into account changes in sediment accumulation rate. These temporal trends were regressed against reference curves for the same region (FENNOSTACK and FENNORPIS; Snowball et al., 2007). The best statistical matches to the reference curves are obtained when we apply lock-in depths of 21-34 cm to the Gyltigesjön paleomagnetic data, although these are most likely minimum estimates. Our study suggests that a significant paleomagnetic lock-in depth can affect the acquisition of post-depositional remanent magnetization even where bioturbation is absent and no mixed sediment surface layer exists.
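The lock-in correction evaluated above can be sketched as a grid search: shift the depth scale by a candidate lock-in depth before applying the age-depth model, and keep the shift that best matches the reference curve. The linear age-depth model, the reference curve, the noise level, and the "true" lock-in of 27 cm are all invented for illustration; the study's matching uses the FENNOSTACK and FENNORPIS reference curves and a varve-constrained chronology.

```python
import numpy as np

rng = np.random.default_rng(1)

depth = np.arange(0.0, 500.0, 1.0)   # cm below the sediment surface
age_of_depth = lambda d: 10.0 * d    # toy linear age-depth model, 10 a/cm

def reference(age):
    # hypothetical regional PSV reference curve (e.g. inclination, degrees)
    return 70.0 + 5.0 * np.sin(2 * np.pi * age / 1000.0)

true_lockin = 27.0
# a layer now at depth d locked in its remanence once it was buried by
# `true_lockin` cm of sediment, so it records the field of age(d - lockin)
measured = reference(age_of_depth(depth - true_lockin)) + rng.normal(0.0, 0.3, depth.size)

# grid-search the lock-in depth whose corrected record best matches the reference
candidates = np.arange(0.0, 70.0, 1.0)
misfits = [np.mean((measured - reference(age_of_depth(depth - L))) ** 2) for L in candidates]
best_lockin = candidates[int(np.argmin(misfits))]   # recovers 27.0
```

The sketch also shows why the paper's 21-34 cm estimates are minima: any smoothing of the signal during lock-in flattens the misfit curve and biases the best match toward shallower shifts.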
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 1.7 to 2.6 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 7.2 to 24.2 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
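The Froehlich abutment scour equation cited throughout these reports can be sketched as below, in what I believe is its HEC-18 live-bed form; the coefficients are an assumption to be checked against Richardson and others (1995), and the input values are hypothetical, not taken from any of these bridges.

```python
def froehlich_abutment_scour(ya, l_active, fr, k1=1.0, k2=1.0):
    """Froehlich live-bed abutment scour depth (HEC-18 form; sketch).

    ya       -- average approach flow depth at the abutment, ft
    l_active -- length of embankment blocking active flow, ft
    fr       -- Froude number of the approach flow
    k1, k2   -- abutment-shape and embankment-skew coefficients
    The trailing '+ 1' depth term is the built-in safety factor that
    makes the equation the conservative estimator the reports describe.
    """
    return ya * (2.27 * k1 * k2 * (l_active / ya) ** 0.43 * fr ** 0.61 + 1.0)

# hypothetical flood-flow values for illustration
ys = froehlich_abutment_scour(ya=5.0, l_active=50.0, fr=0.4)  # about 22.5 ft
```

Because the safety-factor term alone contributes a full flow depth of scour, computed values routinely exceed what is observed, which is why the reports pair them with historical performance and geomorphic assessments before adoption.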
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.8 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.1 to 11.6 ft. The worst-case abutment scour occurred at the incipient-overtopping discharge, which was 50 cfs lower than the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.; Degnan, James R.
1997-01-01
Contraction scour computed for all modelled flows was 0.0 ft. Computed left abutment scour ranged from 9.4 to 10.2 ft. with the worst-case scour occurring at the 500-year discharge. Computed right abutment scour ranged from 2.7 to 5.7 ft. with the worst-case scour occurring at the incipient roadway-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Medalie, Laura
1996-01-01
Contraction scour for all modelled flows ranged from 0.3 to 0.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 8.0 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Song, Donald L.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 1.4 feet. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 2.3 to 8.9 feet. The worst-case abutment scour occurred at the 100-year discharge at the right abutment. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 0.7 to 1.3 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 9.1 to 12.5 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.
1997-01-01
Contraction scour computed for all modelled flows was zero ft. Abutment scour ranged from 6.2 to 9.7 ft. The worst-case abutment scour occurred at the 100-year discharge at the right abutment and at the 500-year discharge at the left abutment. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, L.K.; Ivanoff, M.A.
1997-01-01
Contraction scour for all modelled flows was 0 ft. Abutment scour ranged from 7.6 to 8.4 ft at the left abutment and from 9.9 to 14.8 ft at the right abutment. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, Lora K.; Severance, Tim
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.4 ft. The worst-case contraction scour occurred at the maximum free surface flow discharge, which was less than the 100-year discharge. Abutment scour ranged from 4.8 to 8.0 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.; Hammond, Robert E.
1996-01-01
Contraction scour for all modelled flows ranged from 3.4 to 4.3 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 8.2 to 11.1 ft. The worst-case abutment scour occurred at the 100-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.7 ft. Abutment scour ranged from 9.9 to 16.4 ft. Pier scour ranged from 14.4 to 16.2 ft. The worst-case contraction, abutment, and pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ecological and evolutionary consequences of benthic community stasis in the very deep sea (>1500 m)
Buzas, Martin A.; Hayek, Lee-Ann C.; Culver, Stephen J.; Hayward, Bruce W.; Osterman, Lisa E.
2014-01-01
An enigma of deep-sea biodiversity research is that the abyss, with its low productivity and densities, appears to have a biodiversity similar to that of shallower depths. This conceptualization of similarity is based mainly on per-sample estimates (point diversity, within-habitat, or α-diversity). Here, we use a measure of between-sample within-community diversity (β1H) to examine benthic foraminiferal diversity between 333 stations within 49 communities from New Zealand, the South Atlantic, the Gulf of Mexico, the Norwegian Sea, and the Arctic. The communities are grouped into two depth categories: 200–1500 m and >1500 m. β1H diversity exhibits no evidence of regional differences. Instead, higher values at shallower depths are observed worldwide. At depths of >1500 m the average β1H is zero, indicating stasis or no biodiversity gradient. The difference in β1H-diversity explains why, despite species richness often being greater per sample at greater depths, the total number of species is greater at shallower depths. The greater number of communities and higher rate of evolution resulting in shorter species durations at shallower depths are also consistent with higher β1H values.
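The between-sample measure discussed above can be illustrated with an additive Shannon partition (beta = pooled H minus mean within-sample H); this is a stand-in with the same qualitative behaviour as β1H, not necessarily the paper's exact estimator. Beta near zero, as found at >1500 m, means the samples draw on one homogeneous community; beta grows as samples diverge.

```python
import numpy as np

def shannon(counts):
    # Shannon information H of one sample's species counts
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p /= p.sum()
    return -np.sum(p * np.log(p))

def beta_shannon(samples):
    # Additive partition: between-sample diversity is the pooled H
    # minus the mean within-sample H. Zero when samples are identical.
    pooled = np.sum(samples, axis=0)
    return shannon(pooled) - np.mean([shannon(s) for s in samples])

identical = np.array([[30, 30, 30], [30, 30, 30]])  # one homogeneous community
disjoint = np.array([[60, 0, 0], [0, 60, 0]])       # no shared species
```

For the `identical` pair the measure is exactly zero (the >1500 m "stasis" case), while the `disjoint` pair yields ln 2, showing how between-sample turnover, not per-sample richness, drives the total species pool.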
Boehmler, Erick M.; Degnan, James R.
1997-01-01
Contraction scour for all modelled flows ranged from 1.1 to 2.0 feet. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 3.9 to 8.6 feet. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ayotte, Joseph D.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.8 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 5.7 to 10.6 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, Lora K.; Weber, Matthew A.
1998-01-01
Contraction scour for all modelled flows ranged from 2.0 to 3.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 9.7 to 22.2 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Hammond, Robert E.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 3.6 to 7.1 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.
1997-01-01
Contraction scour for all modelled flows ranged from 0.2 to 0.4 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 7.3 to 8.2 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Hammond, Robert E.
1997-01-01
Contraction scour for all modelled flows was zero ft. The left abutment scour ranged from 3.6 to 9.2 ft. The worst-case left abutment scour occurred at the 500-year discharge. The right abutment scour ranged from 9.8 to 12.6 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Burns, Ronda L.
1997-01-01
There was no predicted contraction scour for any of the modelled flows. Abutment scour ranged from 4.9 to 11.6 ft. The worst-case abutment scour occurred at the right abutment for the 500-year discharge. However, historical information indicates that the right abutment is at least partly in contact with bedrock. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, Lora K.; Burns, Rhonda L.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.8 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 9.5 to 13.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Burns, Ronda L.
1997-01-01
Contraction scour for all modelled flows ranged from 3.2 to 4.3 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.0 to 10.0 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Burns, Ronda L.
1997-01-01
Contraction scour for all modelled flows ranged from 0.4 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 8.4 to 30.7 ft. The worst-case abutment scour occurred at the 500-year discharge along the left abutment. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Boehmler, Erick M.
1997-01-01
Contraction scour for all modelled flows ranged from 5.2 to 9.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 13.1 to 18.2 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, Lora K.; Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows was 0.0 ft. Abutment scour ranged from 6.4 to 7.9 ft at the left abutment and from 11.8 to 14.9 ft at the right abutment. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Striker, Lora K.; Medalie, Laura
1997-01-01
Contraction scour for all modelled flows was 0.0 ft. Abutment scour ranged from 5.8 to 6.8 ft at the left abutment and 9.4 to 14.4 ft at the right abutment. The worst-case abutment scour occurred at the incipient roadway-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.; Hammond, Robert E.
1997-01-01
Contraction scour for all modelled flows ranged from 1.8 to 2.6 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 10.2 to 22.6 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows ranged from 0 to 1.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 10.4 to 13.9 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows ranged from 0.4 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.5 to 9.1 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Ivanoff, Michael A.; Medalie, Laura
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 1.5 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 12.4 to 24.4 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Medalie, Laura
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.2 to 13.3 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Ivanoff, Michael A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.6 ft. The worst-case contraction scour occurred at the 100-year discharge. Abutment scour ranged from 0.8 to 5.6 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid
2018-06-12
Traffic collisions between kangaroos and motorists are on the rise on Australian roads. According to a recent report, more than 20,000 kangaroo-vehicle collisions were estimated to have occurred in Australia during 2015 alone. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used for collision warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data generation pipeline to generate 17,000 synthetic depth images of traffic scenes with kangaroo instances annotated in them. We trained our proposed RCNN-based framework on a subset of the generated synthetic depth image dataset. The proposed framework achieved an average precision (AP) score of 92% over all the synthetic depth image test datasets. We compared our proposed framework against other baseline approaches and outperformed them by more than 37% in AP score over all the test datasets. Additionally, we evaluated the generalization performance of the proposed framework on real live data and achieved resilient detection accuracy without any further fine-tuning of our proposed RCNN-based framework.
1994-02-15
O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches...multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O. D. Faugeras, Q. T. Luong, and S. J. Maybank. Camera self-calibration: theory and...Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of
NASA Astrophysics Data System (ADS)
Yeom, Seokwon
2013-05-01
Millimeter wave imaging draws increasing attention in security applications for weapon detection under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object: the distance is estimated from the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided with an analysis of the depth resolution.
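The center-discrepancy ranging described above is the standard triangulation relation Z = f·B/d, where d is the disparity between the two object centers. A minimal sketch with illustrative camera parameters (not taken from the paper):

```python
def depth_from_center_disparity(cx_left, cx_right, focal_px, baseline_m):
    """Longitudinal distance from the horizontal discrepancy between the
    corresponding centers of the segmented object in the left/right views:
    Z = f * B / d, with d in pixels, f in pixels, B in meters."""
    disparity = cx_left - cx_right
    if disparity <= 0:
        raise ValueError("expected positive disparity for a finite-range object")
    return focal_px * baseline_m / disparity

# Illustrative: 800 px focal length, 0.3 m baseline, segmented object centers
# at column 412 (left image) and 400 (right image) → 12 px disparity
print(depth_from_center_disparity(412.0, 400.0, 800.0, 0.3))  # → 20.0
```

Because depth varies inversely with disparity, the achievable depth resolution degrades quadratically with distance, which is why the paper analyzes it explicitly.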
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
NASA Astrophysics Data System (ADS)
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database.
The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site specific reviews demonstrated that both issues had potential for lowering the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.
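The moisture maximization step at issue above scales an observed storm depth by the ratio of maximum to storm-representative precipitable water, which is why the choice of dew point values directly moves the PMP estimate. A minimal sketch; the ratio cap and all numbers are illustrative assumptions, not values from any HMR or the reviews discussed:

```python
def moisture_maximized_depth(observed_depth_in, storm_pw_in, max_pw_in, cap=1.7):
    """In-place moisture maximization: scale the observed storm rainfall depth
    by the ratio of maximum precipitable water (from, e.g., 100-year dew
    points) to the storm-representative precipitable water. The cap on the
    ratio is an assumed practical limit, not a prescribed standard."""
    ratio = min(max_pw_in / storm_pw_in, cap)
    return observed_depth_in * ratio

# Illustrative: 10 in observed storm depth, storm-representative precipitable
# water 1.8 in, climatological maximum precipitable water 2.34 in
print(moisture_maximized_depth(10.0, 1.8, 2.34))  # → 13.0
```

A lower storm-representative dew point (smaller storm precipitable water) inflates the ratio, while a lower 100-year dew point deflates it; both levers appear in the expert-judgement issues identified above.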
NASA Technical Reports Server (NTRS)
Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.
2017-01-01
We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that the CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA are in between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3.
We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.
Improved biovolume estimation of Microcystis aeruginosa colonies: A statistical approach.
Alcántara, I; Piccini, C; Segura, A M; Deus, S; González, C; Martínez de la Escalera, G; Kruk, C
2018-05-27
The Microcystis aeruginosa complex (MAC) clusters many of the most common freshwater and brackish bloom-forming cyanobacteria. In monitoring protocols, biovolume estimation is a common approach to determine the biomass of MAC colonies and is useful for prediction purposes. Biovolume (μm³ mL⁻¹) is calculated by multiplying organism abundance (org L⁻¹) by colonial volume (μm³ org⁻¹). Colonial volume is estimated based on geometric shapes and requires accurate measurements of dimensions using optical microscopy. This poses a trade-off between easy-to-measure but low-accuracy simple shapes (e.g. sphere) and time-costly but high-accuracy complex shapes (e.g. ellipsoid). The effects of overestimation on ecological studies and on management decisions associated with harmful blooms are significant because of the large sizes of MAC colonies. In this work, we aimed to increase the precision of MAC biovolume estimations by developing a statistical model based on two easy-to-measure dimensions. We analyzed field data from a wide environmental gradient (800 km) spanning freshwater to estuarine and seawater. We measured length, width and depth from ca. 5700 colonies under an inverted microscope and estimated colonial volume using three different recommended geometrical shapes (sphere, prolate spheroid and ellipsoid). Because of the non-spherical shape of MAC, the ellipsoid gave the most accurate approximation, whereas the sphere overestimated colonial volume (3-80), especially for large colonies (MLD higher than 300 μm). The ellipsoid requires measuring three dimensions and is time-consuming. Therefore, we constructed different statistical models to predict organism depth based on length and width. Splitting the data into training (2/3) and test (1/3) sets, all models resulted in low average training (1.41-1.44%) and testing (1.3-2.0%) errors. The models were also evaluated using three other independent datasets.
The multiple linear model was finally selected to calculate MAC volume as an ellipsoid based on length and width. This work contributes to a better estimation of MAC volume, applicable to monitoring programs as well as to ecological research.
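The geometric volume formulas underlying the sphere-vs-ellipsoid comparison are straightforward; a minimal sketch with invented colony dimensions (the overestimation factor for the sphere built on the longest axis is L²/(W·D)):

```python
import math

def volume_sphere(length_um):
    """Sphere volume with the colony's longest dimension as diameter."""
    return math.pi / 6.0 * length_um ** 3

def volume_ellipsoid(length_um, width_um, depth_um):
    """Ellipsoid volume from the three principal dimensions: (pi/6) * L * W * D."""
    return math.pi / 6.0 * length_um * width_um * depth_um

def biovolume(abundance_org_per_mL, colonial_volume_um3):
    """Biovolume = abundance x mean colonial volume (units per the protocol)."""
    return abundance_org_per_mL * colonial_volume_um3

# A flattened 300 x 200 x 100 um colony: the sphere on the longest dimension
# overestimates the ellipsoid volume by 300^2 / (200 * 100)
v_e = volume_ellipsoid(300, 200, 100)
v_s = volume_sphere(300)
print(round(v_s / v_e, 1))  # → 4.5
```

A regression predicting the hard-to-measure depth D from L and W, as the paper proposes, lets the ellipsoid formula be applied with only two measured dimensions.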
Takada, Junya; Honda, Norihiro; Hazama, Hisanao; Ioritani, Naomasa
2016-01-01
Background and Aims: Laser vaporization of the prostate is a promising, less invasive treatment for benign prostatic hyperplasia (BPH) via the photothermal effect. In order to develop safer and more effective laser vaporization of the prostate, it is essential to set optimal irradiation parameters based on quantitative evaluation of the temperature distribution and the thermally denatured depth in prostate tissue. Method: A simulation model was therefore devised combining light propagation and heat transfer calculations, and the vaporized and thermally denatured depths were estimated with it. Results: The results of the simulation were compared with those of an ex vivo experiment and a clinical trial. Based on the accumulated data, the vaporized depth strongly depended on the distance between the optical fiber and the prostate tissue, suggesting that contact laser irradiation vaporizes the prostate tissue most effectively. Additionally, comprehensive analysis of the thermally denatured depth suggested that laser irradiation at a distance of 3 mm between the optical fiber and the prostate tissue is useful for hemostasis. Conclusions: This study enabled quantitative and reproducible analysis of laser vaporization for BPH and will play a role in clarifying the safety and efficacy of this treatment. PMID:28765672
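The full simulation couples light propagation with heat transfer; a much simpler illustration of why denatured depth depends on delivered fluence assumes single-exponential Beer-Lambert attenuation. The effective attenuation coefficient and threshold fluence here are invented for demonstration, not tissue data from the study:

```python
import math

def denatured_depth_mm(fluence_surface, mu_eff_per_mm, threshold):
    """Depth at which fluence attenuated as F(z) = F0 * exp(-mu_eff * z)
    falls to an assumed thermal-denaturation threshold (Beer-Lambert sketch;
    ignores scattering anisotropy and heat diffusion)."""
    if fluence_surface <= threshold:
        return 0.0
    return math.log(fluence_surface / threshold) / mu_eff_per_mm

# Illustrative: 100 J/cm^2 at the surface, mu_eff = 1.2 /mm, 20 J/cm^2 threshold
print(round(denatured_depth_mm(100.0, 1.2, 20.0), 2))  # → 1.34
```

Increasing fiber-to-tissue distance spreads the beam and lowers the surface fluence, which in this simplified picture directly shrinks the affected depth, qualitatively consistent with the distance dependence reported above.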
Materials characterization on efforts for ablative materials
NASA Technical Reports Server (NTRS)
Tytula, Thomas P.; Schad, Kristin C.; Swann, Myles H.
1992-01-01
Experimental efforts to develop a new procedure to measure char depth in carbon phenolic nozzle material are described. Using a Shore Type D durometer, hardness profiles were mapped across post-fired sample blocks and specimens from a fired rocket nozzle. Linear regression was used to estimate the char depth. Results are compared to those obtained from computed tomography in a comparative experiment. There was no significant difference in the depth estimates obtained by the two methods.
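One plausible reading of the regression step is fitting a line to the hardness-vs-depth profile across the char/virgin transition and solving for the depth at which hardness reaches a virgin-material threshold. The profile and threshold below are invented for illustration, not the report's data:

```python
def char_depth(depths_in, hardness, threshold):
    """Least-squares fit hardness = a + b * depth, then solve for the depth at
    which the fitted line reaches an assumed virgin-material hardness."""
    n = len(depths_in)
    mx = sum(depths_in) / n
    my = sum(hardness) / n
    b = sum((x - mx) * (y - my) for x, y in zip(depths_in, hardness)) \
        / sum((x - mx) ** 2 for x in depths_in)
    a = my - b * mx
    return (threshold - a) / b

# Illustrative Shore D readings rising with depth into the charred block
print(round(char_depth([0.0, 0.1, 0.2, 0.3, 0.4],
                       [40, 50, 60, 70, 80], 75), 2))  # → 0.35
```

The attraction of such a hardness-based procedure is that it needs only a durometer, whereas the computed-tomography comparison requires far more elaborate equipment for the same answer.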
NASA Astrophysics Data System (ADS)
Salmi, L. M.; French, S. W.; Romanowicz, B. A.
2014-12-01
Resolving subduction zones in the shallow upper mantle using global shear velocity tomography has long been a challenge, likely due to the rather narrow signature of the slabs down to ~400 km depth compared to the wavelength of the fundamental mode and overtone surface waves on which resolution of Vs at these depths often relies. On the other hand, models based on P wave travel times exhibit higher resolution in subduction zone regions, owing to both the higher frequencies of the P waves and an optimal illumination geometry. Conversely, the global Vs models typically have better resolution near the CMB, because of constraints provided by Sdiff and multiple ScS phases. Here we compare the morphology of subducted slabs throughout the mantle, as imaged by both a recent Vp model (GAP_P4, Fukao and Obayashi, 2013) and a new Vs model (SEMUCB-WM1, French and Romanowicz, GJI, in revision). The latter model was developed by inverting body waveforms (to 32 s) and fundamental-mode and overtone surface waveforms (to 60 s), with the forward seismic wavefield computed using the spectral element method. While the S velocity model is still "fuzzier" than the Vp model, it tracks the behavior of slabs trapped in the transition zone, and of those ponding around 1000 km depth. We quantify the high correlation of the regions of fast Vp and Vs anomalies, and thus derive a robust estimate of the ratio R = dlnVs/dlnVp as a function of depth in regions of faster-than-average velocity. We compare these results with estimates obtained from other combinations of available P and S models, as well as with theoretical values from mineral physics calculations. Estimating R in slow velocity regions is more difficult, as resolution varies more among models. Here we compare slow velocity images in SEMUCB-WM1 with those of other recent Vs and Vp models and attempt to estimate R in those regions as well.
Interestingly, we note that, in the SEMUCB-WM1 model, some of the columnar, lower than average velocity regions "rising" from the CMB through the lower mantle appear to be deflected horizontally at ~1000 km depth. This observation suggests that whatever mechanism causes the resistance to downward flow in subduction zones at this depth may also affect upwellings.
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is important for applications such as search and rescue, detection of illegal immigration, and harbor security. Range estimation, moreover, is critical for precisely detecting a target. A range-gated laser imaging sensor is well suited to high-accuracy ranging, especially at night and without moonlight. Generally, before a target can be detected, the gate delay time must be scanned until the target is captured. The range-gated imaging sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, the sensor operates in passive mode, capturing scenes with the ICCD only; once an object appears in the monitored area, we obtain a coarse range to the target from the imaging geometry and projective transform. Then, the sensor switches to gate-viewing mode: applying microsecond laser pulses and a matched sensor gate width, we obtain the range to the target from at least two consecutive images with trapezoid-shaped range-intensity profiles. Based on the first step, we can compute the rough range and quickly fix the delay time at which the target is detected. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced image data processing. With these two steps, we can quickly obtain the distance between the object and the sensor.
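The gate-viewing geometry rests on the round-trip relation R = c·t/2, and one simple variant of the trapezoid-profile super-resolution step locates the target inside the gate from the intensity ratio of two successive gate images. The interpolation scheme and all numbers below are illustrative assumptions, not the paper's exact algorithm:

```python
C = 3.0e8  # speed of light, m/s

def gate_range_m(delay_ns):
    """Range to the near edge of a gate opened delay_ns after the laser
    pulse: R = c * t / 2 (round trip)."""
    return C * delay_ns * 1e-9 / 2.0

def interpolated_range_m(delay_ns, gate_depth_m, I_a, I_b):
    """Sub-gate range from two successive gate images whose trapezoid-shaped
    range-intensity profiles overlap: the ratio of the second image's
    intensity to the total interpolates the position inside the gate."""
    return gate_range_m(delay_ns) + gate_depth_m * I_b / (I_a + I_b)

# Illustrative: a gate opened 2000 ns after the pulse starts at 300 m and
# spans 300 m; intensities 60 and 40 in the two overlapping gate images
print(gate_range_m(2000.0))                             # → 300.0
print(interpolated_range_m(2000.0, 300.0, 60.0, 40.0))  # → 420.0
```

The payoff is that two gate images per target replace a fine scan of many delay steps, which is the data-processing reduction claimed above.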
Determination of the maximum-depth to potential field sources by a maximum structural index method
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources can be a significant aid to data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, for example by using the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of the method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimated maximum depth agrees with the seismic information.
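The fmax/(∂f/∂x)max ratio at the heart of the Bott-Smith-style rules can be demonstrated on a synthetic profile. The sketch below uses a proportionality constant of 0.86, which follows analytically for a point-mass gravity anomaly; it is an illustration of the ratio criterion, not the authors' Euler/DEXP implementation:

```python
def max_depth_point_mass(xs, g):
    """Limiting depth from a gravity profile via the ratio criterion
    z_max ≈ 0.86 * g_max / |dg/dx|_max (constant derived for a point mass;
    the field and gradient maxima are all that is needed)."""
    g_max = max(g)
    grad = [(g[i + 1] - g[i - 1]) / (xs[i + 1] - xs[i - 1])
            for i in range(1, len(g) - 1)]
    return 0.86 * g_max / max(abs(d) for d in grad)

# Synthetic point-mass anomaly buried at z0 = 2 km (amplitude normalized):
# g(x) = z0 / (x^2 + z0^2)^(3/2)
z0 = 2.0
xs = [i * 0.01 - 10.0 for i in range(2001)]
g = [z0 / (x * x + z0 * z0) ** 1.5 for x in xs]
print(round(max_depth_point_mass(xs, g), 2))  # ≈ 2.0
```

Because a point mass has the highest admissible structural index for gravity, the depth it yields is the deepest compatible with the observed maxima, which is exactly the role Nmax plays in the proposed method.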
Diverse Staghorn Coral Fauna on the Mesophotic Reefs of North-East Australia
Muir, Paul; Wallace, Carden; Bridge, Tom C. L.; Bongaerts, Pim
2015-01-01
Concern for the future of reef-building corals in conditions of rising sea temperatures combined with recent technological advances has led to a renewed interest in documenting the biodiversity of mesophotic coral ecosystems (MCEs) and their potential to provide lineage continuation for coral taxa. Here, we examine species diversity of staghorn corals (genera Acropora and Isopora) in the mesophotic zone (below 30 m depth) of the Great Barrier Reef and western Coral Sea. Using specimen-based records we found 38 staghorn species in the mesophotic zone, including three species newly recorded for Australia and five species that only occurred below 30 m. Staghorn corals became scarce at depths below 50 m but were found growing in situ to 73 m depth. Of the 76 staghorn coral species recorded for shallow waters (depth ≤ 30 m) in north-east Australia, 21% extended to mesophotic depths with a further 22% recorded only rarely to 40 m depth. Extending into the mesophotic zone provided shallow water species no significant advantage in terms of their estimated global range-size relative to species restricted to shallow waters (means 86.2 × 10⁶ km² and 85.7 × 10⁶ km² respectively, p = 0.98). We found four staghorn coral species at mesophotic depths on the Great Barrier Reef that were previously considered rare and endangered on the basis of their limited distribution in central Indonesia and the far western Pacific. Colonies below 40 m depth showed laterally flattened branches, light and fragile skeletal structure and increased spacing between branches and corallites. The morphological changes are discussed in relation to decreased light, water movement and down-welling coarse sediments. Staghorn corals have long been regarded as typical shallow-water genera, but here we demonstrate the significant contribution of this group to the region’s mesophotic fauna and the importance of considering MCEs in reef biodiversity estimates and management. PMID:25714341
An Empirical Estimation of Underground Thermal Performance for Malaysian Climate
NASA Astrophysics Data System (ADS)
Mukhtar, Azfarizal; Zamri Yusoff, Mohd; Khai Ching, Ng
2017-12-01
In this study, the soil temperature profile was computed based on harmonic heat transfer equations at various depths. Meteorological data from January 1st to December 31st, 2016, measured by local weather stations, were employed. The findings indicated that as the soil depth increases, the temperature changes become negligible and the soil temperature approaches the mean annual air temperature. The results were also compared with those reported by other researchers. Overall, the predicted soil temperature can be readily adopted in various engineering applications in Malaysia.
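The harmonic heat-transfer model referred to in this abstract has a standard closed-form solution; a minimal sketch, with illustrative values for the mean temperature, surface amplitude and thermal diffusivity (none taken from the paper):

```python
import math

def soil_temperature(z_m, t_days, t_mean=27.0, amp_surface=3.0,
                     alpha_m2_day=0.05, period_days=365.0):
    """Harmonic solution of the 1D heat conduction equation:
    T(z, t) = T_mean + A0 * exp(-z/d) * sin(2*pi*t/P - z/d),
    where d = sqrt(2*alpha/omega) is the damping depth."""
    omega = 2.0 * math.pi / period_days
    d = math.sqrt(2.0 * alpha_m2_day / omega)  # damping depth (m)
    return (t_mean + amp_surface * math.exp(-z_m / d)
            * math.sin(omega * t_days - z_m / d))

# The annual swing decays with depth: several damping depths down, the
# soil temperature is nearly constant at the mean annual air temperature.
surface_swing = (max(soil_temperature(0.0, t) for t in range(365))
                 - min(soil_temperature(0.0, t) for t in range(365)))
deep_swing = (max(soil_temperature(10.0, t) for t in range(365))
              - min(soil_temperature(10.0, t) for t in range(365)))
```

This reproduces the qualitative finding: a full-amplitude swing at the surface and a negligible one at depth.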
Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus).
Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi
2016-08-15
Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg(-1), closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m(-3) at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m(-3), which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. © 2016. Published by The Company of Biologists Ltd.
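The drag-plus-buoyancy balance fitted in this study can be illustrated with a simplified forward model of glide acceleration; all coefficients below (body mass, tissue density, gas volume, drag term) are illustrative assumptions, not the paper's Bayesian estimates:

```python
import math

RHO_SW = 1027.0  # seawater density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def glide_acceleration(speed, depth_m, pitch_rad, body_mass=5000.0,
                       tissue_density=1031.0, gas_vol_surface_m3=0.14,
                       drag_term=0.01):
    """Along-track acceleration during a glide:
    - quadratic drag decelerates in proportion to speed^2;
    - tissue buoyancy depends on (rho_sw - rho_tissue);
    - gas buoyancy shrinks with depth by Boyle's law.
    Negative pitch = descent."""
    tissue_vol = body_mass / tissue_density
    gas_vol = gas_vol_surface_m3 / (1.0 + depth_m / 10.0)  # Boyle's law
    net_buoy_force = G * RHO_SW * (tissue_vol + gas_vol) - G * body_mass
    return (net_buoy_force * math.sin(pitch_rad) / body_mass
            - drag_term * speed ** 2)
```

With these numbers a descending whale is aided by negative buoyancy at depth (gas compressed away) but opposed by positive buoyancy near the surface, which is the asymmetry the tags observe as more gliding on one dive phase than the other.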
NASA Astrophysics Data System (ADS)
Ranaldi, Massimo; Lelli, Matteo; Tarchini, Luca; Carapezza, Maria Luisa; Patera, Antonio
2016-04-01
High-enthalpy geothermal fields of Central Italy are hosted in deeply fractured carbonate reservoirs occurring in thermally anomalous and seismically active zones. The Mts. Sabatini volcanic district, located north of Rome, has interesting deep temperatures (T), but it is characterized by low to very low seismicity and by low permeability in the reservoir rocks (mostly because of hydrothermal self-sealing processes). Low PCO2 facilitates the complete sealing of the reservoir fractures, preventing hot fluids from rising and determining a low CO2 flux at the surface. Conversely, a high CO2 flux generally reflects a high pressure of CO2, suggesting that an active geothermal reservoir is present at depth. In the Mts. Sabatini district, the Caldara of Manziana (CM) is the only zone characterized by a very high CO2 flux (188 tons/day from a surface of 0.15 km2), considering both the diffuse and viscous CO2 emission. This suggests the likely presence of an actively degassing geothermal reservoir at depth. The emitted gas is dominated by CO2 (>97 vol.%). Triangular irregular networks (TINs) have been used to represent the morphology of the bottom of the surficial volcanic deposits, the thickness of the impervious formation and the top of the geothermal reservoir. The TINs, integrated with T-gradient and deep-well data, allowed us to estimate the depth and the temperature of the top of the geothermal reservoir at ~1000 m below the surface and ~130°C, respectively. These estimates are in fair agreement with those obtained by gas chemistry (818
NASA Astrophysics Data System (ADS)
Parman, S. W.; Dann, J. C.; Grove, T. L.; de Wit, M. J.
1997-08-01
This paper provides new constraints on the crystallization conditions of the 3.49 Ga Barberton komatiites. The compositional evidence from igneous pyroxene in the olivine spinifex komatiite units indicates that the magma contained significant quantities of dissolved H2O. Estimates are made from comparisons of the compositions of pyroxene preserved in Barberton komatiites with pyroxene produced in laboratory experiments at 0.1 MPa (1 bar) under anhydrous conditions and at 100 and 200 MPa (1 and 2 kbar) under H2O-saturated conditions on an analog Barberton composition. Pyroxene thermobarometry on high-Ca clinopyroxene compositions from ten samples requires a range of minimum magmatic water contents of 6 wt.% or greater at the time of pyroxene crystallization and minimum emplacement pressures of 190 MPa (6 km depth). Since high-Ca pyroxene appears after 30% crystallization of olivine and spinel, the liquidus H2O contents could be 4 to 6 wt.% H2O. The liquidus temperature of the Barberton komatiite composition studied is between 1370 and 1400°C at 200 MPa under H2O-saturated conditions. When compared to the temperature-depth regime of modern melt generation environments, the komatiite mantle source temperatures are 200°C higher than the hydrous mantle melting temperatures inferred in modern subduction zone environments and 100°C higher than mean mantle melting temperatures estimated at mid-ocean ridges. When compared to previous estimates of komatiite liquidus temperatures, melting under hydrous conditions occurs at temperatures that are ˜ 250°C lower than previous estimates for anhydrous komatiite. Mantle melting by near-fractional, adiabatic decompression takes place in a melting column that spans ˜ 38 km depth range under hydrous conditions. This depth interval for melting is only slightly greater than that observed in modern mid-ocean ridge environments. 
In contrast, anhydrous fractional melting models of komatiite occur over a larger depth range (˜ 130 km) and place the base of the melting column into the transition zone.
NASA Astrophysics Data System (ADS)
Herrero-Gil, Andrea; Ruiz, Javier; Egea-González, Isabel; Romeo, Ignacio
2017-04-01
Lobate scarps are tectonic structures considered as the topographic expression of thrust faults. For this study we have chosen three large lobate scarps (Ogygis Rupes, Bosporos Rupes and a third unnamed one) located in Aonia Terra, in the southern hemisphere of Mars near the northeast margin of the Argyre impact basin. These lobate scarps strike parallel to the edge of Thaumasia in this area, showing a roughly arcuate to linear form and an asymmetric cross section with a steep frontal scarp and a gently dipping back scarp. The asymmetry in the cross sections suggests that the three lobate scarps were generated by ESE-vergent thrust faults. Two complementary methods were used to analyze the faults underlying these lobate scarps based on Mars Orbiter Laser Altimeter data and the Mars imagery available: (i) analyzing topographic profiles together with the horizontal shortening estimations from cross-cut craters to create balanced cross sections on the basis of thrust fault propagation folding [1]; (ii) using a forward mechanical dislocation method [2], which predicts fault geometry by comparing model outputs with real topography. The objective is to obtain fault geometry parameters such as the minimum horizontal offset, the dip angle and the depth of faulting of each underlying fault. By comparing the results obtained by both methods we estimate a preliminary depth of faulting between 15 and 26 kilometers for this zone between Thaumasia and the Argyre basin. The significant sizes of the faults underlying these three lobate scarps suggest that their detachments are located at a main rheological change. Estimates of the depth of faulting in similar lobate scarps on Mars or Mercury [3] have been associated with the depth of the brittle-ductile transition. [1] Suppe (1983), Am. J. Sci., 283, 648-721; Seeber and Sorlien (2000), Geol. Soc. Am. Bull., 112, 1067-1079. [2] Toda et al. (1998) JGR, 103, 24543-24565. [3] e.g. Schultz and Watters (2001) Geophys. Res.
Lett., 28, 4659-4662; Ruiz et al. (2008) EPSL, 270, 1-12; Egea-Gonzalez et al. (2012) PSS, 60, 193-198; Mueller et al. (2014) EPSL, 408, 100-109.
Olson, S.A.; Ayotte, J.D.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.5 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge, which was less than the 100-year discharge. The contraction scour depths do not take the concrete channel bed under the bridge into account. Abutment scour ranged from 8.7 to 18.2 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
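The consistency step this abstract describes, linking per-row curb points into one optimal path by dynamic programming, can be sketched with a simple stand-in objective (candidate score minus a column-smoothness penalty); the scoring and penalty form are assumptions, not the paper's Markov-chain model:

```python
def best_curb_path(rows, smooth_penalty=1.0):
    """Link per-row curb candidates into one path by dynamic programming.
    rows: list of rows; each row is a list of (column, score) candidates.
    Maximizes sum(score) - smooth_penalty * sum(|col_i - col_{i-1}|).
    Returns the chosen column for each row."""
    n = len(rows)
    best = [c[1] for c in rows[0]]          # best total ending at row 0
    back = [[0] * len(r) for r in rows]     # backpointers
    for i in range(1, n):
        new_best = []
        for j, (col, score) in enumerate(rows[i]):
            trans = [best[k] - smooth_penalty * abs(col - rows[i - 1][k][0])
                     for k in range(len(rows[i - 1]))]
            k_star = max(range(len(trans)), key=trans.__getitem__)
            back[i][j] = k_star
            new_best.append(trans[k_star] + score)
        best = new_best
    # backtrack from the best final candidate
    j = max(range(len(best)), key=best.__getitem__)
    path = [rows[-1][j][0]]
    for i in range(n - 1, 1 - 1, -1):
        j = back[i][j]
        path.append(rows[i - 1][j][0])
    return path[::-1]
```

On synthetic candidates the smoothness term keeps the path on one continuous curb rather than jumping between distant columns.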
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
Mori, J.
1991-01-01
Event record sections, which are constructed by plotting seismograms from many closely spaced earthquakes recorded on a few stations, show multiple free-surface reflections (PP, PPP, PPPP) of the P wave in the Imperial Valley. The relative timing of these arrivals is used to estimate the strength of the P-wave velocity gradient within the upper 5 km of the sediment layer. Consistent with previous studies, a velocity model with a value of 1.8 km/sec at the surface increasing linearly to 5.8 km/sec at a depth of 5.5 km fits the data well. The relative amplitudes of the P and PP arrivals are used to estimate the source depth for the aftershock distributions of the Elmore Ranch and Superstition Hills main shocks. Although the depth determination has large uncertainties, both the Elmore Ranch and Superstition Hills aftershock sequences appear to have similar depth distributions in the range of 4 to 10 km.
Modeling intracavitary heating of the uterus by means of a balloon catheter
NASA Astrophysics Data System (ADS)
Olsrud, Johan; Friberg, Britt; Rioseco, Juan; Ahlgren, Mats; Persson, Bertil R. R.
1999-01-01
Balloon thermal endometrial destruction (TED) is a recently developed method to treat heavy menstrual bleeding (menorrhagia). Numerical simulations of this treatment by use of the finite element method were performed. The mechanical deformation and the resulting stress distribution when a balloon catheter is expanded within the uterine cavity was estimated from structural analysis. Thermal analysis was then performed to estimate the depth of tissue coagulation (temperature > 55 °C) in the uterus during TED. The estimated depth of coagulation, after 30 min heating with an intracavity temperature of 75 °C, was approximately 9 mm when blood flow was disregarded. With uniform normal blood flow, the depth of coagulation decreased to 3 - 4 mm. Simulations with varying intracavity temperatures and blood flow rates showed that both parameters should be of major importance to the depth of coagulation. The influence of blood flow was less when the pressure due to the balloon was also considered (5 - 6 mm coagulation depth with normal blood flow).
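The role of blood flow in limiting coagulation depth can be illustrated with a steady-state one-dimensional bioheat sketch: with uniform perfusion the excess temperature decays roughly as exp(-x/L), so the 55 °C isotherm sits at a finite depth. The perfusion length L below is an assumed illustrative value, not taken from the paper's finite element model:

```python
import math

def coagulation_depth_mm(t_cavity=75.0, t_body=37.0, t_coag=55.0,
                         perfusion_length_mm=4.0):
    """Depth at which tissue temperature drops to the coagulation
    threshold, assuming exponential decay of the excess temperature:
    T(x) = T_body + (T_cavity - T_body) * exp(-x / L), solved for
    T(x) = T_coag. L shrinks as blood flow increases."""
    return perfusion_length_mm * math.log(
        (t_cavity - t_body) / (t_coag - t_body))
```

With L = 4 mm this lands near the 3 - 4 mm coagulation depth the simulations report for normal blood flow; a larger L (weaker perfusion) deepens the coagulated layer, consistent with the ~9 mm no-flow result.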
Gravity profiles across the Uyaijah Ring structure, Kingdom of Saudi Arabia
Gettings, M.E.; Andreasen, G.E.
1987-01-01
The resulting structural model, based on profile fits to gravity responses of three-dimensional models and excess-mass calculations, gives a depth estimate to the base of the complex of 4.75 km. The contacts of the complex are inferred to be steeply dipping inward along the southwest margin of the structure. To the north and east, however, the basal contact of the complex dips more gently inward (about 30 degrees). The ring structure appears to be composed of three laccolith-shaped plutons; two are granitic in composition and make up about 85 percent of the volume of the complex, and one is granodioritic and comprises the remaining 15 percent. The source area for the plutons appears to be in the southwest quadrant of the Uyaijah ring structure. A northwest-trending shear zone cuts the northern half of the structure and contains mafic dikes that have a small but identifiable gravity-anomaly response. The structural model agrees with models derived from geological interpretation except that the estimated depth to which the structure extends is decreased considerably by the gravity results.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than that using the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
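One of the interpolators compared here, inverse distance weighting, is simple enough to sketch directly (the study found natural neighbor more accurate, so this is just the baseline method, with an assumed power of 2):

```python
def idw_interpolate(samples, query, power=2.0):
    """Inverse distance weighted estimate at `query` from sparse samples.
    samples: list of ((x, y), value) pairs, e.g. depth soundings.
    Weight of each sample is 1 / distance**power."""
    num = den = 0.0
    for (x, y), v in samples:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den
```

The estimate is always a convex combination of the sample values, so interpolated depths stay within the observed range, one reason IDW behaves conservatively on sparse stream transects.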
Comparison of Soil Quality Index Using Three Methods
Mukherjee, Atanu; Lal, Rattan
2014-01-01
Assessment of management-induced changes in soil quality is important to sustaining high crop yield. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established; thus, a strong need exists for developing a user-friendly and credible SQI through comparison of various available methods. Therefore, the objective of this article is to compare three widely used methods to estimate SQI using the data collected from 72 soil samples from three on-farm study sites in Ohio. Additionally, a challenge lies in establishing a correlation between crop yield and SQI calculated either depth-wise or over combined soil layers, as a standard methodology is not yet available and this question has received little attention to date. Predominant soils of the study included one organic (Mc), and two mineral (CrB, Ko) soils. Three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged from 0 to 0.9 (1 being the maximum SQI). In general, SQIs did not significantly differ at depths under any method, suggesting that soil quality did not significantly differ for different depths at the studied sites. Additionally, data indicate that SQI-3 was most strongly correlated with crop yield; the correlation coefficient ranged from 0.74 to 0.78. All three SQIs were significantly correlated (r = 0.92–0.97) to each other and with crop yield (r = 0.65–0.79). Separate analyses by crop variety revealed that correlation was low, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
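The additive indices compared here (SQI-1 and SQI-2) have a simple generic form: each indicator is scored to [0, 1] and the scores are summed with weights; a minimal sketch, with the scoring function and weights as generic assumptions rather than the paper's exact choices:

```python
def score_more_is_better(x, x_min, x_max):
    """Linear [0, 1] scoring for a 'more is better' soil indicator,
    clamped at the ends of the observed range."""
    return min(1.0, max(0.0, (x - x_min) / (x_max - x_min)))

def weighted_additive_sqi(scores, weights):
    """Weighted additive SQI over pre-scored indicators (weights sum
    to 1). The simple additive SQI is the equal-weights special case."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))
```

The statistically modeled SQI-3 differs only in how the weights are derived (from PCA loadings instead of expert judgment), which is why all three indices end up highly correlated.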
Exchange across the sediment-water interface quantified from porewater radon profiles
NASA Astrophysics Data System (ADS)
Cook, Peter G.; Rodellas, Valentí; Andrisoa, Aladin; Stieglitz, Thomas C.
2018-04-01
Water recirculation through permeable sediments induced by wave action, tidal pumping and currents enhances the exchange of solutes and fine particles between sediments and overlying waters, and can be an important hydro-biogeochemical process. In shallow water, most of the recirculation is likely to be driven by the interaction of wave-driven oscillatory flows with bottom topography, which can induce pressure fluctuations at the sediment-water interface on very short timescales. Tracer-based methods provide the most reliable means for characterizing this short-timescale exchange. However, the commonly applied approaches only provide a direct measure of the tracer flux. Estimating water fluxes requires characterizing the tracer concentration in discharging porewater; this implies collecting porewater samples at shallow depths (usually a few mm, depending on the hydrodynamic dispersivity), which is very difficult with commonly used techniques. In this study, we simulate observed vertical profiles of radon concentration beneath shallow coastal lagoons using a simple water recirculation model that allows us to estimate water exchange fluxes as a function of depth below the sediment-water interface. Estimated water fluxes at the sediment-water interface at our site were 0.18-0.25 m/day, with fluxes decreasing exponentially with depth. Uncertainty in dispersivity is the greatest source of error in exchange flux, and results in an uncertainty of approximately a factor of five.
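The depth dependence reported here, an interface flux that decays exponentially below the sediment-water interface, is easy to write down; the interface flux below is within the paper's 0.18-0.25 m/day range, but the decay depth is an assumed illustrative value:

```python
import math

def recirculation_flux(z_m, q0_m_per_day=0.2, decay_depth_m=0.05):
    """Water exchange flux as a function of depth z below the
    sediment-water interface, assuming exponential decay:
    q(z) = q0 * exp(-z / z0)."""
    return q0_m_per_day * math.exp(-z_m / decay_depth_m)
```

One e-folding depth down, the exchange flux has already dropped to ~37% of its interface value, which is why porewater sampled even a few centimeters deep can misrepresent the discharging endmember.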
NASA Astrophysics Data System (ADS)
Chi, Wu-Cheng
2016-04-01
A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients under the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method by Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity. We used a 3D seismic dataset offshore SW Taiwan to test the method. Different from previous studies, we have considered 3D topographic effects using finite element modelling and also depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth as found from the nearby core samples, we successfully used a 99% methane and 1% ethane gas hydrate phase boundary to derive a sub-bottom depth vs. temperature plot which is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.
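The basic inference is a one-liner once the BSR depth and the phase-boundary temperature at that depth are known; the example values below are illustrative (chosen only to land near the ~40 °C/km regional gradient), not the paper's data:

```python
def geothermal_gradient_c_per_km(t_bsr_c, t_seafloor_c, bsr_depth_mbsf):
    """Geothermal gradient from the BSR: the hydrate phase boundary gives
    the temperature at the BSR depth, so the gradient is simply
    (T_BSR - T_seafloor) / depth. Assumes a linear temperature profile."""
    return (t_bsr_c - t_seafloor_c) / (bsr_depth_mbsf / 1000.0)
```

The whole difficulty the paper addresses sits in the inputs: getting T_BSR right requires the correct gas composition and salinity for the phase boundary, and the BSR depth must be corrected for 3D topographic effects.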
Theoretical performance model for single image depth from defocus.
Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme
2014-12-01
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
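The depth-to-blur relation that SIDFD inverts follows from thin-lens geometry; a minimal sketch of the geometric circle of confusion (no diffraction, parameter values in the test are arbitrary):

```python
def blur_diameter_mm(obj_dist_mm, focus_dist_mm, focal_mm, f_number):
    """Geometric defocus blur (circle of confusion) on the sensor for an
    object at obj_dist when the camera is focused at focus_dist:
    blur = A * f * |d - d_f| / (d * (d_f - f)), with aperture A = f/N.
    SIDFD estimates depth by inverting this relation; the CRB studied in
    the paper bounds how precisely that inversion can be done."""
    aperture_mm = focal_mm / f_number
    return (aperture_mm * focal_mm * abs(obj_dist_mm - focus_dist_mm)
            / (obj_dist_mm * (focus_dist_mm - focal_mm)))
```

The blur vanishes at the in-focus plane and grows away from it, which is why SIDFD performance degrades near the IFP (no depth signal) and why a wider aperture improves depth sensitivity.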
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
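The mechanism behind ensemble-based parameter estimation can be shown on a toy scalar model: the unknown parameter is appended to the state and corrected through its ensemble covariance with the observed variable. Everything below (the model, prior, noise levels) is an illustrative stand-in for the coupled-GCM twin experiment, not the paper's setup:

```python
import random

def enkf_parameter_estimation(a_true=0.9, n_ens=50, n_steps=200,
                              obs_noise=0.1, seed=0):
    """Augmented-state ensemble Kalman filter for the scalar model
    x_{t+1} = a * x_t + 1, with unknown parameter a. The parameter has
    no dynamics of its own; it is updated only through its sampled
    covariance with the forecast state."""
    rng = random.Random(seed)
    x_truth = 1.0
    ens_x = [rng.gauss(1.0, 0.5) for _ in range(n_ens)]
    ens_a = [rng.gauss(0.5, 0.2) for _ in range(n_ens)]  # biased prior
    for _ in range(n_steps):
        x_truth = a_true * x_truth + 1.0          # truth run
        y = x_truth + rng.gauss(0.0, obs_noise)   # noisy observation
        ens_x = [a * x + 1.0 for x, a in zip(ens_x, ens_a)]  # forecast
        mx = sum(ens_x) / n_ens
        ma = sum(ens_a) / n_ens
        cov_xx = sum((x - mx) ** 2 for x in ens_x) / (n_ens - 1)
        cov_ax = sum((a - ma) * (x - mx)
                     for a, x in zip(ens_a, ens_x)) / (n_ens - 1)
        gain_x = cov_xx / (cov_xx + obs_noise ** 2)
        gain_a = cov_ax / (cov_xx + obs_noise ** 2)
        for i in range(n_ens):  # perturbed-observation analysis update
            innov = y + rng.gauss(0.0, obs_noise) - ens_x[i]
            ens_x[i] += gain_x * innov
            ens_a[i] += gain_a * innov
    return sum(ens_a) / n_ens
```

Repeated assimilation of the observed variable pulls the biased parameter prior toward the true value, the same effect that reduced the SPD error in the coupled twin experiment.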
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.
Ligorio, Gabriele; Sabatini, Angelo Maria
2015-12-19
In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas a lower correlation was found for an axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface
NASA Astrophysics Data System (ADS)
Keitzl, T.; Mellado, J. P.; Notz, D.
2016-12-01
The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100, rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors of up to 40% in ice-ablation rates estimated from field measurements if they are based on the three-equation formulation.
Dose equivalent on the Moon contributed from cosmic rays and their secondary particles
NASA Astrophysics Data System (ADS)
Hayatsu, K.; Hareyama, Makoto; Hasebe, N.; Kobayashi, S.; Yamashita, N.
Estimating the radiation dose on and under the lunar surface is quite important for human activity on the Moon and for future lunar bases. The radiation environment on the Moon is much different from that on the Earth. Galactic cosmic rays and solar energetic particles directly penetrate to the lunar surface because the Moon has no atmosphere and no magnetic field. They then generate many secondary particles, such as gamma rays, neutrons and other charged particles, by interacting with the soil beneath the lunar surface. Therefore, estimating the radiation dose from these particles on the surface and in the subsurface of the Moon is essential for safe human activity. In this study, the ambient dose equivalent in the ICRU sphere at the surface and at various depths of the Moon is estimated based on the latest galactic cosmic ray spectrum and the secondary particles it generates, calculated with the Geant4 code. On the surface, the dominant contribution to the dose comes not from protons and helium nuclei but from heavy components of galactic cosmic rays such as iron, while underground, secondary neutrons are the most dominant. In particular, the dose from neutrons becomes maximal at 50-100 g/cm2 of lunar soil depth, because fast neutrons of about 1.0 MeV are mostly produced at this depth and contribute a large dose. On the surface, the dose originating from GCR is quite sensitive to the solar activity cycle, while that from secondary neutrons is less so. Conversely, under the surface, the dose from neutrons is highly sensitive to solar activity through its effect on the flux of galactic cosmic rays. This difference should be considered when shielding against cosmic radiation for human activity on the Moon.
Amplification of postwildfire peak flow by debris
NASA Astrophysics Data System (ADS)
Kean, J. W.; McGuire, L. A.; Rengers, F. K.; Smith, J. B.; Staley, D. M.
2016-08-01
In burned steeplands, the peak depth and discharge of postwildfire runoff can substantially increase from the addition of debris. Yet methods to estimate the increase over water flow are lacking. We quantified the potential amplification of peak stage and discharge using video observations of postwildfire runoff, compiled data on postwildfire peak flow (Qp), and a physically based model. Comparison of flood and debris flow data with similar distributions in drainage area (A) and rainfall intensity (I) showed that the median runoff coefficient (C = Qp/AI) of debris flows is 50 times greater than that of floods. The striking increase in Qp can be explained using a fully predictive model that describes the additional flow resistance caused by the emergence of coarse-grained surge fronts. The model provides estimates of the amplification of peak depth, discharge, and shear stress needed for assessing postwildfire hazards and constraining models of bedrock incision.
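The runoff-coefficient comparison above can be sketched numerically; the basin size, rainfall intensity, and peak discharges below are invented for illustration only:

```python
# Sketch of the runoff coefficient C = Qp / (A * I) used above to compare
# floods and debris flows. All numbers are hypothetical illustrations.

def runoff_coefficient(qp_m3s, area_m2, intensity_ms):
    """Dimensionless runoff coefficient C = Qp / (A * I)."""
    return qp_m3s / (area_m2 * intensity_ms)

area = 1.0e6                   # m^2, a 1 km^2 drainage basin
intensity = 20e-3 / 3600.0     # m/s, i.e. 20 mm/h rainfall
flood_qp = 0.5                 # m^3/s, hypothetical water-flood peak
debris_qp = 25.0               # m^3/s, hypothetical debris-flow peak

c_flood = runoff_coefficient(flood_qp, area, intensity)
c_debris = runoff_coefficient(debris_qp, area, intensity)
print(round(c_debris / c_flood))  # -> 50, the median factor reported above
```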
NASA Astrophysics Data System (ADS)
Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.
2013-09-01
The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower DBMS values are found for the southwestern region. The DBMS is as shallow as 22 km in the southwestern Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
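For context, the conventional centroid method referred to above combines two spectrally derived depths; a minimal sketch, assuming the standard relation Zb = 2·Z0 − Zt and using invented depth values, is:

```python
# Hypothetical illustration of the conventional centroid method: the depth to
# the bottom of magnetic sources is Zb = 2*Z0 - Zt, where Z0 (centroid depth)
# and Zt (depth to top) come from slopes of the radially averaged power
# spectrum in different wavenumber bands. Depth values here are invented.

def dbms(z_centroid_km, z_top_km):
    """Depth to bottom of magnetic sources (km) from centroid and top depths."""
    return 2.0 * z_centroid_km - z_top_km

print(dbms(14.0, 6.0))  # -> 22.0 km, comparable to the shallowest values reported
```

The modified method of the abstract replaces the random-uniform source assumption with a scaling (fractal) source distribution, which changes how Z0 and Zt are read off the spectrum but not this final combination step.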
The nonstationary strain filter in elastography: Part I. Frequency dependent attenuation.
Varghese, T; Ophir, J
1997-01-01
The accuracy and precision of the strain estimates in elastography depend on a myriad of factors. A clear understanding of the various factors (noise sources) that plague strain estimation is essential to obtain quality elastograms. The nonstationary variation in the performance of the strain filter due to frequency-dependent attenuation and lateral and elevational signal decorrelation is analyzed in this and the companion paper for the cross-correlation-based strain estimator. In this paper, we focus on the role of frequency-dependent attenuation in the performance of the strain estimator. The reduction in the signal-to-noise ratio (SNRs) in the RF signal, and the center frequency and bandwidth downshift with frequency-dependent attenuation, are incorporated into the strain filter formulation. Both linear and nonlinear frequency dependence of attenuation are theoretically analyzed. Monte-Carlo simulations are used to corroborate the theoretically predicted results. Experimental results illustrate the deterioration in the precision of the strain estimates with depth in a uniformly elastic phantom. Theoretical, simulation and experimental results indicate the importance of high SNRs in the RF signals, because the strain estimation sensitivity, elastographic SNRe and dynamic range deteriorate rapidly with a decrease in the SNRs. In addition, a shift in the strain filter toward higher strains is observed at large depths in tissue due to the center frequency downshift.
Techniques for estimating flood-depth frequency relations for streams in West Virginia
Wiley, J.B.
1987-01-01
Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth of 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state, where mean basin elevation also is significant. The equations are applicable to any unregulated site in West Virginia where values of independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps. (Author's abstract)
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. Using traditional estimation techniques, the spatial resolution of these estimates is limited by the constrained sample size. So far, without special refreshment samples, results have only been available for the larger, more populated federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but are also subject to greater statistical uncertainties because of their model assumptions. In the present article, exemplary regionalized results for the self-rated health status of respondents, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012, are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when using different samples. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.
1992-05-01
regression analysis. The strength of any one variable can be estimated along with the strength of the entire model in explaining the variance of percent... applicable a set of damage functions is to a particular situation. Sometimes depth- damage functions are embedded in computer programs which calculate...functions. Chapter Six concludes with recommended policies on the development and application of depth-damage functions. 5 6 CHAPTER TWO CONSTRUCTION OF
Distribution and depth of bottom-simulating reflectors in the Nankai subduction margin.
Ohde, Akihiro; Otsuka, Hironori; Kioka, Arata; Ashi, Juichiro
2018-01-01
Surface heat flow has been observed to be highly variable in the Nankai subduction margin. This study presents an investigation of local anomalies in surface heat flows on the undulating seafloor in the Nankai subduction margin. We estimate the heat flows from bottom-simulating reflectors (BSRs) marking the lower boundaries of the methane hydrate stability zone and evaluate topographic effects on heat flow via two-dimensional thermal modeling. BSRs have been used to estimate heat flows based on the known stability characteristics of methane hydrates under low-temperature and high-pressure conditions. First, we generate an extensive map of the distribution and subseafloor depths of the BSRs in the Nankai subduction margin. We confirm that BSRs exist at the toe of the accretionary prism and the trough floor of the offshore Tokai region, where BSRs had previously been thought to be absent. Second, we calculate the BSR-derived heat flow and evaluate the associated errors. We conclude that the total uncertainty of the BSR-derived heat flow should be within 25%, considering allowable ranges in the P-wave velocity, which influences the time-to-depth conversion of the BSR position in seismic images, the resultant geothermal gradient, and thermal resistance. Finally, we model a two-dimensional thermal structure by comparing the temperatures at the observed BSR depths with the calculated temperatures at the same depths. The thermal modeling reveals that most local variations in BSR depth over the undulating seafloor can be explained by topographic effects. Those areas that cannot be explained by topographic effects can be mainly attributed to advective fluid flow, regional rapid sedimentation, or erosion. Our spatial distribution of heat flow data provides indispensable basic data for numerical studies of subduction zone modeling to evaluate margin parallel age dependencies of subducting plates.
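The BSR-derived heat-flow estimate described above rests on a one-dimensional conductive calculation; a minimal sketch, with hypothetical temperatures, depth, and conductivity, is:

```python
# Minimal one-dimensional sketch of a BSR-derived heat-flow estimate: the BSR
# temperature comes from the methane-hydrate phase boundary at the BSR depth,
# and q = k * dT/dz. All input values below are hypothetical.

def bsr_heat_flow(t_bsr_c, t_seafloor_c, z_bsr_m, k_w_mk):
    """Conductive heat flow (W/m^2) from seafloor and BSR temperatures."""
    geothermal_gradient = (t_bsr_c - t_seafloor_c) / z_bsr_m  # K/m
    return k_w_mk * geothermal_gradient

# e.g. a BSR 300 m below the seafloor, 2 C bottom water, 20 C at the BSR,
# and a bulk thermal conductivity of 1.0 W/(m K):
q = bsr_heat_flow(20.0, 2.0, 300.0, 1.0)
print(round(q * 1000.0, 1))  # -> 60.0 mW/m^2
```

The 25% uncertainty budget in the abstract enters through z_bsr_m (via the P-wave velocity used for time-to-depth conversion), the resulting gradient, and the thermal resistance.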
NASA Astrophysics Data System (ADS)
Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.
2009-12-01
We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at the Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides a constraint on model predictions. We analyze porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship Xover * (ρsed/ρice) = Xice, where Xover is the overburden thickness, ρsed is the sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. Both produce best estimates of approximately 1,000 meters of ice during the Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during the Last Glacial Maximum, due to the overprinting effect of the Last Glacial Maximum on previous ice advances.
Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
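The overburden-to-ice conversion given in the abstract can be checked with a few lines of code; the overburden and sediment density below are hypothetical examples, not values from the study:

```python
# The conversion stated in the abstract, Xice = Xover * (rho_sed / rho_ice),
# with rho_ice = 0.85 g/cm^3. The overburden and sediment density below are
# hypothetical examples, not values from the study.

RHO_ICE = 0.85  # g/cm^3, density of glacial ice used in the abstract

def ice_thickness(x_over_m, rho_sed_gcm3):
    """Equivalent ice thickness (m) from overburden thickness and sediment density."""
    return x_over_m * (rho_sed_gcm3 / RHO_ICE)

# 500 m of overburden with a bulk sediment density of 1.7 g/cm^3:
print(ice_thickness(500.0, 1.7))  # -> 1000.0 m of equivalent ice
```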
Determining Accuracy of Thermal Dissipation Methods-based Sap Flux in Japanese Cedar Trees
NASA Astrophysics Data System (ADS)
Su, Man-Ping; Shinohara, Yoshinori; Laplace, Sophie; Lin, Song-Jin; Kume, Tomonori
2017-04-01
The thermal dissipation method, a sap flux measurement technique that can estimate individual tree transpiration, has been widely used because of its low cost and uncomplicated operation. Although the method is widespread, its accuracy has recently been questioned because the tree species used in previous studies did not always suit Granier's empirical formula, owing to differences in wood characteristics. In Taiwan, Cryptomeria japonica (Japanese cedar) is one of the dominant species in mountainous areas, so quantifying the transpiration of Japanese cedar trees is indispensable for understanding water cycling there. However, no one has tested the accuracy of thermal dissipation-based sap flux for Japanese cedar trees in Taiwan. In this study we therefore conducted a calibration experiment using twelve Japanese cedar stem segments from six trees. By pumping water from the bottom of each segment to its top while collecting data from probes inserted into the segments, we compared sap flux densities calculated from actual water uptake (Fd_actual) and from the empirical formula (Fd_Granier). The exact sapwood area and sapwood depth of each sample were obtained by dyeing the segment with safranin stain solution. Our results showed that Fd_Granier underestimated Fd_actual by 39% across sap flux densities ranging from 10 to 150 cm3 m-2 s-1; after applying the sapwood-depth correction formula from Clearwater, Fd_Granier became accurate, underestimating Fd_actual by only 0.01%. However, for sap flux densities ranging from 10 to 50 cm3 m-2 s-1, which is similar to field data for Japanese cedar trees in a mountainous area of Taiwan, Fd_Granier underestimated Fd_actual by 51%, and by 26% with the Clearwater sapwood-depth correction.
These results suggest that sapwood depth significantly impacts the accuracy of the thermal dissipation method; hence, careful determination of sapwood depth is key to accurate transpiration estimates. This study also applies the derived results to long-term field data from the mountainous area in Taiwan.
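For reference, the Granier calibration and Clearwater sapwood-depth correction discussed above take the following standard forms; the probe temperature differences and sapwood fraction used here are hypothetical:

```python
# Standard forms of the Granier calibration and the Clearwater sapwood-depth
# correction (hypothetical probe temperatures and sapwood fraction):
#   K  = (dT_max - dT) / dT
#   Fd = 119 * K**1.231      (in 1e-6 m^3 m^-2 s^-1)
# Clearwater: if only a fraction `a` of the probe lies in conducting sapwood,
#   dT_sw = (dT - (1 - a) * dT_max) / a   is used in place of dT.

def granier_fd(dt, dt_max):
    k = (dt_max - dt) / dt
    return 119.0 * k ** 1.231

def clearwater_dt(dt, dt_max, a):
    return (dt - (1.0 - a) * dt_max) / a

dt, dt_max, a = 9.0, 10.0, 0.8
fd_raw = granier_fd(dt, dt_max)
fd_corrected = granier_fd(clearwater_dt(dt, dt_max, a), dt_max)
print(fd_raw < fd_corrected)  # -> True: the correction raises the estimate
```

This illustrates why underestimating sapwood depth (and hence the fraction `a`) biases uncorrected sap flux low, as the calibration experiment found.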
NASA Astrophysics Data System (ADS)
Houpert, Loïc; Testor, Pierre; Durrieu de Madron, Xavier; Somot, Samuel; D'Ortenzio, Fabrizio; Estournel, Claude; Lavigne, Héloïse
2014-05-01
We present a relatively high resolution Mediterranean climatology (0.5°x0.5°x12 months) of the seasonal thermocline based on a comprehensive collection of temperature profiles of the last 44 years (1969-2012). The database includes more than 190,000 profiles, merging CTD, XBT, profiling float, and glider observations. This data set is first used to describe the seasonal cycle of the mixed layer depth and of the seasonal thermocline over the whole Mediterranean on a monthly climatological basis. Our analysis discriminates several regions with coherent behaviors, in particular the deep water formation sites, characterized by significant differences in winter mixing intensity. The Heat Storage Rate (HSR) is calculated as the time rate of change of the heat content due to variations in the temperature integrated from the surface down to the base of the seasonal thermocline. The Heat Entrainment Rate (HER) is calculated as the time rate of change of the heat content due to the deepening of the thermocline base. We propose a new independent estimate of the seasonal cycle of the net surface heat flux (NHF), calculated on average over the Mediterranean Sea for the 1979-2011 period and based only on in-situ observations, using our new climatologies of HSR and HER combined with an existing climatology of the horizontal heat flux at the Gibraltar Strait. Although our observation-based estimate of the NHF agrees well with modeled NHF, some differences may be noticed during specific periods. Part of these differences may be explained by the high temporal and spatial variability of the mixed layer depth and of the seasonal thermocline, responsible for very localized heat transfer in the ocean.
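The HSR definition above can be sketched as a discrete column integral; the density, heat capacity, and temperature profiles below are illustrative assumptions:

```python
# Discrete sketch of the Heat Storage Rate (HSR): the time rate of change of
# heat content integrated from the surface to the thermocline base,
# HSR = d/dt [ rho * cp * integral T(z) dz ]. Profile values are illustrative.

RHO = 1025.0  # kg/m^3, typical seawater density
CP = 3985.0   # J/(kg K), typical seawater specific heat capacity

def heat_content(temps_c, dz_m):
    """Heat content (J/m^2) of a column sampled every dz_m metres."""
    return RHO * CP * sum(t * dz_m for t in temps_c)

def heat_storage_rate(temps_t0, temps_t1, dz_m, dt_s):
    """HSR (W/m^2) between two temperature profiles dt_s seconds apart."""
    return (heat_content(temps_t1, dz_m) - heat_content(temps_t0, dz_m)) / dt_s

# a 50 m layer warming uniformly by 1 C over one month:
hsr = heat_storage_rate([15.0] * 50, [16.0] * 50, 1.0, 30 * 86400)
print(round(hsr))  # -> 79 W/m^2
```

The HER of the abstract is the analogous term from deepening of the lower integration limit rather than from temperature change within the column.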
Increased depth-diameter ratios in the Medusae Fossae Formation deposits of Mars
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1993-01-01
Depth to diameter ratios for fresh impact craters on Mars are commonly cited as approximately 0.2 for simple craters and 0.1 for complex craters. Recent computation of depth-diameter ratios in the Amazonis-Memnonia region of Mars indicates that craters within the Medusae Fossae Formation deposits found in this region display greater depth-diameter ratios than expected for both simple and complex craters. Photoclinometric and shadow length techniques have been used to obtain depths of craters within the Amazonis-Memnonia region. Thirty-seven craters in the 2 to 29 km diameter range displaying fresh impact morphologies were identified in the area of study. This region includes the Amazonian aged upper and middle members of the Medusae Fossae Formation and Noachian aged cratered and hilly units. The Medusae Fossae Formation is characterized by extensive, flat to gently undulating deposits of controversial origin. These deposits appear to vary from friable to indurated. Early analysis of crater degradation in the Medusae Fossae region suggested that simple craters excavated to greater depths than expected based on the general depth-diameter relationships derived for Mars. However, too few craters were available in the initial analysis to estimate the actual depth-diameter ratios within this region. Although the analysis is continuing, we are now beginning to see a convergence toward specific values for the depth-diameter ratio depending on geologic unit.
Buell, Gary R.; Markewich, Helaine W.
2004-01-01
U.S. Geological Survey investigations of environmental controls on carbon cycling in soils and sediments of the Mississippi River Basin (MRB), an area of 3.3 x 106 square kilometers (km2), have produced an assessment tool for estimating the storage and inventory of soil organic carbon (SOC) by using soil-characterization data from Federal, State, academic, and literature sources. The methodology is based on the linkage of site-specific SOC data (pedon data) to the soil-association map units of the U.S. Department of Agriculture State Soil Geographic (STATSGO) and Soil Survey Geographic (SSURGO) digital soil databases in a geographic information system. The collective pedon database assembled from individual sources presently contains 7,321 pedon records representing 2,581 soil series. SOC storage, in kilograms per square meter (kg/m2), is calculated for each pedon at standard depth intervals from 0 to 10, 10 to 20, 20 to 50, and 50 to 100 centimeters. The site-specific storage estimates are then regionalized to produce national-scale (STATSGO) and county-scale (SSURGO) maps of SOC to a specified depth. Based on this methodology, the mean SOC storage for the top meter of mineral soil in the MRB is approximately 10 kg/m2, and the total inventory is approximately 32.3 Pg (1 petagram = 109 metric tons). This inventory is from 2.5 to 3 percent of the estimated global mineral SOC pool.
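The abstract does not spell out the storage formula; a common pedon-level calculation, shown here as an assumption with hypothetical layer values, is SOC (kg/m2) = bulk density (g/cm3) × organic-carbon mass fraction × thickness (cm) × 10:

```python
# A common pedon-level SOC storage calculation (not spelled out in the
# abstract; the formula and all layer values here are assumptions):
# SOC (kg/m^2) = bulk density (g/cm^3) * OC mass fraction * thickness (cm) * 10

def soc_storage(bulk_density_gcm3, oc_fraction, thickness_cm):
    """SOC storage (kg/m^2) for one depth interval."""
    return bulk_density_gcm3 * oc_fraction * thickness_cm * 10.0

# hypothetical pedon over the study's standard intervals 0-10, 10-20, 20-50,
# 50-100 cm, as (thickness_cm, bulk_density, oc_fraction):
intervals = [(10, 1.2, 0.030), (10, 1.3, 0.020), (30, 1.4, 0.010), (50, 1.5, 0.005)]
total = sum(soc_storage(bd, oc, th) for th, bd, oc in intervals)
print(round(total, 2))  # kg/m^2 in the top meter, same order as the 10 kg/m^2 mean
```

Regionalization then multiplies such per-pedon values by the mapped areas of the STATSGO/SSURGO soil-association units they are linked to.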
Stephenson, William J.; Odum, Jackson K.; McNamara, Daniel E.; Williams, Robert A.; Angster, Stephen J
2014-01-01
We characterize shear-wave velocity versus depth (Vs profile) at 16 portable seismograph sites through the epicentral region of the 2011 Mw 5.8 Mineral (Virginia, USA) earthquake to investigate ground-motion site effects in the area. We used a multimethod acquisition and analysis approach, where active-source horizontal shear (SH) wave reflection and refraction as well as active-source multichannel analysis of surface waves (MASW) and passive-source refraction microtremor (ReMi) Rayleigh wave dispersion were interpreted separately. The time-averaged shear-wave velocity to a depth of 30 m (Vs30), interpreted bedrock depth, and site resonant frequency were estimated from the best-fit Vs profile of each method at each location for analysis. Using the median Vs30 value (270–715 m/s) as representative of a given site, we estimate that all 16 sites are National Earthquake Hazards Reduction Program (NEHRP) site class C or D. Based on a comparison of simplified mapped surface geology to median Vs30 at our sites, we do not see clear evidence for using surface geologic units as a proxy for Vs30 in the epicentral region, although this may primarily be because the units are similar in age (Paleozoic) and may have similar bulk seismic properties. We compare resonant frequencies calculated from ambient noise horizontal:vertical spectral ratios (HVSR) at available sites to predicted site frequencies (generally between 1.9 and 7.6 Hz) derived from the median bedrock depth and average Vs to bedrock. Robust linear regression of HVSR against both site frequency and Vs30 demonstrates moderate correlation to each, and thus both appear to be generally representative of site response in this region. Based on Kendall tau rank correlation testing, we find that Vs30 and the site frequency calculated from average Vs to median interpreted bedrock depth can both be considered reliable predictors of weak-motion site effects in the epicentral region.
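The time-averaged Vs30 used above for NEHRP classification is a travel-time (harmonic) average rather than an arithmetic one; a minimal sketch with a hypothetical layer model:

```python
# Vs30 is the travel-time (harmonic) average of shear-wave velocity over the
# top 30 m: Vs30 = 30 / sum(h_i / Vs_i). The layer model below is hypothetical.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) tuples summing to 30 m."""
    assert abs(sum(h for h, _ in layers) - 30.0) < 1e-9
    return 30.0 / sum(h / v for h, v in layers)

site = [(5.0, 200.0), (10.0, 350.0), (15.0, 600.0)]
print(round(vs30(site)))  # -> 382 m/s, NEHRP site class C (360-760 m/s)
```

The harmonic form weights slow near-surface layers heavily, which is why thin soft layers can pull a site from class C into class D.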
Lithospheric bending at subduction zones based on depth soundings and satellite gravity
NASA Technical Reports Server (NTRS)
Levitt, Daniel A.; Sandwell, David T.
1995-01-01
A global study of trench flexure was performed by simultaneously modeling 117 bathymetric profiles (original depth soundings) and satellite-derived gravity profiles. A thin, elastic plate flexure model was fit to each bathymetry/gravity profile by minimization of the L(sub 1) norm. The six model parameters were regional depth, regional gravity, trench axis location, flexural wavelength, flexural amplitude, and lithospheric density. A regional tilt parameter was not required after correcting for age-related trend using a new high-resolution age map. Estimates of the density parameter confirm that most outer rises are uncompensated. We find that flexural wavelength is not an accurate estimate of plate thickness because of the high curvatures observed at a majority of trenches. As in previous studies, we find that the gravity data favor a longer-wavelength flexure than the bathymetry data. A joint topography-gravity modeling scheme and fit criteria are used to limit acceptable parameter values to models for which topography and gravity yield consistent results. Even after the elastic thicknesses are converted to mechanical thicknesses using the yield strength envelope model, residual scatter obscures the systematic increase of mechanical thickness with age; perhaps this reflects the combination of uncertainties inherent in estimating flexural wavelength, such as extreme inelastic bending and accumulated thermoelastic stress. The bending moment needed to support the trench and outer rise topography increases by a factor of 10 as lithospheric age increases from 20 to 150 Ma; this reflects the increase in saturation bending moment that the lithosphere can maintain. Using a stiff, dry-olivine rheology, we find that the lithosphere of the GDH1 thermal model (Stein and Stein, 1992) is too hot and thin to maintain the observed bending moments. Moreover, the regional depth seaward of the oldest trenches (approximately 150 Ma) exceeds the GDH1 model depths by about 400 m.
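The link between flexural wavelength and plate thickness discussed above follows from thin-plate theory; a sketch using standard textbook elastic constants (all assumed, not values from the study):

```python
# Thin elastic plate relations behind the flexural-wavelength fit:
#   D     = E * Te**3 / (12 * (1 - nu**2))      (flexural rigidity)
#   alpha = (4 * D / (drho * g)) ** 0.25        (flexural parameter)
# The elastic constants and densities below are common textbook assumptions.

def flexural_parameter(te_m, e=6.5e10, nu=0.25, drho=2300.0, g=9.81):
    """Flexural parameter alpha (m) for elastic thickness te_m (m)."""
    d = e * te_m ** 3 / (12.0 * (1.0 - nu ** 2))
    return (4.0 * d / (drho * g)) ** 0.25

# a 30 km elastic thickness gives alpha of roughly 70 km:
print(round(flexural_parameter(30e3) / 1e3, 1))  # km
```

Because alpha scales as Te^(3/4), an uncertain flexural wavelength maps into a strongly amplified uncertainty in plate thickness, consistent with the scatter the authors report.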
Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Zhang, Hanqing
2014-05-01
Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water in river systems has been identified as a key problem in preserving the ecological health of river systems, but also as a threat to our built environment and critical infrastructure worldwide. As an example, scour has been estimated to be a major cause of bridge failure. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that can offer fast and reliable predictions is still lacking. Most of the existing formulas for predicting bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of various complexity are sequentially built in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five variables, namely the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50) and the skew to flow. Simulations are conducted with data groups (bed material type, pier type and shape) and different numbers of input variables to produce reduced-complexity, easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is amongst the most relevant input parameters for the estimation.
Haro, Alexander J.; Mulligan, Kevin; Suro, Thomas P.; Noreika, John; McHugh, Amy
2017-10-16
Recent efforts to advance river connectivity for the Millstone River watershed in New Jersey have led to the evaluation of a low-flow gauging weir that spans the full width of the river. A desktop modeling exercise was used to evaluate the potential ability of three anadromous fish species (Alosa sapidissima [American shad], Alosa pseudoharengus [alewife], and Alosa aestivalis [blueback herring]) to pass upstream over the U.S. Geological Survey Blackwells Mills streamgage (01402000) and weir on the Millstone River, New Jersey, at various streamflows, and to estimate the probability that the weir will be passable during the spring migratory season. Based on data from daily fishway counts downstream from the Blackwells Mills streamgage and weir between 1996 and 2014, the general migratory period was defined as April 14 to May 28. Recorded water levels and flow data were used to theoretically estimate water depths and velocities over the weir, as well as flow exceedances occurring during the migratory period. Results indicate that the weir is a potential depth barrier to fish passage when streamflows are below 200 cubic feet per second, using a 1-body-depth criterion for American shad (the largest fish among the target species). Streamflows in that range occur on average 35 percent of the time during the migratory period. Increasing the depth criterion to 2 body depths causes the weir to become a possible barrier to passage when flows are below 400 cubic feet per second. Streamflows in that range occur on average 73 percent of the time during the migration season. Average cross-sectional velocities at several points along the weir do not seem to be limiting to fish migration, but maximum theoretical velocities estimated without friction loss over the face of the weir could be potentially limiting.
NASA Astrophysics Data System (ADS)
Abbaszadeh Afshar, Farideh; Ayoubi, Shamsollah; Besalatpour, Ali Asghar; Khademi, Hossein; Castrignano, Annamaria
2016-03-01
This study was conducted to estimate soil clay content at two depths using geophysical techniques (Ground Penetrating Radar, GPR, and Electromagnetic Induction, EMI) and ancillary variables (remote sensing and topographic data) in an arid region of southeastern Iran. GPR measurements were performed along ten transects of 100 m length with a line spacing of 10 m, and EMI measurements were made every 10 m on the same transects at six sites. Ten soil cores were sampled randomly at each site, soil samples were taken from the depths of 0-20 and 20-40 cm, and the clay fraction of each of the sixty soil samples was measured in the laboratory. Clay content was predicted using three different sets of properties, including geophysical data, ancillary data, and a combination of both, as inputs to multiple linear regression (MLR) and the decision tree-based algorithm of Chi-Squared Automatic Interaction Detection (CHAID). The results of the CHAID and MLR models with all combined data showed that geophysical data were the most important variables for the prediction of clay content at both depths in the study area. The proposed MLR model, using the combined data, could explain only 44% and 31% of the total variability of clay content in the 0-20 and 20-40 cm depths, respectively. The coefficient of determination (R2) values for clay content prediction using the constructed CHAID model with the combined data were 0.82 and 0.76 for the 0-20 and 20-40 cm depths, respectively. CHAID models therefore showed a greater potential for predicting soil clay content from geophysical and ancillary data, while traditional regression methods (i.e., the MLR models) did not perform as well. Overall, the results may encourage researchers to use georeferenced GPR and EMI data as ancillary variables and the CHAID algorithm to improve the estimation of soil clay content.
NASA Astrophysics Data System (ADS)
Ibraheem, Ismael M.; Elawadi, Eslam A.; El-Qady, Gad M.
2018-03-01
The Wadi El Natrun area in Egypt is located west of the Nile Delta on both sides of the Cairo-Alexandria desert road, between latitudes 30°00′ and 30°40′N and longitudes 29°40′ and 30°40′E. The name refers to the NW-SE trending depression located in the area, which contains lakes that produce natron salt. Although the area is promising for oil and gas exploration as well as agricultural projects, geophysical studies carried out in the area are limited to the regional seismic surveys accomplished by oil companies. This study presents the interpretation of airborne magnetic data to map the structural architecture and the depth to the basement of the study area. The interpretation was facilitated by applying different data enhancement and processing techniques, including filtering (regional-residual separation), derivatives, and depth estimation using spectral analysis and Euler deconvolution. The results were refined using 2-D forward modeling along three profiles. Based on the depth estimation techniques, the estimated depth to the basement surface ranges from 2.25 km to 5.43 km, while the two-dimensional forward modeling indicates that the depth of the basement surface ranges from 2.2 km to 4.8 km. The dominant tectonic trends in the study area at deep levels are NW (Suez trend), NNW, NE, and ENE (Syrian Arc System trend). The older ENE trend, which dominates the northwestern desert, is overprinted in the study area by relatively recent NW and NE trends, whereas the tectonic trends at shallow levels are NW, ENE, NNE (Aqaba trend), and NE. The predominant trend for both deep and shallow structures is the NW trend. The results of this study can be used to better understand deep-seated basement structures and to support decisions regarding the development of agriculture, industrial areas, and oil and gas exploration in northern Egypt.
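The spectral-analysis depth estimation mentioned above rests on a standard relation: for an ensemble of magnetic sources at depth h, the radially averaged log power spectrum falls off linearly with wavenumber, ln P(k) ≈ const − 2kh (k in rad/km), so h can be recovered from the fitted slope. The sketch below uses synthetic numbers, not the Wadi El Natrun survey data.

```python
import numpy as np

# Depth-to-source from the power-spectrum slope: ln P(k) = c - 2*k*h,
# hence h = -slope / 2. All values below are illustrative.
h_true = 4.0                      # assumed basement depth, km
k = np.linspace(0.05, 0.5, 40)    # wavenumber band, rad/km
rng = np.random.default_rng(1)
ln_power = 10.0 - 2.0 * k * h_true + rng.normal(0, 0.05, k.size)

slope, _ = np.polyfit(k, ln_power, 1)  # linear fit of ln P vs k
h_est = -slope / 2.0
print(f"estimated depth: {h_est:.2f} km")  # close to the assumed 4 km
```

In practice the spectrum is computed from the gridded magnetic data and different wavenumber bands are fitted separately to separate deep (basement) from shallow sources.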
NASA Astrophysics Data System (ADS)
Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode
2018-06-01
The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on the bubble characteristics and the ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles was performed to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution and that the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of the bubble characteristics as input, the predicted cavitation depth agrees well with observations, except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in both bubble volume and bubble size lead to higher attenuation by cavitation bubbles and hence a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.
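The coupling described above (intensity attenuated by bubbles whose volume fraction itself follows a tanh of the intensity) can be sketched numerically. This is not the authors' model; all constants, and the choice of the cavitation threshold as the cutoff, are assumed for illustration.

```python
import numpy as np

# Illustrative sketch: march the ultrasonic intensity away from the
# sonotrode, attenuating it by a coefficient proportional to the local
# bubble volume fraction; cavitation depth = distance at which the
# intensity falls below the cavitation threshold.
I0 = 100.0             # applied intensity at the sonotrode, W/cm^2 (assumed)
I_th = 10.0            # cavitation threshold intensity, W/cm^2 (assumed)
beta_max = 0.01        # saturation bubble volume fraction (assumed)
alpha_per_beta = 50.0  # attenuation per unit volume fraction, 1/cm (assumed)
dx = 0.01              # spatial step, cm

x, I = 0.0, I0
while I > I_th:
    beta = beta_max * np.tanh(I / I0)   # tanh law for bubble volume fraction
    alpha = alpha_per_beta * beta       # local attenuation coefficient, 1/cm
    I *= np.exp(-alpha * dx)            # exponential decay over one step
    x += dx

print(f"cavitation depth ~ {x:.2f} cm")
```

Raising `beta_max` (more bubbles) increases the attenuation and shrinks the computed depth, consistent with the trend reported in the abstract.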
NASA Astrophysics Data System (ADS)
Bártová, H.; Trojek, T.; Johnová, K.
2017-11-01
This article describes a method for estimating the depth distribution of radionuclides in a material with gamma-ray spectrometry, and for identifying a layered structure of a material with X-ray fluorescence analysis. The method is based on measuring the ratio of two gamma-ray or X-ray lines of a radionuclide or a chemical element, respectively. Its principle lies in the different attenuation coefficients of these two lines in the measured material. The main aim of this investigation was to show how the detected ratio of these two lines depends on the depth distribution of an analyte and, in particular, on the density and chemical composition of the measured materials. Several calculation arrangements were made and numerous Monte Carlo simulations with the MCNP (Monte Carlo N-Particle) code (Briesmeister, 2000) were performed to answer these questions. For X-ray spectrometry, the calculated Kα/Kβ diagrams were found to be almost independent of matrix density and composition. Thanks to this property, it would be possible to draw only one Kα/Kβ diagram for an element whose depth distribution is examined.
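The line-ratio principle admits a simple closed form for a point source at a single depth: each line is attenuated as exp(−μd), so the detected ratio is the intrinsic ratio scaled by exp(−(μ1−μ2)d), which can be inverted for d. The attenuation coefficients and intrinsic ratio below are assumed illustrative numbers, not values from the article.

```python
import math

mu1, mu2 = 0.20, 0.12   # linear attenuation coefficients of the two lines, 1/cm
R0 = 1.5                # intrinsic (unattenuated) line intensity ratio

def measured_ratio(depth_cm):
    """Detected ratio of the two lines for a source at the given depth."""
    return R0 * math.exp(-(mu1 - mu2) * depth_cm)

def depth_from_ratio(R):
    """Invert the measured ratio to recover the source depth."""
    return -math.log(R / R0) / (mu1 - mu2)

d = 3.0
print(depth_from_ratio(measured_ratio(d)))  # recovers 3.0
```

For an extended depth distribution the single exponential is replaced by an integral over depth, which is why the article resorts to Monte Carlo simulation rather than an analytic inversion.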
NASA Astrophysics Data System (ADS)
Cochachin, Alejo; Huggel, Christian; Salazar, Cesar; Haeberli, Wilfried; Frey, Holger
2015-04-01
Over timescales of hundreds to thousands of years, ice masses in mountains have eroded bedrock and subglacial sediment, forming overdeepenings and large moraine dams that now serve as basins for glacial lakes. Satellite-based studies found a total of 8355 glacial lakes in Peru, of which 830 were observed in the Cordillera Blanca. Some of them have caused major disasters due to glacial lake outburst floods in the past decades. On the other hand, in view of shrinking glaciers, changing water resources, and the formation of new lakes, glacial lakes could serve as water reservoirs in the future. Here we present unprecedented bathymetric studies of 124 glacial lakes in the Cordillera Blanca, Huallanca, Huayhuash, and Raura in the regions of Ancash, Huanuco, and Lima. Measurements were carried out using a boat equipped with GPS, a total station, and an echo sounder to measure lake depth. AutoCAD Civil 3D Land and ArcGIS were used to process the data, generate digital topographies of the lake bathymetries, and analyze parameters such as lake area, length, width, depth, and volume. Based on these data, we derived empirical equations relating mean depth to (1) area, (2) maximum length, and (3) maximum width. We then applied these three equations to all 830 glacial lakes of the Cordillera Blanca to estimate their volumes. Finally, we used three relations from the literature to assess the peak discharge of potential lake outburst floods based on lake volumes, resulting in 3 x 3 peak discharge estimates. In terms of lake topography and geomorphology, the results indicate that the maximum depth is located in the central part of bedrock lakes and in the back part of lakes in moraine material. The best correlation is found between mean depth and maximum width; however, all three empirical relations show a large spread, reflecting the wide range of natural lake bathymetries.
The volumes of the 124 lakes with bathymetries amount to 0.9 km3, while the volume of all glacial lakes of the Cordillera Blanca ranges between 1.15 and 1.29 km3. The small difference in volume between the large lake sample and the smaller, bathymetrically surveyed sample is due to the large size of the measured lakes. The different distributions for lake volume and peak discharge indicate the range of variability in such estimates and provide valuable first-order information for management and adaptation efforts in the fields of water resources and flood prevention.
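The empirical-scaling approach above can be sketched as follows: fit mean depth against area on the surveyed lakes (a power law is a common choice for such relations), then apply the fit to lakes for which only the area is known. The data and fitted coefficients below are synthetic, not the published Cordillera Blanca relations.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "surveyed" sample: 124 lakes with area (km^2) and mean depth (m)
area = rng.uniform(0.01, 1.0, 124)
mean_depth = 25 * area**0.4 * rng.lognormal(0, 0.15, 124)  # assumed scaling

# Power-law fit in log space: log(depth) = log(a) + b * log(area)
b, log_a = np.polyfit(np.log(area), np.log(mean_depth), 1)
a = np.exp(log_a)

def lake_volume_km3(area_km2):
    """Volume estimate: area times mean depth from the fitted relation."""
    depth_m = a * area_km2**b
    return area_km2 * depth_m / 1000.0  # km^2 * m -> km^3

print(f"fit: mean depth = {a:.1f} * area^{b:.2f}")
print(f"volume of a 0.5 km^2 lake: {lake_volume_km3(0.5):.4f} km^3")
```

The large spread noted in the abstract corresponds to the scatter term in the synthetic data: individual lakes deviate substantially from any single fitted curve, so such volumes are first-order estimates only.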
Depth of interaction decoding of a continuous crystal detector module.
Ling, T; Lewellen, T K; Miyaoka, R S
2007-04-21
We present a clustering method to extract depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small-animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUTs) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses an ML-based LUT search algorithm and two-dimensional mean-variance LUTs of the light response of each photomultiplier channel with respect to different gamma-ray interaction positions, the position of interaction and the DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two- and four-DOI-region clustering was applied to the simulated data, and two DOI regions were used for the experimental data. The misclassification rate for the simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement in the detector's spatial resolution, especially in the corner regions of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous-crystal detector without requiring any modifications to the crystal or detector front-end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.
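The core of the SBP search can be sketched as a maximum-likelihood lookup: each LUT entry stores the mean and variance of every photomultiplier channel's response, and an event is assigned to the entry maximizing the Gaussian log-likelihood of the observed signals. The LUT contents and dimensions below are synthetic; this is a simplified illustration, not the cMiCE implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_entries, n_channels = 100, 64  # candidate (x, y, DOI) entries x PMT channels
lut_mean = rng.uniform(10, 100, (n_entries, n_channels))  # mean response
lut_var = rng.uniform(4, 25, (n_entries, n_channels))     # response variance

def sbp_estimate(signals):
    """Return the LUT index with the highest Gaussian log-likelihood."""
    ll = -0.5 * np.sum((signals - lut_mean) ** 2 / lut_var
                       + np.log(lut_var), axis=1)
    return int(np.argmax(ll))

# Simulate an event generated from LUT entry 42 and decode it
true_idx = 42
event = lut_mean[true_idx] + rng.normal(0, np.sqrt(lut_var[true_idx]))
print("decoded LUT index:", sbp_estimate(event))
```

Building separate LUTs per DOI region, as the clustering method does, lets the same search return the depth estimate together with the lateral position.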
NASA Astrophysics Data System (ADS)
Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan
2018-02-01
Knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30, and 100 cm depths under different climate conditions across Iran. The results were compared to those obtained from a more classical multiple linear regression (MLR) model. The sensitivity to the input combinations and the effect of periodicity were also investigated. The climatic data used as model inputs were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations, Kerman, Ahvaz, Tabriz, Saghez, and Rasht, located in hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions, respectively. According to the results, both the MLR and SVR models performed quite well at the surface layer, i.e., at 10 cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially at 100 cm depth. Moreover, both models performed better under humid climate conditions than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.
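The SVR-versus-MLR setup can be sketched with scikit-learn on synthetic daily data. Encoding the day-of-year as sine/cosine terms is one plausible reading of the "periodicity component"; the input variables and the shape of the synthetic soil-temperature signal are assumptions for illustration, not the station records.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
days = np.arange(730)           # two synthetic years of daily records
doy = days % 365
air_t = 15 + 12 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, days.size)
humidity = rng.uniform(30, 90, days.size)
# Deep-soil temperature: damped, phase-lagged annual cycle plus noise
soil_t = 15 + 6 * np.sin(2 * np.pi * (doy - 30) / 365) \
    + rng.normal(0, 0.5, days.size)

X = np.column_stack([air_t, humidity,
                     np.sin(2 * np.pi * doy / 365),   # periodicity terms
                     np.cos(2 * np.pi * doy / 365)])

mlr = LinearRegression().fit(X, soil_t)
svr = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X, soil_t)

print("MLR R2:", round(r2_score(soil_t, mlr.predict(X)), 3))
print("SVR R2:", round(r2_score(soil_t, svr.predict(X)), 3))
```

Dropping the two periodicity columns from `X` and refitting shows how much of the deep-layer signal the seasonal terms carry, which is the effect the study quantifies.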